Azure Data Engineer 17668
Veritaz AB
📍 Sweden
⏰ Full-time
📋 Fixed-term employment
🗓 Apply by 10 June 2026
About the job
Veritaz is a leading IT staffing solutions provider in Sweden, committed to advancing individual careers and helping employers find the right talent. With a proven track record of successful partnerships with top companies, we have rapidly grown our presence in the USA, Europe, and Sweden as a dependable, trusted resource in the IT industry.
Assignment Description
We are looking for an experienced Azure Databricks Data Engineer to support the development and delivery of production-grade data platforms within modern cloud-native environments.
What You Will Work On
Develop and maintain production-grade Azure Databricks data platforms
Build scalable batch and streaming data pipelines
Design and implement bronze/silver/gold data architectures
Develop reusable transformation frameworks and data processing workflows
Work with Apache Spark, PySpark/Scala, and Spark SQL solutions
Implement and optimize Delta Lake capabilities including schema evolution and incremental processing
Configure governance, permissions, lineage, and auditability using Unity Catalog
Support platform observability, monitoring, and operational optimization
Perform Spark performance tuning and optimization activities
Work with Azure integrations including ADLS, ADF/Synapse, Key Vault, and Entra ID
Contribute to CI/CD pipelines and deployment automation processes
Collaborate with cross-functional teams within enterprise data platform initiatives
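The bronze/silver/gold (medallion) layering mentioned above can be sketched in plain Python, without a Spark cluster, to show the staged-refinement idea: land raw data untouched, then validate and type it, then aggregate it for consumption. All data and function names here are illustrative, not part of the role description:

```python
from datetime import datetime, timezone

# Raw events as they might arrive from a source system (illustrative data).
RAW_EVENTS = [
    {"user": "alice", "amount": "10.5", "ts": "2026-01-01"},
    {"user": "bob", "amount": "oops", "ts": "2026-01-01"},   # bad record
    {"user": "alice", "amount": "4.5", "ts": "2026-01-02"},
]

def to_bronze(raw):
    """Bronze: land data as-is, adding only ingestion metadata."""
    ingested_at = datetime.now(timezone.utc).isoformat()
    return [{**rec, "_ingested_at": ingested_at} for rec in raw]

def to_silver(bronze):
    """Silver: validate and cast types; drop records that fail parsing."""
    silver = []
    for rec in bronze:
        try:
            silver.append({"user": rec["user"],
                           "amount": float(rec["amount"]),
                           "ts": rec["ts"]})
        except (ValueError, KeyError):
            continue  # in a real platform, bad records go to a quarantine table
    return silver

def to_gold(silver):
    """Gold: business-level aggregate, e.g. total amount per user."""
    totals = {}
    for rec in silver:
        totals[rec["user"]] = totals.get(rec["user"], 0.0) + rec["amount"]
    return totals

gold = to_gold(to_silver(to_bronze(RAW_EVENTS)))
print(gold)  # {'alice': 15.0}
```

In an actual Azure Databricks platform each stage would be a Delta table and the transformations would run as Spark jobs, but the separation of concerns is the same.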
What You Bring
Strong experience in Azure Databricks engineering
Proven experience delivering production-grade data platforms
Advanced expertise in Apache Spark technologies
Strong experience with PySpark and/or Scala
Solid understanding of Spark SQL and distributed data processing
Deep expertise in Delta Lake including:
ACID transactions
Partitioning strategies
Schema evolution
OPTIMIZE/ZORDER
Incremental and CDC patterns
Experience building scalable batch and streaming pipelines
Strong knowledge of bronze/silver/gold data architectures
Experience with governance and security in multi-team environments
Knowledge of Unity Catalog permissions, lineage, and auditability
Experience with service principals and managed identities
Strong understanding of Spark performance tuning and observability
Experience integrating Azure services such as ADLS, ADF/Synapse, Key Vault, and Entra ID
Experience with CI/CD practices using Azure DevOps or GitHub Actions
Strong analytical and problem-solving skills
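The incremental and CDC patterns listed above boil down to applying a batch of keyed changes (inserts, updates, deletes) to a target table. A minimal plain-Python sketch of that upsert logic, as a conceptual analogue of Delta Lake's MERGE INTO (function and field names are illustrative):

```python
def merge_upsert(target, changes, key="id"):
    """Apply a CDC batch to a target table: insert new keys, update
    existing ones, and delete rows flagged with _op == 'delete'.
    Conceptual analogue of a Delta Lake MERGE, not a real Spark API."""
    by_key = {row[key]: row for row in target}
    for change in changes:
        if change.get("_op") == "delete":
            by_key.pop(change[key], None)
        else:
            # Merge the change into the existing row (or create a new one),
            # dropping the CDC operation marker.
            row = {k: v for k, v in change.items() if k != "_op"}
            by_key[change[key]] = {**by_key.get(change[key], {}), **row}
    return list(by_key.values())

target = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
changes = [
    {"id": 2, "name": "bobby", "_op": "update"},
    {"id": 3, "name": "carol", "_op": "insert"},
    {"id": 1, "_op": "delete"},
]
print(merge_upsert(target, changes))
# [{'id': 2, 'name': 'bobby'}, {'id': 3, 'name': 'carol'}]
```

On Databricks the same pattern is expressed declaratively (match on the key, update when matched, insert when not, delete on the flag), with Delta Lake's ACID transactions guaranteeing the batch applies atomically.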