Data Engineer (Databricks)
About the Role

We are seeking a Databricks Data Engineer with strong expertise in designing and optimising large-scale data engineering solutions within the Databricks Data Intelligence Platform. This role is ideal for someone passionate about building high-performance data pipelines and ensuring robust data governance across modern cloud environments.

Key Responsibilities

- Design, build, and maintain scalable data pipelines using Databricks Notebooks, Jobs, and Workflows for both batch and streaming data.
- Optimise Spark and Delta Lake performance through efficient cluster configuration, adaptive query execution, and caching strategies.
- Conduct performance testing and cluster tuning to ensure cost-efficient, high-performing workloads.
- Implement data quality, lineage tracking, and access control policies aligned with Databricks Unity Catalog and governance best practices.
- Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles.
- Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned data management.
- Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions.

What We're Looking For

- Proven experience with Databricks, PySpark, and Delta Lake.
- Strong understanding of workflow orchestration, performance optimisation, and data governance.
- Hands-on experience with Azure cloud services.
- Ability to work in a fast-paced ...