Overview
Databricks Data Engineer (f/m/d) role at Axpo Group
Axpo is driven by a single purpose – to enable a sustainable future through innovative energy solutions. As Switzerland's largest producer of renewable energy and a leading international energy trader, Axpo leverages cutting-edge technologies to serve customers in over 30 countries. We thrive on collaboration, innovation, and a passion for driving impactful change.
What You Will Do
- Be a core contributor to Axpo’s data transformation journey, using Databricks as our primary data and analytics platform.
- Design, develop, and operate scalable data pipelines on Databricks, integrating data from a wide variety of sources (structured, semi-structured, unstructured); a minimal sketch of such a pipeline step follows this list.
- Leverage Apache Spark, Delta Lake, and Unity Catalog to ensure high-quality, secure, and reliable data operations.
- Apply best practices in CI/CD, DevOps, orchestration (e.g., Dagster, Airflow), and infrastructure as code (Terraform).
- Build reusable frameworks and libraries to accelerate ingestion, transformation, and data serving across the business.
- Work closely with data scientists, analysts, and product teams to create performant and cost-efficient analytics solutions.
- Drive the adoption of Databricks Lakehouse architecture and ensure that data pipelines conform to governance, access, and documentation standards defined by the CDAO office.
- Ensure compliance with data privacy and protection standards (e.g., GDPR).
- Actively contribute to the continuous improvement of our platform in terms of scalability, performance, and usability.
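To give a flavor of the pipeline work described above, here is a minimal sketch of a batch ingestion step with PySpark and Delta Lake. The source path, the trade_id key, and the main.trading.trades_curated table name are illustrative assumptions, not details of Axpo's actual platform.

```python
# Minimal sketch of a batch ingestion step into Delta Lake on Databricks.
# The source path, the trade_id key, and the target table name are
# illustrative assumptions, not details of the actual platform.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

# Read semi-structured JSON landed in the lake (hypothetical ADLS path).
raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/trades/")

# Light curation: deduplicate on a business key and stamp ingestion time.
curated = (
    raw.dropDuplicates(["trade_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Append to a governed Delta table (Unity Catalog three-level namespace).
(curated.write
    .format("delta")
    .mode("append")
    .saveAsTable("main.trading.trades_curated"))
```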
What You Bring & Who You Are
- A university degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Strong experience with Databricks, Spark, Delta Lake, and SQL/Scala/Python.
- Proficiency in dbt, ideally with experience integrating it into Databricks workflows.
- Familiarity with Azure cloud services (Data Lake, Blob Storage, Synapse, etc.).
- Hands-on experience with Git-based workflows, CI/CD pipelines, and data orchestration tools like Dagster and Airflow.
- Deep understanding of data modeling, streaming & batch processing, and cost-efficient architecture.
- Ability to work with high-volume, heterogeneous data and APIs in production-grade environments.
- Experience working within enterprise data governance frameworks, and implementing metadata management and observability practices in alignment with governance guidance.
- Strong interpersonal and communication skills, with a collaborative, solution-oriented mindset.
- Fluency in English.
Technologies You’ll Work With
- Core: Databricks, Spark, Delta Lake, Python, dbt, SQL
- Cloud: Microsoft Azure (Data Lake, Synapse, Storage)
- DevOps: Bitbucket/GitHub, Azure DevOps, CI/CD, Terraform
- Orchestration & Observability: Dagster, Airflow, Grafana, Datadog, New Relic (see the orchestration sketch after this list)
- Visualization: Power BI
- Other: Confluence, Docker, Linux
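On the orchestration side of this stack, a minimal Dagster asset might look like the sketch below; the asset name and its placeholder body are assumptions for illustration, not part of the role description.

```python
# Minimal sketch of a Dagster asset that could wrap the ingestion step above.
# The asset name and its placeholder body are assumptions for illustration;
# a real deployment would trigger a Databricks job or notebook here.
from dagster import Definitions, asset, materialize

@asset
def trades_curated() -> str:
    # Placeholder: run the Databricks ingestion and return the fully
    # qualified name of the table it maintains.
    return "main.trading.trades_curated"

defs = Definitions(assets=[trades_curated])

if __name__ == "__main__":
    # Local smoke test: materialize the asset once, in process.
    materialize([trades_curated])
```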
Nice to Have
- Experience with Unity Catalog and Databricks governance frameworks
- Exposure to machine learning workflows on Databricks (e.g., MLflow)
- Knowledge of Microsoft Fabric or Snowflake
- Experience with low-code analytics tools like Dataiku
- Familiarity with PostgreSQL or MongoDB
- Front-end development skills (e.g., for data product interfaces)
Benefits
- Working hours: flexible hours with 60% remote and 40% at our offices in Madrid, Torre Europa
- Meal allowances, with the option to use them for public transportation or childcare
- Internet compensation to cover home internet costs
- Microsoft ESI Certifications: access to the Enterprise Skills Initiative program
- Training courses and learning resources to support professional growth
- Gym coverage with substantial support for staying active
- Health insurance with extended options for dependents
Seniority level
Not Applicable
Employment type
Full-time
Job function
Information Technology
Industries
Utilities