Databricks Data Engineer
Who We Are
Axpo is driven by a single purpose: to enable a sustainable future through innovative energy solutions. As Switzerland's largest producer of renewable energy and a leading international energy trader, Axpo leverages cutting-edge technologies to serve customers in over 30 countries. We thrive on collaboration, innovation, and a passion for driving impactful change.
About the Team
You will report directly to our Head of Development and join a team of highly committed IT data platform engineers with a shared goal: unlocking data and enabling self-service data analytics capabilities across Axpo. Our decentralized approach means close collaboration with various business hubs across Europe, ensuring local needs shape our global platform. You'll find a mindset committed to innovation, collaboration, and excellence.
What You Will Do
As a Databricks Data Engineer, you will:
- Be a core contributor in Axpo's data transformation journey, using Databricks as our primary data and analytics platform.
- Design, develop, and operate scalable data pipelines on Databricks, integrating data from a wide variety of sources (structured, semi-structured, unstructured); a minimal illustrative sketch follows this list.
- Leverage Apache Spark, Delta Lake, and Unity Catalog to ensure high-quality, secure, and reliable data operations.
- Apply best practices in CI/CD, DevOps, orchestration (e.g., Dagster, Airflow), and infrastructure-as-code (Terraform).
- Build reusable frameworks and libraries to accelerate ingestion, transformation, and data serving across the business.
- Work closely with data scientists, analysts, and product teams to create performant and cost-efficient analytics solutions.
- Drive the adoption of Databricks Lakehouse architecture and help standardize data governance, access policies, and documentation.
- Ensure compliance with data privacy and protection standards (e.g., GDPR).
- Actively contribute to the continuous improvement of our platform in terms of scalability, performance, and usability.
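As a flavour of the day-to-day work, below is a minimal, illustrative sketch of the kind of Spark/Delta pipeline step described above. It assumes a Databricks runtime (where the `spark` session and Delta Lake come pre-configured); all paths, columns, and table names are hypothetical.

```python
# Minimal, illustrative Spark/Delta ingestion step (hypothetical names).
# Assumes a Databricks runtime where `spark` and Delta Lake are available.
from pyspark.sql import functions as F

# Ingest semi-structured landing data (hypothetical path).
raw = spark.read.format("json").load("/Volumes/landing/energy/trades/")

# Light cleansing and typing before serving downstream consumers.
cleaned = (
    raw.withColumn("trade_ts", F.to_timestamp("trade_ts"))
    .withColumn("volume_mwh", F.col("volume_mwh").cast("double"))
    .dropDuplicates(["trade_id"])
)

# Persist as a governed Delta table; the three-part name follows the
# Unity Catalog catalog.schema.table convention (hypothetical name).
cleaned.write.format("delta").mode("overwrite").saveAsTable(
    "analytics.energy.trades_cleaned"
)
```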
What You Bring / Who You Are
We're looking for someone with:
- A university degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Strong experience with Databricks, Spark, Delta Lake, and SQL/Scala/Python.
- Proficiency in dbt, ideally with experience integrating it into Databricks workflows.
- Familiarity with Azure cloud services (Data Lake, Blob Storage, Synapse, etc.).
- Hands-on experience with Git-based workflows, CI/CD pipelines, and data orchestration tools like Dagster and Airflow.
- A deep understanding of data modeling, streaming and batch processing, and cost-efficient architecture.
- The ability to work with high-volume, heterogeneous data and APIs in production-grade environments.
- Knowledge of data governance frameworks, metadata management, and observability in modern data stacks.
- Strong interpersonal and communication skills, with a collaborative, solution-oriented mindset.
- Fluency in English.
Technologies You'll Work With
- Core: Databricks, Spark, Delta Lake, Python, dbt, SQL
- Cloud: Microsoft Azure (Data Lake, Synapse, Storage)
- DevOps: Bitbucket/GitHub, Azure DevOps, CI/CD, Terraform
- Orchestration & Observability: Dagster, Airflow, Grafana, Datadog, New Relic (a short Dagster sketch follows this list)
- Visualization: Power BI
- Other: Confluence, Docker, Linux
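For orientation, the snippet below shows how pipelines can be declared as software-defined assets in Dagster, one of the orchestrators listed above. It is a minimal sketch; the asset names and logic are invented for illustration.

```python
# Minimal, illustrative Dagster sketch (asset names and data are hypothetical).
from dagster import Definitions, asset

@asset
def raw_meter_readings() -> list[dict]:
    # In a real pipeline this would read from an API or a landing zone.
    return [{"meter_id": "m-001", "kwh": 42.0}]

@asset
def cleaned_meter_readings(raw_meter_readings: list[dict]) -> list[dict]:
    # Dagster infers this dependency from the parameter name matching
    # the upstream asset.
    return [r for r in raw_meter_readings if r["kwh"] >= 0]

# Definitions wires the assets together so they can be scheduled and observed.
defs = Definitions(assets=[raw_meter_readings, cleaned_meter_readings])
```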
Nice to Have
- Experience with Unity Catalog and Databricks Governance Frameworks
- Exposure to Machine Learning workflows on Databricks (e.g., MLflow)
- Knowledge of Microsoft Fabric or Snowflake
- Experience with low-code analytics tools like Dataiku
- Familiarity with PostgreSQL or MongoDB
- Front-end development skills (e.g., for data product interfaces)

Department: Installation / Maintenance / Servicing / Craft
Locations: Madrid
Remote status: Hybrid