We're looking for a Data Engineer (Azure / Databricks) ready to take part in a large-scale data transformation project for an international mobility leader. Join a global team driving the shift from legacy systems to a modern federated data architecture built on Databricks and Azure.
Location: Hybrid in Madrid (3 days remote)
Salary range: Up to €50K + benefits
ABOUT THE ROLE
- Migrate and refactor existing ETL / ELT pipelines (SQL, Informatica, .NET) into Databricks notebooks using PySpark (a minimal example of this kind of refactor follows this list).
- Ensure data quality, completeness, and performance during migration.
- Collaborate with local data owners to validate and reconcile datasets.
- Implement data quality checks, monitoring, and logging for Databricks workflows.
- Use Delta Live Tables to automate and orchestrate pipelines efficiently (see the second sketch below).
- Support Infrastructure-as-Code automation using Terraform or similar tools.
- Optimize performance and storage usage across the Databricks-managed Data Lake.
- Align with global data standards and contribute to the federated data model design.
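To give a flavour of the day-to-day work, here is a minimal sketch of the kind of refactor involved: a legacy SQL aggregation re-expressed as a PySpark transformation, with a basic completeness check before the result is published to Delta. All table and column names (legacy_raw.bookings, curated.customer_totals, customer_id, amount) are hypothetical.

```python
# Minimal sketch (hypothetical names) of migrating a legacy SQL ETL step
# into PySpark with a simple data quality check.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Legacy step, roughly:
#   SELECT customer_id, SUM(amount) FROM bookings GROUP BY customer_id
bookings = spark.read.table("legacy_raw.bookings")

totals = (
    bookings
    .filter(F.col("amount").isNotNull())  # drop incomplete rows
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Basic completeness check before publishing the curated table
if totals.filter(F.col("customer_id").isNull()).count() > 0:
    raise ValueError("Null customer_id found after aggregation")

totals.write.format("delta").mode("overwrite").saveAsTable("curated.customer_totals")
```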
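And a second minimal sketch showing how Delta Live Tables pipelines declare steps and data quality expectations in Python. Again, all table and column names are hypothetical, and the spark session is supplied by the DLT runtime rather than created in the notebook.

```python
# Minimal Delta Live Tables sketch (hypothetical names); each decorated
# function declares a table, and the DLT runtime orchestrates the pipeline.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bookings ingested from the raw landing zone")
def raw_bookings():
    return spark.read.table("legacy_raw.bookings")

@dlt.table(comment="Validated bookings ready for downstream consumers")
@dlt.expect_or_drop("valid_amount", "amount IS NOT NULL AND amount >= 0")
def clean_bookings():
    return dlt.read("raw_bookings").withColumn("ingested_at", F.current_timestamp())
```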
ABOUT YOU
You're a hands-on Data Engineer passionate about building scalable data pipelines and modernizing legacy environments. You thrive in international, collaborative contexts and love solving complex data migration challenges.
- 3–5 years of experience as a Data Engineer or ETL Developer.
- Strong expertise in Azure Databricks (PySpark, Spark SQL, notebooks).
- Proven experience migrating legacy ETL logic into modern data pipelines.
- Solid SQL skills and experience with query optimization.
- Hands-on experience with Terraform or other IaC tools.
- Proficiency with Git and experience working in Agile environments.
- Working proficiency in English (Spanish or French is a plus).
BENEFITS
- Meal allowance (€10 per day)
- 30 days of holidays per year
- Private medical insurance (up to €25.50 per month)
- Training and certification support
- Performance bonus & referral program (up to €1,500 per referral)