Senior Data Engineer – Databricks (m/f/d)
Our engineering department is growing, and we’re looking for a Senior Data Engineer specializing in Databricks to join our team in Spain and support our global growth.
As a Senior Data Engineer, you will design and optimize data processing algorithms as part of a talented, cross-functional team. You are familiar with the Apache open-source technology suite and want to contribute to the advancement of data engineering.
What We Offer
- Flexible work options, including fully remote or hybrid arrangements for candidates located in Spain
- A chance to accelerate your career and work with outstanding colleagues across 3 continents
- Balance work and personal life through our workflow organization – decide whether you work at home, in the office, or in a hybrid setup
- Annual performance review and regular feedback cycles, connecting colleagues through networks rather than hierarchies
- Individual development plan, professional development opportunities, and educational resources such as paid certifications and unlimited access to Udemy Business
- Local, virtual, and global team events to help UT colleagues get acquainted with one another
What You’ll Do
- Design, implement, and maintain scalable data pipelines using the Databricks Lakehouse Platform, with a strong focus on Apache Spark, Delta Lake, and Unity Catalog
- Lead the development of batch and streaming data workflows that power analytics, machine learning, and business intelligence use cases
- Collaborate with data scientists, architects, and business stakeholders to translate complex data requirements into robust, production-grade solutions
- Optimize the performance and cost-efficiency of Databricks clusters and jobs, leveraging tools such as Photon, Auto Loader, and Job Workflows
- Establish and enforce best practices for data quality, governance, and security within the Databricks environment
- Mentor junior engineers and contribute to the evolution of the team’s Databricks expertise
What You’ll Bring
- Deep hands-on experience with Databricks on Azure, AWS, or GCP, including Spark (PySpark/Scala), Delta Lake, and MLflow
- Strong programming skills in Python or Scala, and experience with CI/CD pipelines (e.g., GitHub Actions, Azure DevOps)
- Solid understanding of distributed computing, data modeling, and performance tuning in cloud-native environments
- Familiarity with orchestration tools (e.g., Databricks Workflows, Airflow) and infrastructure-as-code (e.g., Terraform)
- A proactive mindset, strong communication skills, and a passion for building scalable, reliable data systems
- Professional Spanish and English communication skills (C1 level, written and spoken)
About Us
Ultra Tendency is an international premier data engineering consultancy for Big Data, Cloud, Streaming, IoT, and Microservices.
We design, build, and operate large-scale data-driven applications for major enterprises such as the European Central Bank, HUK-Coburg, Deutsche Telekom, and Europe’s largest car manufacturer. Founded in Germany, UT has developed a reliable client base and now runs 8 branches in 7 countries across 3 continents.
We do more than just leverage tech – we build it. We contribute source code to 20+ open-source projects, including Ansible, Terraform, NiFi, and Kafka.
Seniority Level
Mid‑Senior level
Employment Type
Full‑time
Job Function
Information Technology
Industries
Computers and Electronics Manufacturing
EEO Statement
Ultra Tendency welcomes applications from qualified candidates regardless of race, ethnicity, national or social origin, disability, sex, sexual orientation, or age.
Data Privacy Statement: Data Protection for Applicants – Ultra Tendency