About MIGx
MIGx is a global consulting company with an exclusive focus on the healthcare and life science industries, with their particularly demanding quality and regulatory requirements. We have been managing challenges and solving problems for our clients in areas such as compliance and business processes, among many others.
MIGx's interdisciplinary teams from Switzerland, Spain and Georgia deliver projects in the fields of M&A, integration, applications, data platforms, processes, IT management, digital transformation, managed services and compliance.
The Opportunity
We’re looking for a Data Engineer to join our growing Data and AI Engineering team of professionals who thrive at the intersection of data, technology, and healthcare. Whether you're early in your career or already have hands-on experience, we welcome curious minds, team players, and problem-solvers eager to build high-quality data solutions for the life sciences industry.
At MIGx, you’ll contribute to modern data mesh and data fabric architectures, develop cloud-native pipelines, and help implement DataOps practices that ensure our systems are robust, observable, and production-ready.
What You’ll Do
- Build and manage ETL/ELT pipelines using tools like Databricks, dbt, PySpark, and SQL.
- Contribute to scalable data platforms across cloud environments (Azure, AWS, GCP).
- Implement and maintain CI/CD workflows using tools such as GitHub Actions and Azure DevOps.
- Apply DataOps principles: pipeline versioning, testing, lineage, deployment automation, and monitoring.
- Integrate automated data quality checks, profiling, and validation into pipelines (see the illustrative sketch after this list).
- Ensure strong data observability via logging, metrics, and alerting tools.
- Collaborate on infrastructure as code for data environments using Terraform or similar tools.
- Connect and orchestrate ingestion from APIs, relational databases, and file systems.
- Work in agile teams, contributing to standups, retrospectives, and continuous improvement.
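To give a flavour of this kind of work, here is a minimal, purely illustrative PySpark sketch of a pipeline step with an automated data quality check. The paths, tables, and column names are hypothetical examples, not taken from any MIGx project.

```python
# Illustrative only: a small PySpark transformation followed by an automated
# data quality gate. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("illustrative-quality-check").getOrCreate()

# Hypothetical input: raw lab results landed as Parquet by an ingestion job.
raw = spark.read.parquet("/landing/lab_results")

# Transformation: normalise a column name, cast types, derive a simple flag.
curated = (
    raw.withColumnRenamed("PatientID", "patient_id")
       .withColumn("result_value", F.col("result_value").cast("double"))
       .withColumn(
           "is_out_of_range",
           (F.col("result_value") < F.col("range_low"))
           | (F.col("result_value") > F.col("range_high")),
       )
)

# Automated quality check: stop the run if mandatory fields contain nulls,
# so bad data never reaches downstream consumers.
null_rows = curated.filter(
    F.col("patient_id").isNull() | F.col("result_value").isNull()
).count()
if null_rows > 0:
    raise ValueError(f"Data quality check failed: {null_rows} rows with null mandatory fields")

# Publish the validated output for downstream use.
curated.write.mode("overwrite").parquet("/curated/lab_results")
```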
What We’re Looking For
We believe diverse perspectives and backgrounds lead to better ideas. Even if you don’t meet every requirement, we’d still love to hear from you.
Core Experience & Skills
- Experience with cloud-native data engineering using tools such as Databricks, dbt, PySpark, and SQL.
- Comfort working with at least one major cloud platform (Azure, AWS, GCP), and openness to others.
- Hands-on experience with CI/CD automation, especially with GitHub Actions or Azure Pipelines.
- Strong Python programming skills for transformation, scripting, and automation.
- Working knowledge of data quality, validation frameworks, and test-driven data development.
- Familiarity with observability practices, including metrics, logging, and data lineage tools.
- Understanding of DataOps concepts, including reproducibility, automation, and collaboration.
- A team-first mindset and experience in agile environments (Scrum or Kanban).
- Professional working proficiency in English (our internal and client-facing working language).
Nice to Have
- Experience with Snowflake or similar cloud data warehouses.
- Knowledge of data lineage tools and frameworks.
- Infrastructure automation using Terraform, Bash, or PowerShell.
- Exposure to data modeling techniques like Data Vault or dimensional modeling.
- Familiarity with data testing tools.
- Understanding of GxP or other healthcare data regulations.
- Experience with non-relational data systems (e.g., MongoDB, CosmosDB).
- Spanish and/or Catalan language skills.
What We Offer
- Hybrid work model and a flexible working schedule that suits night owls and early birds alike.
- 25 holiday days per year.
- Opportunities for career development and the chance to shape the company's future.
- An employee-centric culture directly inspired by employee feedback, where your voice is heard and your perspective encouraged.
- A range of training programs to support your personal and professional development.
- Work in a fast-growing, international company.
- A friendly atmosphere and a supportive management team.