Job Requisition ID: R2793369
Main Responsibilities:
Establish technical designs that meet Sanofi requirements and align with architectural and data standards.
Own the entire back end of the application, including the design, implementation, testing, and troubleshooting of core application logic, databases, data ingestion and transformation, data processing and pipeline orchestration, APIs, CI/CD integration, and other processes.
Fine-tune and optimize queries using Snowflake platform features and database tuning techniques.
Optimize ETL/data pipelines to balance performance, functionality, and operational requirements.
Assess and resolve data pipeline issues to ensure performance and timeliness of execution.
Assist with technical solution discovery to ensure feasibility.
Assist in setting up and managing CI/CD pipelines and developing automated tests.
Develop and manage microservices using Python.
Conduct peer reviews for quality, consistency, and rigor for production solutions.
Design application architecture for efficient concurrent user handling, ensuring optimal performance during high usage periods.
Promote best practices and standards for code management, automated testing, and deployments.
Own all areas of the product lifecycle: design, development, testing, deployment, operation, and support.
Create detailed documentation on Confluence to support and maintain the codebase and its functionality.
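As an illustrative sketch only (not part of the role description, and with hypothetical field names), the extract-transform-load responsibilities above boil down to a pattern like the following, here using only the Python standard library:

```python
import csv
import io
import json

def extract(csv_text):
    """Parse raw CSV text into a list of row dicts (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Normalize types and drop incomplete rows (the 'transform' step)."""
    out = []
    for row in rows:
        if not row.get("record_id"):
            continue  # skip rows missing the key field
        out.append({
            "record_id": row["record_id"].strip(),
            "dose_mg": float(row["dose_mg"]),
        })
    return out

def load(rows):
    """Serialize to newline-delimited JSON, a stand-in for a warehouse load."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in rows)

raw = "record_id,dose_mg\n A1 ,2.5\n,1.0\nB2,10\n"
result = load(transform(extract(raw)))
```

In a real pipeline the load step would write to a warehouse such as Snowflake rather than return a string; the staged structure (and the validation in the transform step) is the transferable part.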
About you
Experience developing backend services, integrations, data pipelines, and infrastructure with relevant technologies and tools (Snowflake, AWS, Spark, Informatica/IICS or equivalent).
Bachelor's degree in computer science, engineering, or a similar quantitative field.
Expertise in database optimization and performance improvement.
Expertise in Python, PySpark, and Snowpark.
Experience with data warehouses and object-relational databases (Snowflake and PostgreSQL) and writing efficient SQL queries.
Experience in cloud-based data platforms (Snowflake, AWS).
Proficiency in developing robust, reliable APIs using Python and the FastAPI framework.
Experience with job scheduling and orchestration (Airflow is a plus).
Expertise in ELT and ETL, working with large datasets, and performance and query optimization.
Understanding of data structures and algorithms.
Experience with modern testing and code-quality tools (SonarQube; k6 is a plus).
Strong collaboration skills and willingness to work with others to ensure seamless integration of server-side and client-side components.
Knowledge of DevOps best practices and tools, including setup, configuration, maintenance, and troubleshooting of container platforms (Kubernetes, Argo, Red Hat OpenShift), infrastructure as code (Terraform), monitoring and logging (CloudWatch, Grafana), and scripting and automation (Python, GitHub, GitHub Actions).
Languages: Fluent spoken and written English.
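For illustration only (the task names are hypothetical), the dependency-ordered scheduling that the orchestration requirement refers to can be sketched with Python's standard library, independent of any particular scheduler such as Airflow:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "notify": {"load"},
}

def run_order(graph):
    """Return the tasks in an order that respects every dependency,
    which is how an orchestrator decides what may run next."""
    return list(TopologicalSorter(graph).static_order())

order = run_order(dag)
```

An orchestrator adds retries, scheduling, and parallel execution of independent branches on top of exactly this topological ordering.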
Why choose us?
Progress requires people — from diverse backgrounds, locations, and roles, united by a desire to make miracles happen. You can be one of those people. Embrace change, explore new ideas, and seize opportunities. Let’s pursue progress and discover the extraordinary together.
At Sanofi, we provide equal opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity.
Watch our ALL IN video and review our Diversity, Equity, and Inclusion initiatives at sanofi.com!
Why Sanofi
We transform medicine, reinvent work, and empower our people to excel in their careers and lives. Our dynamic, inclusive workplace is built on trust and respect, supporting employees' personal and professional growth.
Data Analyst • Barcelona, Catalonia, Spain