Overview
Join to apply for the AI Platform Engineer role at Mainder.
About Our Partner: We are partnering with an innovative technology company focused on advancing AI-powered software solutions through cutting-edge integration of Large Language Models and intelligent systems. They are building sophisticated AI infrastructure designed to bridge the gap between AI research and production-ready applications, with particular focus on scalable model deployment, MLOps pipelines, and enterprise-grade AI services. The company is seeking highly motivated professionals to help scale their AI platform from experimental prototypes to commercial deployment.
Position Overview: We are supporting our partner in their search for an AI Platform Engineer to join their European office. You will bridge AI research and production software, building and maintaining AI infrastructure that enables researchers to deploy their work safely, reproducibly, and at scale. Your role includes architecting model-serving infrastructure, implementing MLOps pipelines, optimizing AI performance, and collaborating with AI developers and backend engineers to integrate AI capabilities into production systems.
Your Mission: AI infrastructure and model deployment; MLOps and pipeline development; API development and integration; collaboration and technical enablement; performance optimization and reliability.
- Build and maintain AI infrastructure including model serving, vector databases, and embedding pipelines
- Deploy and serve LLMs from multiple providers (OpenAI, Anthropic, HuggingFace, fine-tuned models)
- Implement vector database solutions (Pinecone, Chroma, Weaviate, FAISS) for efficient retrieval
- Optimize inference latency, costs, throughput, and reliability across AI services
- Design and implement caching, rate limiting, and retry strategies for production AI systems
- Enable AI developers to deploy their work reproducibly and safely at scale
- Version models, prompts, datasets, and evaluation results systematically
- Implement experiment tracking using tools like Weights & Biases or MLflow
- Build CI/CD pipelines specifically for model deployment and testing
- Monitor model performance, drift, and system health in production
- Set up comprehensive logging and observability for AI services
- Define workflows from notebook/test repository → PR → staging → production
- Establish best practices for moving from research to production
- Design and implement robust APIs for AI inference using FastAPI
- Create endpoints for prompt testing, model selection, and evaluation
- Build APIs for prompt management and experimentation
- Integrate AI services seamlessly with backend application architecture
- Ensure API reliability, security, performance, and proper error handling
- Implement async programming patterns for efficient AI service delivery
- Work closely with AI developers (researchers) to productionize their experiments
- Collaborate with backend engineers to integrate AI capabilities into the product
- Define and document workflows for AI development and deployment
- Review code and mentor AI developers on software engineering best practices
- Document AI infrastructure, APIs, and operational procedures
- Enable research teams to move faster from idea to production
- Optimize AI inference latency and cost efficiency
- Implement monitoring and alerting for AI service health
- Debug complex distributed AI systems and resolve production issues
- Ensure high availability and fault tolerance of AI services
- Conduct performance profiling and implement optimization strategies
- Balance trade-offs between latency, cost, throughput, and model quality
- Enable AI researchers to deploy their innovations into production systems safely and efficiently
- Build the AI platform infrastructure that scales from experiments to enterprise deployment
- Reduce inference costs and latency while maintaining model quality
- Establish MLOps standards that support the company's long-term AI strategy
- Directly influence the reliability, performance, and scalability of AI-powered features
- Bridge the gap between cutting-edge research and practical production systems
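The responsibilities above mention caching, rate limiting, and retry strategies for production AI systems. As a flavor of the work, here is a minimal Python sketch of the retry-with-backoff and prompt-caching patterns; all names (`TransientError`, `flaky_completion`, `cached_completion`) are hypothetical stand-ins for a real LLM provider client:

```python
import functools
import random
import time

class TransientError(Exception):
    """Stand-in for a provider rate-limit or timeout response."""

def with_retry(max_attempts: int = 3, base_delay: float = 0.01):
    """Retry transient failures with exponential backoff plus jitter."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except TransientError:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: surface the error
                    # Backoff doubles each attempt; jitter avoids thundering herds
                    time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
        return wrapper
    return decorator

calls = {"n": 0}  # track how many times the "provider" is actually hit

@with_retry(max_attempts=3)
def flaky_completion(prompt: str) -> str:
    """Hypothetical model call that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("rate limited")
    return f"answer to: {prompt}"

@functools.lru_cache(maxsize=1024)
def cached_completion(prompt: str) -> str:
    """Identical prompts hit the cache instead of the provider."""
    return flaky_completion(prompt)
```

In production the same shape applies, with the cache typically moved to a shared store and the backoff parameters tuned per provider.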
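The role also calls for async programming patterns for efficient AI service delivery. A small illustration of the idea, using only the standard library and a hypothetical `query_model` call where `asyncio.sleep` stands in for network latency:

```python
import asyncio

async def query_model(name: str, prompt: str) -> tuple[str, str]:
    """Hypothetical provider call; the sleep stands in for network latency."""
    await asyncio.sleep(0.01)
    return name, f"{name} says: {prompt}"

async def fan_out(prompt: str, models: list[str]) -> dict[str, str]:
    """Query several models concurrently instead of sequentially."""
    results = await asyncio.gather(*(query_model(m, prompt) for m in models))
    return dict(results)

# Three concurrent calls complete in roughly the time of one
responses = asyncio.run(fan_out("ping", ["model-a", "model-b", "model-c"]))
```

The same fan-out pattern underlies endpoints that compare models or race a fast model against a slow one.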
Key Requirements
- Proven experience (7+ years) in software engineering, preferably with a focus on AI/ML systems
- Strong programming skills in Python with experience in production environments
- Experience with LLMs and AI/ML in production: OpenAI API, HuggingFace, LangChain, or similar frameworks
- Understanding of vector databases (Pinecone, Chroma, Weaviate, FAISS) and similarity search
- Cloud infrastructure experience: GCP (Vertex AI preferred) or AWS (SageMaker)
- API development expertise: FastAPI, REST, async programming patterns
- CI/CD and DevOps skills: Docker, Terraform, GitHub Actions
- Monitoring and observability experience for distributed systems
- Problem-solving mindset: comfortable debugging complex distributed AI systems
- Operating experience with AI deployment in enterprise environments
- Fluent oral and written communication in English (additional European languages are a plus)
- Experience fine-tuning or training machine learning models
- Familiarity with AI frameworks (LangChain, Pydantic AI, or similar)
- Knowledge of prompt engineering techniques and evaluation methodologies
- Experience with real-time inference and streaming responses
- Background in data engineering or ML engineering roles
- Understanding of RAG (Retrieval-Augmented Generation) architectures
- Experience with experiment tracking tools (MLflow, Weights & Biases)
- Contributions to open-source AI/ML projects
- Knowledge of Kubernetes for container orchestration
- Experience with model versioning and A/B testing frameworks
- Familiarity with cost optimization strategies for LLM deployments
What They Offer
- Highly Competitive Compensation: Top-of-market salary package that reflects your expertise and the value you bring
- Cutting-Edge Technology: Work with state-of-the-art AI technologies and the latest LLMs from leading providers
- Work-Life Balance: Flexible work arrangements with options for remote work
- Professional Growth: Opportunities to attend industry conferences, engage with the AI/ML community, and expand your technical expertise
- Impact-Driven Culture: Join a passionate team focused on solving challenging problems at the intersection of AI research and production engineering
- Technical Autonomy: Shape the AI platform architecture and have real influence on infrastructure decisions
- Learning Environment: Work alongside AI researchers and engineers pushing the boundaries of what's possible
Why Join?
At this company, you will be working on technology that brings cutting-edge AI research into real-world production applications. This is your opportunity to build the AI platform infrastructure that enables researchers to deploy innovative models at scale while maintaining enterprise-grade reliability and performance. You'll see your work directly enable groundbreaking AI capabilities, while collaborating with a talented team of AI developers, backend engineers, and product leaders. If you're ready to take your expertise in AI infrastructure and MLOps to the next level and want to be at the forefront of production AI systems, we want to hear from you!
We are dedicated to creating a diverse, inclusive, and authentic workplace. If this role excites you but your background doesn't perfectly match every qualification, we still encourage you to apply. You could be the perfect fit for this position or another opportunity within our growing team.
EEO Statement: We are an equal opportunity employer and value diversity. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
Location
Barcelona, Catalonia, Spain