- Location
- Paris, France
- Last Published
- Apr. 23, 2026
- Sector
- AI/ML
- Function
- Other Engineering
By applying, you agree to our Applicant Privacy Policy.
Role Summary

Research Engineer, Data Infrastructure

The Data Infrastructure team at Mistral AI is architecting the backbone of our frontier model training and fine-tuning ecosystem. We are building the specialized compute and data fabrics required to power the development of world-class AI. Our vision is to operate some of the largest compute fleets in production and to build data lakes and metadata systems with a roadmap toward exabyte-scale architecture.

We are building a high-performance training platform designed for massive scale across both on-premise and cloud-native Kubernetes environments, and we are leading a strategic transition from legacy scheduling to modern orchestration. With numerous clusters distributed across multiple regions, we are focused on implementing sophisticated multi-cluster orchestration and cloud-bursting capabilities to better utilize our global resources and give our researchers seamless access to compute wherever it resides. Our mission is to evolve our current systems into a platform that is as durable as it is flexible.

Location: Paris / London (hybrid) or remote EU/UK with one hub day per month.

About the Role

This role focuses on building and operating the next generation of data infrastructure at Mistral AI. You will be a core contributor to our evolution, helping us design and scale massive compute fleets and storage systems built for high performance.
You will help us move toward a future of decoupled control and data planes, scaling big data compute and storage platforms while ensuring secure and governed data access for MLOps and research. You will take full lifecycle ownership: from architecting the migration away from legacy orchestrators to implementing production-grade pipelines and participating in on-call rotations for critical training jobs. In this role, you will:
- Build & Scale: Help us operate massive distributed compute and storage systems at our target scale.
- Global Orchestration: Architect and maintain multi-cluster orchestration layers to optimize workload placement across diverse hardware and regions.
- Design Future-Proof Storage: Architect our transition to modern storage formats to handle fine-tuning datasets at a scale that anticipates exabyte growth.
- Platform Engineering: Contribute to the development of our internal training platform, ensuring seamless model training and fine-tuning capabilities across Kubernetes- and SLURM-based environments.
- Metadata & Lineage: Implement and manage systems to provide clear visibility and lineage as our data and model pipelines grow in complexity.
- Operational Excellence: Use modern deployment workflows to manage cloud-native deployments, ensuring our data platform can scale by orders of magnitude while remaining reliable and efficient.
You might thrive in this role if you:
- Have 4+ years of experience in Data Infrastructure, MLOps, or Infrastructure Engineering.
- Have experience or a strong interest in supporting foundational compute and storage platforms.
- Are proficient in Python and enjoy solving the "brittle data lake" problem with modern, columnar storage standards.
- Are well-versed in Kubernetes-native tooling and excited to debug large-scale distributed systems across multi-cluster environments.
- Take pride in building and operating scalable, reliable, and secure systems from the ground up.
- Are comfortable with ambiguity and the challenges of building high-scale infrastructure in a rapid-growth AI environment.