- Locations
- Paris, France
- London
- Last Published
- Dec. 11, 2024
- Sector
- AI/ML
- Functions
- Software Engineering
- Data Science
Role Summary
We are seeking an AI Scientist, Safety to evaluate, enhance, and build safety mechanisms for our large language models (LLMs). This role involves identifying and addressing potential risks, biases, and misuses of LLMs, ensuring that our AI systems are ethical, fair, and beneficial to society. You will work to monitor models, prevent misuse, and ensure user well-being, applying your technical skills to uphold principles of safety, transparency, and oversight.
Location: Paris or London
Responsibilities
- Adversarial & Fairness Testing
- Design and execute adversarial attacks to uncover vulnerabilities in LLMs.
- Evaluate potential risks and harms associated with LLM outputs.
- Assess LLMs for biases and unfairness in their responses, and develop strategies to mitigate these issues.
- Tools & Monitoring
- Develop monitoring systems (e.g., moderation tools) to detect unwanted behaviors in Mistral's products.
- Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale.
- Investigate and respond to incidents involving LLM misuse or harmful outputs, and develop post-incident recommendations.
- Analyze user reports of inappropriate content or accounts.
- Contribute to the development of AI ethics policies and guidelines that govern the responsible use of LLMs.
- Safety Fine-Tuning
- Work on safety tuning to improve robustness of models.
- Collaborate with the AI development team to create and implement safety measures, such as content filters, moderation tools, and model fine-tuning techniques.
- Keep up-to-date with the latest research and trends in AI safety, LLMs, and responsible AI, and continuously improve our safety practices.
You may be a good fit if
- You have a degree in Computer Science, AI, Machine Learning, or a related field. Advanced degrees (MSc, PhD) are preferred.
- You are familiar with Python and are a highly proficient software engineer in at least one programming language (e.g., Python, Rust, Go, Java).
- You have hands-on experience with AI frameworks and tools (e.g., TensorFlow, PyTorch, JAX).
- You have strong technical engineering competence: you can design complex software and make it usable in production.
- You have a strong scientific track record in your field.
- You are a self-starter: autonomous and low-ego.
- You are collaborative and have a real team-player mindset.

Note that this is not an exhaustive or necessary list of requirements; please consider applying if you believe you have the skills to contribute to Mistral's mission.

It would be ideal if
- You have proven experience in AI safety, responsible AI, or a related field. Familiarity with LLMs and their potential risks is essential.
- You have hands-on experience with generative AI, e.g. experience with transformer-based models, broad knowledge of the field of AI, and specific knowledge of or interest in fine-tuning and using language models for applications.
- You are able to navigate the full MLOps technical stack, with a focus on architecture development and model evaluation and usage
Benefits
- Competitive cash salary
- Equity

France

- Food: Daily lunch vouchers
- Sport: Monthly contribution to a Gympass subscription
- Transportation: Monthly contribution to a mobility pass
- Health: Full health insurance for you and your family
- Parental: Generous parental leave policy
- Visa sponsorship

UK

- Health: Competitive healthcare program
- Pension: Monthly contribution
- Transportation: Monthly contribution
- Sport: Monthly contribution
- Visa sponsorship