- Location
- San Francisco
- Last Published
- Dec. 13, 2024
- Sector
- AI/ML
- Function
- Software Engineering
Who we are
At Twelve Labs, we are pioneering the development of cutting-edge multimodal foundation models that comprehend videos the way humans do. Our models have redefined the standards in video-language modeling, giving us more intuitive and far-reaching capabilities and fundamentally transforming the way we interact with and analyze media. With a remarkable $77 million in Seed and Series A funding, our company is backed by top-tier venture capital firms such as NVIDIA’s NVentures, NEA, Radical Ventures, and Index Ventures, as well as prominent AI visionaries and founders including Fei-Fei Li, Silvio Savarese, and Alexandr Wang. Headquartered in San Francisco, with an influential APAC presence in Seoul, our global footprint underscores our commitment to driving worldwide innovation.

We are a global company that values the uniqueness of each person’s journey. It is the differences in our cultural, educational, and life experiences that allow us to constantly challenge the status quo. We are looking for individuals who are motivated by our mission and eager to make an impact as we push the bounds of technology to transform the world. Join us as we revolutionize video understanding and multimodal AI.

About the role
As a Machine Learning Engineer at Twelve Labs, you will drive our ML systems and platform engineering efforts across all facets of our end-to-end research and engineering workflows. The essence of the role is scaling our training, inference, and evaluation systems while improving the reliability of our model deployments, operations, and versioning. It is a perfect fit for engineers who are excited to advance the state of the art in vision-language modeling by perfecting ML systems and infrastructure.
In this role, you will
- Advance our industry-leading enterprise video solutions by turning strong research into fault-tolerant, low-latency end-to-end systems
- Own model deployment, metadata management, and high-throughput inference strategy for both retrieval ("Marengo") and generative ("Pegasus") models
- Mentor junior engineers and researchers, and uphold a high bar for code quality and engineering best practices
- Build the highest impact, not the flashiest, libraries and services
- Lead by example in interviewing, hiring, and onboarding passionate and empathetic engineers
- Deliver industry-leading applied research solutions to problems like VLM fine-tuning, auto-labeling of video-text datasets, and model-based filtering of those datasets to optimize end-model performance
- Work across teams to understand and manage project priorities and product deliverables, evaluate trade-offs, and drive technical initiatives from ideation to execution to shipment
You may be a good fit if you have
- 7+ years of industry experience (or 4+ with a PhD in a related technical domain)
- A PhD, or a Master's degree, in machine learning or a closely related discipline
- Led teams of 3+ engineers as a technical lead
- Expertise optimizing model inference with TensorRT, ONNX, Triton Inference Server, or directly related technologies
- Built Kubernetes-based systems for distributed data/ML workflows or worked extensively with HPC tools such as Slurm
- Scaled ML systems and/or data infrastructure to workloads of petabyte+ scale or have built 0-to-1 mission critical AI/ML applications from scratch
- A passion for, and experience in, both ML modeling and ML/AI systems software engineering
- Strong Python expertise and substantial professional experience with at least one statically typed language (we use Golang)
- Experience with FFmpeg or other high-performance image/video processing libraries (bonus points for prior work doing such processing on GPUs/accelerators)
- Acquired, filtered, (re)labeled, or sanitized large-scale language or vision-language datasets for LLM/VLM pretraining
- Strong communication skills in written and spoken English