- Location
- San Francisco
- Last Published
- Nov. 29, 2024
- Sector
- AI/ML
- Function
- Software Engineering
Who we are
At Twelve Labs, we are pioneering the development of cutting-edge multimodal foundation models that comprehend videos the way humans do. Our models have redefined the standards of video-language modeling, giving us more intuitive and far-reaching capabilities and fundamentally transforming the way we interact with and analyze media.
With a remarkable $77 million in Seed and Series A funding, our company is backed by top-tier venture capital firms such as NVIDIA’s NVentures, NEA, Radical Ventures, and Index Ventures, as well as prominent AI visionaries and founders including Fei-Fei Li, Silvio Savarese, and Alexandr Wang. Headquartered in San Francisco, with an influential APAC presence in Seoul, our global footprint underscores our commitment to driving worldwide innovation.
We are a global company that values the uniqueness of each person’s journey. The differences in our cultural, educational, and life experiences allow us to constantly challenge the status quo. We are looking for individuals who are motivated by our mission and eager to make an impact as we push the bounds of technology to transform the world. Join us as we revolutionize video understanding and multimodal AI.
As a Software Engineer, Data at Twelve Labs, you will build the core data infrastructure for acquiring, preprocessing, cleaning, filtering, and labeling multimodal text-vision datasets for model training. In this role, you will have a larger impact on the quality of our models than perhaps any other engineering role at the company: well-filtered, well-labeled data is core to everything we do. This role is a perfect fit for distributed systems engineers who want to advance video understanding by delivering world-class systems for *unstructured* multimodal corpora.
In this role, you will
- Acquire, filter, label (leveraging techniques like RLAIF), and sanitize large-scale vision-language datasets for LLM/VLM pretraining
- Scale our data systems to enable our evolution from double-digit to triple-digit billion-parameter models (and beyond!)
- Mentor junior engineers and researchers, and hold a high bar for code quality and engineering best practices
- Establish strong relationships with third-party data vendors and human-in-the-loop data labeling services
- Build the highest-impact libraries and services, not the flashiest
- Lead by example in interviewing, hiring, and onboarding passionate and empathetic engineers
- Work across teams to understand and manage project priorities and product deliverables, evaluate trade-offs, and drive technical initiatives from ideation to execution to shipment
You may be a good fit if you have
- 7+ years of industry experience (or 4+ with a PhD in a related technical domain)
- A PhD or Master's degree in machine learning or a closely related discipline
- Experience leading teams of 3+ engineers as a technical lead
- Experience building model-bootstrapped language or vision-language datasets (RLAIF, etc.)
- Experience managing data acquisition for large generative or contrastive models
- Experience with FFmpeg or other high performance image/video processing libraries (bonus points for past work with such processing on GPUs/accelerators)
- Deep experience as a backend and/or data engineer, and an interest in ML/AI systems
- Strong Python expertise and considerable prior work history with at least one statically typed language (we use Golang)
- Strong communication skills in written and spoken English