Whose World is This?
The landscape of AI is one of two worlds, intimately intertwined, but never fully integrated.
In one world, you are a researcher. You are seeing advancements in your field happen so quickly that it is almost impossible to keep track. Your research work is largely in NLP, where Transformers have...transformed your field in a flash. Staying on top of the SQuAD or GLUE leaderboards requires checking in nearly every week, and this is just one example of many. Your colleagues are seeing similar advancements in reinforcement and self-supervised learning. Others are just beginning to understand the capabilities (both positive and spooky) of GANs. All you know is that they all share two things in common: Math and Code. Thousands of years of humanity have made us pretty good at organizing the first. The second, at least for machine learning, not so much. You daydream of a better way to organize your code and improve experimentation velocity.
In the other world, you are an ML practitioner at a tech-forward company. Tasked with building and operating mission-critical machine learning models, you are less concerned about the latest and greatest research models, and more concerned with infrastructure that scales. It's not that you don't want to use the latest research; it's that turning that research into production-grade code just isn't realistic. You briefly wish that someone would make a way for you to grab the latest research off the shelf, but quickly get back to the task at hand. As you stitch together pieces of infrastructure clearly intended for an app instead of a model, you can't help but feel like you are constantly shoving a square peg into a round hole. Glancing out at the Hudson, you daydream of a single source of truth for all of your organization's models: one place to map training runs, visualize pipelines, and deploy models. Not a walled garden, but a truly flexible, "developer first" experience, complete with all of the best-of-breed integrations one would expect from a modern software tool.
It turns out these stories didn't take place in two worlds, but in one city.
The first story was that of Will Falcon, cofounder and CEO of Grid.ai and former PhD researcher at FAIR and NYU (now a PhD dropout). (Okay, his research was not in NLP but in self-supervised learning...it made for a better story.) He had seen firsthand how painful developing research-grade machine learning models could be, and had decided to spin up a side project to help organize some of this code, which he called PyTorch Lightning.
The second is that of Luis Capelo, CTO of Grid.ai and former ML Engineer Lead at Glossier and Head of Data Products at Forbes. There he had experienced firsthand the operational challenges inherent in training, organizing, and deploying production-quality machine learning models, and knew there had to be a better way.
The two met and got to thinking: what if we combined our experiences to finally bridge the gap between research and production? A single product that would serve researchers and practitioners alike, with a user experience that would streamline the process from end to end and empower even the most novice data scientists to leverage cutting-edge research. With Lightning, they already had the beginnings of a foundation. Now they just needed to build the rest. And so Grid was born.
"We Are a User Experience Company"
The Index team had the good fortune of meeting Will and Luis soon after the inception of Grid, and we could not be more delighted to have been involved in leading both the Seed and Series A. I remember reading the code on GitHub and texting Sarah about how blown away I was by how thoughtful the abstractions were. I have met many AI companies during my time at Index, but it was immediately clear to me that this one was different. It turns out there was good reason for this feeling. According to Will, Grid is not an AI company: "we are a user experience company".
"Clean abstractions are and have always been the core of technological innovation."
— Bryan Offutt, Index Ventures
C was once the most popular language in the world, but now we have Python. Maintaining your own data centers was once the norm, but now we have the public cloud. Computers were once text-based, but now we have GUIs. There will always be C programmers, companies running data centers, and terminal users, but I think it's fair to say that they will never again account for the majority. These changes have fundamentally altered the course of programming, infrastructure, and personal computing, unlocking their powers for an entirely new demographic without limiting them for the power user. Clean abstractions are and have always been the core of technological innovation.
Why should the same not be true of ML?
When the power grid came online, it revolutionized the way we live. Houses could now be built so that wiring up an oven, a lightbulb, and a heater was a simple, unified experience: plug it in and flip a switch. No bespoke installation for each product, no danger of electrocution from incorrect wiring; it just works. So it is with Grid.ai (the name isn't an accident). Whether you are a professional looking to develop and train your latest homemade model using your favorite labeling, experiment management, monitoring, and deployment tooling, or a weekend garage tinkerer looking to quickly wire up the SOTA for an end-to-end chatbot, it's all the same when you use Grid. No more walled gardens, painful integration setups, or bugs from code copy-pasted from the internet.
Bring your tooling. Pick your model. Plug them in. Flip the switch. That's the power of Grid.
Published — Oct. 8, 2020