See, Hear, Speak: Investing in LiveKit and the Future of Realtime Applications

LiveKit co-founders Russ d'Sa and David Zhao

The nature of software is changing. As models become more capable, they're enabling applications that simply weren't possible before: agents that can see, speak, and act in the physical world. At the same time, how we interact with technology is shifting from clicks and screens toward voice, video, augmented reality, and robotics. But these new interaction paradigms demand a different kind of infrastructure, one built for the low latency, continuity, and reliability these experiences require.

Last year, through friends of Index, I got connected with LiveKit founders Russ d’Sa and David Zhao. I knew they were building developer tools for adding voice and video to applications like OpenAI’s ChatGPT voice mode, but I didn’t fully grasp the scope of their vision until I sat down with them. Having spent years around developer-first, open-source infrastructure, I know how rare it is to see technical depth like Russ and David’s paired with such sharp product instincts. And once I saw the platform up close and understood the sophistication of the architecture, it was clear this was a true cloud infrastructure play for the next generation of applications.

LiveKit makes realtime, stateful voice and video workloads reliable at global scale. The platform now spans the full agent development lifecycle—from the framework and networking layer to model infrastructure, telephony, and observability. This is fundamentally a networking and distributed compute challenge, and one that Russ and David approach with real systems depth. They’re solving one of modern infrastructure’s hardest technical problems, and doing so in a way that feels natural and delightful for developers to use.

You don’t have to look hard to find social proof. More than 200,000 developers and teams are already using LiveKit, from solo builders to massive enterprises across customer support, healthcare, financial services, transportation, consumer applications, and robotics. That growth has been almost entirely bottom-up, with barely any GTM machinery behind it. When you talk to customers, they rave about how developer-friendly LiveKit is, how much control it gives them over scaling, and how much engineering time it saves.

And this is just the beginning. In the near term, voice agents are becoming the first line of interaction in call centers and customer workflows. As robotics and autonomy take off, the same underlying requirements will apply to systems interacting with the physical world through cameras, microphones, and sensors. As foundation models commoditize, value will increasingly accrue to the infrastructure layer that solves this highly complex challenge in an elegant, integrated way. Russ and David have talked about making it as easy to build voice AI apps as it is to build web apps: agent frameworks, networking, platform and compute, and complete backend ownership for realtime AI workloads.

At Index, we’re firm believers that exceptional companies are built by exceptional people. Russ is a humble yet ambitious product visionary who has devoted his career to building more natural paradigms of human-computer interaction. David is a thoughtful, rigorous technical architect with a particular knack for designing the right abstractions that developers intuitively love. They’ve worked together for nearly two decades, across multiple companies, and have built deep trust through both success and adversity. They’re complementary in the best ways, and clear-eyed about what it will take to scale an enduring infrastructure business.

I’m thrilled to lead LiveKit’s Series C, to join the board, and to welcome Russ, David, and their team into the Index family. We believe they’re establishing one of the most important infrastructure layers in the AI stack, and we’re excited to partner with them as they build the platform for a world where software listens, speaks, and interacts in real time.


Published — Jan. 22, 2026