Index OnAir: Looking Forward - 2021 and Beyond (Part I, AI)

by Index Ventures

Illustration by Allegra Lockstadt


Artificial intelligence has developed rapidly in the past ten years, but it is facing increased scrutiny over bias, accountability and its impact on jobs. Those concerns will recede, say Index partners Sarah Cannon and Mike Volpi, as they share their thoughts on the future of AI with Index principal Bryan Offutt.

Bryan Offutt: AI is fascinating from an engineering perspective, in framing the problem and developing a system you can deploy into a production environment, and it's fascinating philosophically because of the ethical concerns in how the technology is applied. It inspires us to think about what it means to be human. And progress over the past decade has been quite astonishing. What will be the largest developments in the next three to five years?

Mike Volpi: AI is essentially trying to mimic human beings. Think about all the amazing things we do as humans; AI is going to do some of those on our behalf. Cars will be driving using AI, and robots will be doing some of the work that humans have done begrudgingly. Less talked about is the intellectual work that could be done by AI. When I was a kid, we weren’t allowed to bring calculators to a math test because it was considered cheating. There are going to be a lot of students using AI to do things like essay writing. We’re going to have to define what is cheating and what is real work, but it’s inevitable that AI is going to write and communicate on our behalf. We’re going to see a lot of interesting applications that produce text or audio, but the producer might be the AI.

Sarah Cannon: Given what has happened in 2020, we’ll have a lot more video data as input, not just text or images. On the policy side, we might have a new approach to regulation, and possibly a national strategy around it. Another area I’m interested in is the arts, and how AI could be used as a tool to create music, opera or film. We can debate how good those might be!

Bryan: One of the major changes we’ve seen over the past couple of years has been GPT-3. It’s the culmination of natural language processing research, resulting in a model that can generate new text based on the context it’s given. The output can be pretty difficult to tell apart from something written by a person. Tell us more about what this means for education, and for business.

Mike: Zoom out a bit, and AI learns to do generative work in the same way we do: we read a lot of documents, and we reason, deduce and extract from that reading. When asked to generate, we reconstruct that in our heads to come up with an interesting concept. But how much of that is genuinely creative versus reassembly of knowledge? Creativity is such a deeply human thing, yet we probably aren’t as creative as we think we are! If you read a book about leadership and then put it into practice, that’s not creative — you’re just applying somebody else’s ideas. Over the next decade we will find it harder to distinguish between human and AI creativity. Who knows if that’s good or bad. But the process being conducted by machines versus humans is actually quite similar. It just so happens that we do it organically with neurons, and machines do it electronically with bits and bytes.

Bryan: As an aficionado of art, what do you think this means for creativity and our relationship with machines?

Sarah: One definition of creativity is asking new questions. It’s one thing to leverage existing information to come up with something new. But then there’s asking the philosophical question of what it means to be human, or what it means to be human versus robot. As far as I know, GPT-3 can’t do that. Is there a Moore’s law for AI, some predictable rate at which we get breakthroughs? I also wonder what a product built with GPT-3 might actually look like. Is it a messaging product? Are products just built differently when this kind of technology is available? And when entrepreneurs are aware of the kind of technology they can leverage from OpenAI, how will they do things differently in their own businesses?

Bryan: Most people who are viewed as truly great creatives don’t tend to come from cookie-cutter backgrounds. It does feel like creativity is the combination of disparate experiences in a unique way. This is the argument for diversity of life experience within your team. So when we develop algorithms, how can we bring the same principle of diverse experience to them, through the data? Humanity is plagued by bias, and machine learning models suffer from a similar problem. It’s not just a function of the data — it’s also the process of defining an objective function for the problem you’re applying it to. This will be a huge topic of conversation in the coming years. How are you thinking about bias in the context of machine learning?

Mike: The word ‘bias’ generally has a negative connotation. We have to step back for a second and appreciate that bias is just a shortcut humans use when we’re too lazy to think something through. That can be useful for certain functions we perform in life, but it can have very negative consequences when applied in the wrong context. In AI, bias can happen when all the data points in a certain direction, and the model has never seen data outside that context. You make a shortcut decision based on a finite set of data that turns out not to be representative of the world. It doesn’t matter how good the machine learning algorithm is — if your data is biased, you’ll get biased outcomes.

But AI can’t interpret data it doesn’t see. We need to fix the data, but that’s super hard to do in our world. Biases permeate every part of human life, so if we want machines to be unbiased, we have to start by unbiasing ourselves.

Secondly, there’s the issue of model drift, where an algorithm that is supposed to produce a certain type of result just starts heading in the wrong direction. It’s particular to machine learning, and there are a lot of good tools and systems that safeguard against it. Within the next two to three years that problem will be mostly contained.
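To make the two failure modes Mike describes concrete, here is a minimal, purely illustrative Python sketch; the groups, thresholds and data below are hypothetical and are not drawn from any system mentioned in the conversation.

```python
# Minimal sketch: data bias and drift detection. All groups, thresholds and
# data below are hypothetical illustrations, not a real production setup.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Data bias: the training set over-represents group A --------------------
def make_group(n, shift):
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # each group follows its own rule
    return X, y

X_a, y_a = make_group(1000, shift=0.0)  # well represented in the training data
X_b, y_b = make_group(50, shift=2.0)    # barely represented

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))
print("accuracy on group A:", model.score(X_a, y_a))
print("accuracy on group B:", model.score(X_b, y_b))  # typically much worse

# --- Model drift: compare live inputs against the training distribution -----
X_live = rng.normal(loc=1.5, scale=1.0, size=(500, 2))  # the world has shifted
stat, p_value = ks_2samp(X_a[:, 0], X_live[:, 0])       # per-feature two-sample test
if p_value < 0.01:
    print("feature 0 has drifted; retrain or raise an alert")
```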

Bryan: Sarah and I once missed a flight because we were having such an in-depth discussion about the ethics of AI and its impact on society. What are the main considerations, Sarah?

Sarah: Everyone asks if our jobs will be replaced, and I don’t think that’s a simple question. Some people will be impacted dramatically by having their jobs replaced, and current policy is not well set up to address that.

Secondly, I think there are aspects of everybody's jobs that will be impacted by artificial intelligence. And then there are others who will benefit greatly from new opportunities in entirely new jobs. So the most significant impact is obviously on the people who will lose their income.

At a policy level, are there rules needed around synthetic datasets? The historical datasets we have are potentially concentrated on certain populations or certain countries, so would you introduce synthetic data to feed to the algorithm to correct for historical bias?

Thirdly, there’s an interesting geopolitical question of whether different countries will take different approaches to managing aggregated data. China has a certain approach, obviously. Then there are technology platforms like WeChat, which have multiple products and a lot of advantageous data, and more data means better accuracy in predicting results. That can create winner-takes-most dynamics with wide-ranging economic and geopolitical consequences.

Bryan: In some ways this is similar to the industrial revolution, a period when machines were taking over work traditionally done by people. There was both job displacement and job creation. Things that used to be handmade came to be produced impersonally in factories. What parallels are there between the industrial revolution and AI?

Sarah: The innovation of the industrial revolution was electricity, and then all of a sudden refrigeration is different, and homes are different. It leads to all these derivative impacts. The steam engine is the other historical parallel that economists talk about. AI is equivalent to one of those major shifts. It will change not just one sector but all sorts of products and markets. Is it equivalent to the birth of the internet? Economically, the global, cross-industry impact will be in the trillions of dollars. And if it’s in line with those historical paradigms, it will last generations.

Mike: I’m a technology optimist. Whether you look at the industrial revolution, mass production, electricity, the automobile, the ATM, the internet — in every one of those cases jobs did disappear. But since the industrial revolution, life expectancy has nearly doubled. There’s no question that we live more comfortable lives, and a lot of that is because of technology. AI is fundamentally no different. We worry that our jobs will disappear in ten years, but it’s the boring, repetitive jobs that are most likely to disappear. The more intellectual, stimulating, creative, fun and different your job is, the less likely it is to disappear.

Humans are extraordinarily adaptable; we find different ways to make ourselves useful and effective. AI doesn’t mean we’re all going to become programmers, but we are going to find a lot of ways to create value, and have more fulfilling lives. And in the short term, AI will take away the most painful, annoying jobs like lifting heavy objects, mindlessly sorting for hours on end, or driving for long stretches. I 100% believe that there will be more valuable and rewarding jobs to do. And when you hear people say that robots are going to take over the world, remember that they learn from us, from our Wikipedia entries on history and everything else.

Bryan: What's the boldest prediction you could make about how AI will change our lives?

Mike: We’ll be healthier, live longer, and be happier. That said, I think we’re going to face a whole new set of problems that we didn’t imagine before. We thought social networking, for example, would have some amazingly positive effects, which it probably has had. But it’s also had some negative effects that we’re figuring out how to deal with right now, as we saw in this current election cycle. AI will present similar problems. But I’m confident that we’ll figure them out, and on balance the trajectory will be extraordinarily positive for people.

Sarah: It will have a dramatic impact on borders. If I'm able to communicate with people all over the world and transact somehow, the world will feel more interconnected. Maybe AI could do diplomacy for us; negotiating is a hard problem. Also self-driving cars, within 50 years, will enable people around the world to be far more connected in a deep way.

There will be a lot more quantification of emotion, and we’ll be having relationships with AI. In 50 years, the main difference will be the inputs. We’ll just have a lot more data, and a lot more understanding of our emotions that will enable us to have deeper relationships than are possible today.

Bryan: A subtle change has happened with the move from traditional programming to machine learning. We used to live in a world where, to program a computer, you’d write instructions and tell it how to view the world. As we move towards models, that relationship has changed dramatically. Now the computer is telling you things about the world. How we navigate that will be very interesting.
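To make that contrast concrete, here is a toy Python sketch; the loan-approval rule, the data and the thresholds are all hypothetical and purely illustrative, not an example from the discussion.

```python
# Toy contrast between the two paradigms: a hand-written rule versus a model
# that infers the rule from data. Everything here is hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: we tell the computer how to view the world.
def approve_loan_rule(income, debt):
    return income > 50_000 and debt < 10_000

# Machine learning: the computer tells us what it found in the data.
rng = np.random.default_rng(1)
X = rng.uniform([0, 0], [120_000, 40_000], size=(2_000, 2))  # (income, debt) pairs
y = ((X[:, 0] > 50_000) & (X[:, 1] < 10_000)).astype(int)    # historical outcomes
model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[60_000, 5_000]]))  # the learned "rule" now makes the call
```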

I tend to share the optimistic viewpoint that it will make our lives better in the long term, though there will certainly be turbulence in that process. But I look forward to seeing what transpires. That’s what makes it all fun.

*This transcript has been edited for clarity and brevity.


Get more insights into the start-up landscape, more Founder interviews, and Company perspectives by subscribing to our channel.

In this post: Sarah Cannon, Mike Volpi, Bryan Offutt

Published — Dec. 18, 2020