Did Money Corrupt an A.I. Utopia?

S1: On December 11th, 2015, Silicon Valley did what it does best: announced a new kind of company that was supposed to change the world. Its backers included Sam Altman, a near-legendary investor and president of the startup accelerator Y Combinator, and Elon Musk, who is, you know, Elon Musk.

S2: The company was called OpenAI, and the idea, simply put, was to get artificial intelligence right.

S3: It’s something that both Altman and Musk talk about a lot. Here’s Altman last year.

S4: What does it mean to build something that is more capable than ourselves? Like, what does that say about our humanity? What’s that world going to look like? What’s our place in that world? How is that going to be equitably shared? How do we make sure that it’s not like a handful of people in San Francisco making decisions and reaping all the benefits? I think we have an opportunity that comes along only every couple of centuries to redo the socioeconomic contract. How we include everybody and make everybody a winner, and how we don’t destroy ourselves in the process, is a huge question for OpenAI.

S3: The answer to that question was, as the name suggests, openness. They promised a kind of radical transparency. Employees who were researching neural nets and machine learning were encouraged to publish their papers and their code. If company research led to patents, it would share them with the world and not keep them under lock and key. This was the defining ethos of OpenAI, and since that launch back in 2015, they’ve become a dominant player in the world of artificial intelligence.

S1: I follow OpenAI’s work and have been following their work because they are one of the top A.I. research labs in the world. That’s Karen Hao, senior A.I. reporter at MIT Technology Review.

S5: So it seemed like the time was ripe to actually start looking into the inside story of how they operate.

S1: So you made arrangements to spend some time embedded in their office. Tell me about that experience.

S6: So I reached out to them saying, hey, I want to do this profile of you. I think the time is right. I wanted to have a chance to understand the culture better, eat lunch with them, hang out with them a bit, just like roam around and get a feel for the space. And they were like, great, come stop by our office. We will set up interviews with all of the main technical people at the company.

S1: So in August, Karen flew from Boston to San Francisco and went to the corner of 18th and Folsom to a three story structure with the words Pioneer Building painted on the side.

S7: And I mentioned this to my editor after my first day.

S6: I said, I feel like they’re giving me all of the access without any of the access. So, the building has three floors, and the first floor is the common area and the dining room.

S7: And I was only allowed to stay there, so I couldn’t go to the second or third floor where, like, all the researchers sat and all of their interesting research is done. And then the first day, they were like, you’re good, you’re going to be able to eat lunch with the team. And like 30 minutes before lunch, they were like, actually, we want to take you out for lunch. And I later learned from some sources there that it was because there was an all-company meeting happening at the time and they needed me out of the office. But they actually didn’t mention that. It was just, change of plans.

S3: Being taken out of the office like that was surprising. As a reporter, especially a reporter writing about technology, Karen doesn’t usually assume she’s going to get a lot of access to the companies she covers. But with OpenAI, she expected things to be different.

S6: I immediately was like, this is not at all open. I am not experiencing the transparency that they purport to have. And even just that small thing like that small misalignment made me think, I wonder what other misalignments there are.

S3: Karen spent the next six months trying to understand these contradictions. She did nearly three dozen interviews, and she found something much messier and more complicated than she was expecting. Today on the show: the story of an A.I. company that wanted to put purpose over profit, and how that went sideways. I’m Lizzie O’Leary, and you’re listening to What Next: TBD, a show about technology, power, and how the future will be determined. Stay with us.

S1: We hear the term A.I. a lot. It gets thrown around. In the context of our conversation today, how should we be thinking about what A.I. is?

S6: So the field was started 70 years ago at this point, and at the time it was conceived as the study of figuring out how to replicate human intelligence in computers. There were two main theories that came out of this initial founding of the field. One theory was: humans are intelligent because we can learn. So if we can replicate the ability to learn in machines, then we can create machines that have human intelligence. And the other theory was: humans are intelligent because we have a lot of knowledge. So if we can encode all of our knowledge into machines, then they will have human intelligence. And these two different directions have kind of defined the entire trajectory of the field. Almost everything that we hear today is actually from this learning branch, and it’s called machine learning, or, more recently, deep learning.

S1: And in this story you wrote, the conversation is focused on AGI, or artificial general intelligence. And I’m wondering if you can describe the difference between the two things and why AGI matters in the eyes of the people you were reporting on.

S5: I think now a lot of people use A.I. to mean the A.I. systems that we have now, or the algorithms we have now, whereas AGI is supposed to refer to what we’re trying to achieve long term.

S2: Pursuing AGI, particularly with a long-term view, was the central mission of OpenAI. And yeah, there was the traditional Silicon Valley talk of changing the world, but also this sense that if AGI was done wrong, it could have very scary consequences.

S5: So Elon Musk has this idea, and he’s constantly quoted about this, that we don’t, as a society or as a human race, really know what the impact of reaching AGI will be.

S8: Not really all that worried about the short term stuff like narrow AI. It’s not a species level risk, whereas digital superintelligence is.

S6: And he thinks that there is a possibility that it will be devastating rather than beneficial. If humanity collectively decides that creating digital superintelligence is the right move...

S8: Then we should do so very, very carefully. Very, very carefully.

S1: It makes it sound like he’s talking about like Skynet from Terminator, right?

S6: Yeah. It’s very science-fiction-sounding. And this is a pretty controversial opinion. But one of the initial researchers who was part of OpenAI early on, Pieter Abbeel, who’s a professor at UC Berkeley, mentioned that the reason why he joined is because Elon Musk was pretty compelling in saying, you know, even if it’s a 0.001 percent chance, it’s still a possibility. So shouldn’t we be thinking about it? And that was persuasive enough for him to join the company.

S2: Can you give me some examples of projects that OpenAI is working on, or problems that they’re trying to solve?

S6: Yes. So there are many theories within the field of A.I. about what can help us get to more advanced A.I. systems. So each team is kind of playing out one hypothesis. One of these hypotheses is the idea that humans actually learn a lot from language. And so there is this one team that specifically is trying to develop advanced A.I. systems purely through giving them a ton of language data. Another theory is that humans have intelligence because we have physical bodies, and our physical bodies allow us to move through the world and pick up objects and explore objects. If you’ve ever seen a baby playing with toys, they’re going through this intense exploration phase where they’re just, like, biting everything they see and learning from that experience. So there’s another team, the robotics team at OpenAI, that’s testing that hypothesis.

S9: We’re trying to build robots that learn, like humans do, through trial and error, based on just manipulating objects and playing and fidgeting and all of that stuff.

S5: Can we develop advanced AI capabilities from just that?

S9: What we’ve done is train an algorithm to solve the Rubik’s Cube.

S5: One-handed, with a robot hand. The way that the field is heading is towards systems that combine all of these approaches. And that is kind of what OpenAI wants to do: as they test each of these hypotheses and these teams start getting better and better results that prove out these hypotheses, they want to merge all of these things together and create one mega-system that learns from language and has a robotic body.

S10: And this is the vision that OpenAI launched with.

S11: The idea was that if you’re going to make a complete system that learns language and has a robotic body, it probably shouldn’t be done behind closed doors. And Silicon Valley leaders agreed: between donations from Musk, Altman, and Peter Thiel, it launched with a billion dollars in funding. But perhaps the defining feature of the company, and what made it different from just about every other major player working on A.I., like Google’s DeepMind and Facebook A.I., were the expectations around that money. OpenAI would be a nonprofit. In its first-ever announcement, it declared that it would be unconstrained by a need to generate financial return.

S7: So there’s this huge debate that’s happening in the A.I. research community right now.

S6: And I think that’s being reflected in a lot of other scientific research communities of what does it mean when your funding is coming from corporate sources rather than government sources?

S12: Because almost all of the best research is now coming out of Google and Facebook and Microsoft. And it really changes the direction of our research, because at these labs, ultimately, the type of research you do is more focused on short-term gains. So that was what OpenAI was trying to go up against. They were trying to create this new funding model as a nonprofit, to be a counterpoint to the corporate-funded models, while also acknowledging that government funding is just lacking. And so they were like, here’s the perfect solution: we don’t have the government funding, but we also don’t have to take the corporate dollars. We can occupy this third space as a nonprofit and, instead of being constrained by quarterly financial goals, really pursue long-term, ambitious work that might not have returns for 10, 20, 30 years. So that was the original promise that OpenAI sort of made when they started as a nonprofit.

S1: And so if we fast-forward to 2017, a bunch of core members start drafting this internal document to lay out their path to AGI. But they run into a pretty big problem. Yeah.

S12: So they started mapping out the breakthrough research results that were happening in the field. And they realized that all of these other corporate labs, the way that they were getting to breakthrough results was by massively increasing the amount of computing power that they were using and the amount of data that they were using. And that is very costly. So they realized quickly that staying a nonprofit probably wasn’t financially feasible if they were going to take this approach of competing with these corporate labs and trying to outpace them in the breakthrough results that they were achieving.

S1: And so last year.

S13: Today, they decided to raise money. Microsoft going all in on A.I.: the world’s biggest public company announcing an investment of $1 billion in Elon Musk’s OpenAI to build artificial intelligence that can...

S1: The company announced that it had secured a second billion-dollar investment, this time from Microsoft. That money would go to fund a for-profit arm of the company. A for-profit arm of a nonprofit company. That was one of those disconnects that Karen wanted to dig into when she went out to San Francisco in August.

S5: It rattled a lot of employees. And when it happened, they had internal meetings where they talked about, you know, this is not going to change what we do. Like, Microsoft is really aligned with our values. They’re, like, on board with this beneficial AGI goal. We don’t have to change the tack of our research to specifically focus on commercial applications.

S6: And for a while that was true. And recently that is no longer true.

S14: So sources told me that now Sam Altman has been talking about how his vision for 2020 is much more focused on actually trying to make some money. His message, as it was communicated to me, was: we now need to focus on doing research to make money, and not the other way around. And I think one of the reasons, I would assume, is that they kind of need to start giving Microsoft some payback for this huge billion-dollar investment that they made.

S15: How would you differentiate them, then, from something like Google’s DeepMind? You know, are they all that different, even though they were founded as a nonprofit?

S16: Honestly, I don’t think they are. Some people might disagree with me on this, but I do think that OpenAI and DeepMind are almost interchangeable.

S6: And I think DeepMind ultimately is more tightly interlinked with its owner, because it is owned by Alphabet, whereas OpenAI has an investor relationship with Microsoft. But yes, I would consider them to be pretty much the same at this point.

S2: You know, at the really macro level, what is all this for? These people talk about changing the world in the way that Silicon Valley people often do.

S17: But what’s the goal?

S6: There are a lot of problems that humans face that we have failed to solve, like climate change, global hunger, global poverty. And the really rosy view of A.I. is that maybe, if we can develop intelligence that is unconstrained by the inefficiencies of communication, or the mistakes that humans make, or our need for sleep, and all of these other things, then maybe these other intelligent counterparts can really help us solve these challenges that we really need to solve.

S18: That’s the very perfect, idealistic view. And the reality is that technologies’ impacts are always unequally distributed. And one of the things that we’re already seeing with A.I. is that it really concentrates power in the hands of the few, which is kind of the antithesis of this grand vision to benefit everyone. And it’s pretty unclear right now how to actually prevent that from happening.

S11: Karen Hao, thank you so much for talking with us.

S19: Thank you so much for having me.

S20: Karen Hao is a senior A.I. reporter for the MIT Technology Review. All right, that’s it for today. What Next: TBD is produced by Ethan Brooks and hosted by me, Lizzie O’Leary. And it’s part of a larger What Next family. TBD is also part of Future Tense, a partnership of Slate, Arizona State University, and New America. You should go back and listen to Wednesday’s episode of What Next. Mary talks about police shootings in Colorado and how they are fueled by a toxic combination of guns and meth. All right. Mary and her team will be back on Monday. Thanks for listening. Talk to you next week.