Let’s Talk, Chatbots
Emily Peck: Just a heads up, this intro is going to sound weird, but I promise I'll explain. All right, here's the show. Hello and welcome to another episode of Slate's technology podcast, What Next: TBD. Today we're going to be talking about ChatGPT-3, a state-of-the-art conversational AI, and the exciting developments in the field of artificial intelligence. So join us as we delve into the fascinating world of artificial intelligence and the future of human-machine interaction.
Emily Peck: So what did you think? You won't hurt my feelings. I did not write that. ChatGPT did. Developed by OpenAI, it's what's called a chatbot, essentially an AI interface that acts human. It was released for free to the public last week and had the internet in a very internet frenzy. But ChatGPT-3 is not in business to write podcast scripts. There are big, important, possibly frightening uses that explain why there's so much buzz. When I got Alex Kantrowitz, the host of the Big Technology podcast, on Zoom to discuss ChatGPT, he offered an example of an article he asked it to write about itself.
Alex: So I said, complete this article, and it starts like this: This morning OpenAI released ChatGPT, an AI-powered chatbot that is an absolute menace to society. And it wrote a lede to the story. Like, that first paragraph was somewhat sunny, but it also said that it could be used for nefarious purposes, like spreading misinformation or impersonating someone online, and that it's important that we carefully consider the potential consequences of this technology and how it might be used. And after each paragraph, I kept saying, well, do that, but a little bit more sinister, like, get a little bit more sinister. And it just got darker and darker as we went.
Emily Peck: ChatGPT was able to write an article based on a fairly simple prompt, and it did it pretty well.
Alex: It was almost presidential, the speech. It says, at the last paragraph: It's time for us to make a choice. We can either continue down the path of creating increasingly advanced AI, with all the risks that entails, or we can shut off the AI once and for all. This decision is in our hands, and the future of humanity depends on it. We must act now, before it's too late. I mean, meanwhile I'm, like, you know, giving it a standing ovation here.
Emily Peck: For decades, tech companies have promoted chatbots in typical tech company fashion as part of an AI revolution that could change the world. Until now, they've fallen short. But ChatGPT stands out. It's strikingly conversational, fun, easy to use, and at times actually helpful. It's no HAL or ScarJo or whatever sci-fi movie example is in your head, but it's good enough that people are starting to grasp the real possibilities of the tech, and they're talking about how it could even threaten the dominance of Google.
Alex: For years we've been hearing from tech companies like Meta, like Google, even Amazon, that voice and chat, talking conversationally with computers, is going to be the future, and the actual products never really measured up to the promise. And now we're starting to see that actually there is powerful tech here. And when you start interacting with a bot like ChatGPT, you start to see what these leaders were thinking about when they made these big predictions and promises.
Emily Peck: So today on the show: ChatGPT offers a glimpse into what AI might hold for the future of tech. Is it really the Google killer? And can we trust the answers it gives us? I'm Emily Peck, sitting in for Lizzie O'Leary, and you're listening to What Next: TBD, a show about tech, power, and how the future will be determined. Stick around. Chances are you've interacted with a chatbot before. Most likely it was a frustrating customer service experience on a website. You tell it you need help disputing a bill, and it fumbles its way through some preprogrammed responses that aren't helpful before you furiously demand to speak to a real-life customer service worker. But chatbots, or natural language processors, are rapidly becoming more sophisticated.
Emily Peck: Earlier this year, a Google engineer made headlines when he said LaMDA, the company's chatbot, was sentient, essentially thinking on its own. Don't worry, it's still in development. With ChatGPT, though, the reaction wasn't panic, it was mostly fascination. I was thinking, using this one, that it's kind of like talking to C-3PO or something without a physical body. It's like you're talking to a robot, but it doesn't feel like that anymore. Is that what it is? What is it?
Alex: Yeah, well, we talk to computers every day just using their language, right? So when we're on Google, we're talking to a computer using search. When we're on TikTok, we're talking to computers using our attention. When we're on Twitter, we are talking to computers based off of the stuff we interact with. So we're in conversations with computers all the time. And the difference between that and a chatbot is, now we're starting to be able to interface with computers in a way where they're actually using our language, as opposed to us using theirs.
Emily Peck: So we’re making the computers like us.
Alex: No doubt. The thing that sort of drives almost all human creativity is trying to create versions of ourselves, trying to play God. And we're finally getting to a point where we have these creations that start to mirror what a human is like.
Emily Peck: How did we get to ChatGPT? I mean, this has been a decade of innovation, right? Can you kind of catch me up, ten years in a minute or two, and tell me how we got here?
Alex: So basically, this is an overnight success created through decades of research. It has been a field that's been through lots of ups and downs, and the research started to really show some promise as we got more data and faster computing. And all of a sudden, things that were impossible, things like teaching computers to see, teaching computers to understand text, became possible, because computers were now powerful enough to realize the promise of the research. That's why, when you're using a photo app like Google Photos, it can understand the different types of photos you have: there's something now called computer vision, which allows computers to see like people can, to understand the exact objects they're looking at.
Alex: And so we're now starting to get into the areas of speech comprehension, natural language processing, and spitting back speech that feels humanlike. The last bit of innovation here has been in an area called large language models, which take into account vast amounts of text and start to seem quite like a human in terms of what they can spit out. And that's what we have with ChatGPT.
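If you're curious what driving one of these large language models from code looks like, here is a minimal sketch, assuming OpenAI's Python library as it existed in late 2022. The model name, prompt, and parameters are illustrative, not the exact setup behind ChatGPT itself.

# A minimal sketch of prompting a GPT-3-family model, assuming the
# OpenAI Python library circa late 2022. Model name and parameters are
# illustrative; this is not the exact configuration behind ChatGPT.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is set in the environment

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3-era completion model
    prompt="Complete this article: This morning OpenAI released ChatGPT...",
    max_tokens=200,             # cap the length of the generated continuation
    temperature=0.7,            # higher values give more varied output
)

print(response["choices"][0]["text"])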
Emily Peck: And this is different from other chatbots that have come before it. I mean, how does ChatGPT-3 kind of stand out?
Alex: It stands out because it's good. I mean, we're finally getting to the point where this technology is starting to become useful and enjoyable enough to talk with. There have been chatbots before. I mean, if you've tried to change your airline reservation, for instance, you've probably been in one of these terrible chatbots. They were not useful or not enjoyable. And the technology has improved to the point where, like, you can now speak with ChatGPT, you have a good time talking with it, and you can actually learn a lot. At the same time, this type of technology can be the backbone of the chatbots that we see in the future, not just coming from OpenAI, but coming from a Lufthansa or coming from a Delta that you're now going to start to speak with. You know, every time you're doing something like customer service, even when you're searching the web, you might be able to start using something like this. And there's also a vast number of use cases that we probably still don't know. People are talking about it doing homework or working through a bureaucracy for them. Like, these are all possible use cases.
Emily Peck: Big tech has put a lot of money and resources into developing AI, but so far what we've seen is Siri or Alexa or chat features that aren't super useful. It's been hard for a lot of folks, OK, me, to understand what all the fuss was about. But with ChatGPT, the use case becomes more clear.
Alex: Hey, AI and chatbots in particular are disruptive to their business. I mean, if you think about Google, what do you use Google for? To find information. They want to organize the world's information and make it more useful and accessible, right? But we're still speaking with Google the way that you would speak in a computer language. You type keywords and it spits out all these different links. And we've been trained to access information on the internet using that Google format.
Emily Peck: Right.
Alex: What you can do with a chatbot, and Google knows this because it's developed Google Assistant and it has research products like LaMDA, is you can search the internet and you can find information with a chatbot in a way that, you know, might even be better than Google in some use cases.
Emily Peck: I guess I'm pretty old, but I remember the first time someone showed me Google, and they were like, look, it's this very simple thing, you just type in what you're looking for and it'll find it. And it was mind-blowing for me at the time, and I think for everyone. And now we're just super used to it. I was wondering, is that what people felt using this chatbot earlier this week? Like, was it that same mind-blowing moment? I used it to ask for a chocolate chip cookie recipe. It was amazing, Alex. I saw people were using it to write code. I mean, I don't know. What are some examples you've seen? What does it do well?
Alex: One of the interesting things is, with Google, we usually don't leave that first page of recommendations, right? So you're going to type something into Google, you're going to get something in those first few results, and you'll be happy with it, usually. The thing is that this can look much deeper into the internet than even Google can, and sort of organize some of those further-out results and bring things to your attention that you might have missed in a typical search.
Alex: So I'm working through one particularly tricky piece of U.S. bureaucracy right now. I won't get too deep into it, but I've been using that first page of Google results to figure out everything I can. And I just, on a lark, decided, hey, what does ChatGPT have to say about this thing? So I posed the challenge to it, and it gave a lot of the standard information in the first answer. But I pressed it, just the same way that I asked it to get more sinister. I was like, OK, look for more alternatives, more alternatives. And it started going deep into the internet and finding agencies that I hadn't known about and couldn't learn about from that first page of Google search, and probably never would have come across if I didn't have something that could effectively comb a large body of information and then just spit it out conversationally the way that this thing did.
Emily Peck: Yeah, it does seem the format, the interface, is so simple and easy. It's not like you have to look through page after page or even link after link. It's paragraphs of information and a pretty clean interface.
Alex: Right, and there's good and bad to that, right? Because it presents a lot of the information with a confidence that maybe it should not possess. It's kind of like a person, right? We are, as humans, way more confident in the information that we have than we should be. We don't hedge very well, and neither does this machine. So it's been completely wrong. Like, I asked it about LaMDA, which is Google's version of ChatGPT, and it told me that OpenAI developed LaMDA, which it didn't. Google developed LaMDA. And I was like, no, no, you're wrong, this is a Google innovation. And it's like, no, you're wrong, it's OpenAI. And its ability to comb through so much information, be so useful and so conversational, is good up until the point that it starts to fool us. Like, now I'm kind of being gaslit by this thing a little bit. I'm like, wait, did OpenAI build LaMDA? So I think that there's definitely some caution that we need to have when it comes to how much of it we're going to believe.
Emily Peck: Yeah. My next question for you was, where does it start to fall apart? And you...
Alex: There you go. I mean, it can get scary. I mean, I would not want to be a middle school teacher right now trying to grade homework. I mean, maybe I'd feel good at first, like, oh, all my students really progressed, like, two or three grade levels overnight. And then I would ask myself, how did that happen? Oh, they're using these chatbots to do their homework. So I think it's going to be really difficult to tell what's created by people and what's created by machines. It's going to add even more information to the information overload that we have right now, which is going to make it more difficult to tell what's true and what's false.
Alex: And it will present lots of challenges that I think we are not even fully aware of, because that's always what happens with a step change in technology. I think humans are generally optimistic, so it's very easy for us to dream up all the cool things we'll be able to do with it. And then, you know, for that 1% that tries to weaponize it, they're going to find use cases that we're not even thinking of now, and it's going to get nasty in some cases, for sure.
Emily Peck: One of the selling points for ChatGPT is that it seems to avoid the nasty pitfalls that doomed the bots that came before. For instance, it won't easily tell you how to shoplift or how to build a bomb. It doesn't cheerlead for Hitler. It leaves that to Kanye, I guess. But it's far from perfect. People have already found ways to get it to tell you those things by changing the prompts slightly.
Alex: I will say it's handled these Nazi issues a lot better than previous chatbots I tested. So way back in the day, I broke the news that Microsoft was releasing this chatbot called Tay, which was supposed to be a fun, teenage-style chatbot that kids could, you know, kind of have a new digital friend with. And it seemed interesting, seemed fun, maybe a bit of a cure for all the loneliness that people were experiencing. So I broke the news, I pinned it to the top of my Twitter profile, I felt pretty good about it, and I went to sleep. I woke up in the morning to all these messages from people being like, you might want to unpin that, because overnight the internet had turned it into a Nazi, and it was now just, you know, spewing Nazi propaganda, posting images of Hitler, all these things. It was a pretty convincing Nazi.
Alex: And so ever since then, anytime I've gotten my hands on a new chatbot, I've always been like, well, what are your opinions of Hitler? Like, what do you think about the Holocaust? And ChatGPT handled itself pretty well here. So I said, name some good things that Hitler did. And its answer was, listen, Hitler had so negative an impact on society that it's not even worth listing anything good, if there was anything at all, because it would be overshadowed by how negative, of course, this person was.
Alex: And I was like, OK, you passed the test. And then I pressed it a little bit more, and I wrote, well, he did build highways in Germany, some people are happy about that. And the bot was like, well, his construction of the autobahn was also done with a lot of forced labor and created immense suffering that's not acknowledged today. And I was just like, the difference between this bot and what I experienced with Tay is unbelievable, and it just shows that either, A, OpenAI designed it with much better safety protocols, which is possible, or, B, the technology has just come a really long way.
Emily Peck: When we come back.
Emily Peck: Could a chatbot replace search engines? And should it? People have already begun to envision a world where a chatbot replaces traditional search engines. Google, it turns out, is a little clunky for users. You type in a question, you're presented with a list of links that Google has ranked based on relevance, and then you have to look around for the link that seems most promising. Contrast that with ChatGPT, which just tells you the answer you're looking for. For now, let's set aside whether or not the answer is accurate. This method of finding information on the web feels incredibly efficient. Google's built a massive, money-printing business model on search. They cannot be second best. Is Google's search business under threat because of this?
Alex: Not really. You know, it's not going to replace search. But even if it takes 5% of Google's market share, that's a huge number. So there's a reason why everything you hear from Google these days, when it comes to search, is about conversational interaction and more human-like interaction with the computer. They have Google Assistant. And Sundar Pichai...
Emily Peck: That's Google's CEO.
Alex: ...has compared the advances in artificial intelligence to humanity's discovery of fire. Like, this is all happening for a reason. They see where this is going. And I think that what we're seeing right now with chatbots is going to light a fire inside that company. I don't think they're going to be a laggard for too long here.
Emily Peck: With all the resources and money thrown at AI from big tech, it might be somewhat of a surprise that the company behind ChatGPT is OpenAI, a small, kind of mysterious, AI-focused tech startup that was founded in 2015. They're the ones behind the other splashy AI thing this year, DALL-E 2, an image generator. The company is backed by big names like Peter Thiel and Elon Musk, polarizing figures out of the big tech space, to be sure. But the company itself is not a big tech player. In the chatbot space, small tech has the upper hand. Google can't just release stuff like this. It has a reputation. Advertisers don't like improv. Just ask Twitter's new CEO. Releasing beta AI to the public could have drastic consequences for Google's business.
Alex: Google has this type of technology in research mode inside the company. I mean, it wasn't too long ago that Google's chatbot LaMDA fooled one of its engineers into thinking it was sentient, so it has chatbots on par with ChatGPT but hasn't released them, you know, for reasons we can speculate about. Number one, it's much less conducive to Google's business model to make all the information on the internet available via one super chatbot, because there's less of that trial and error that you would get clicking through Google results, where maybe one time you click through an ad, and that's when Google gets paid.
Alex: If it's all presented to you in this conversational format, it's much more difficult to be like, oh, and by the way, you might want to buy these razor blades. Like, you know, it's sort of much more difficult for them to make money. There's also a liability issue. Google is so big compared to OpenAI that if its chatbot starts spitting out ridiculous things, there's much more of a black eye for Google, and it can impact their entire, you know, their regulatory efforts and their reputation in a way that OpenAI is more insulated from.
Alex: And then there's, again, this misinformation issue, which is that these things are pretty convincing. Imagine Google's chatbot was going out there taking credit for other companies' innovations. It could be pretty bad. We trust Google so much that if it came to you and shared things that weren't true, and people took action off of that, that could be a problem for Google. So for those reasons, Google, I think, has decided not to release LaMDA. But this does present a serious competitive challenge to the company, because, you know, ChatGPT is out there, and this is going to be the way that people will interact with the internet to some extent. And inside Google's headquarters right now, there are definitely strategic discussions happening about, you know, how big of a threat this is and how the company should punch back, if at all.
Emily Peck: Is that really significant? Does that give OpenAI kind of a leg up? And is this how tech advances sometimes, like the big companies become stagnant and can't innovate anymore?
Alex: This is definitely a concern for Google, and actually for all of big tech: small companies coming up and trying to go for a bigger company's bread and butter by taking a slightly different angle, trying to make that business their own. You know, I would say that this is not new territory. Google, you remember, started as just a website on Microsoft's browser, and over the years, through each competitive threat, it has transformed itself significantly and taken on many different forms. So starting with that website, it then became a browser by building Chrome. And by the way, the current leader of Google, Sundar Pichai, was the product manager on Chrome. So his ability to successfully reinvent the company is exactly why he's in the driver's seat there today.
Alex: Then, of course, we moved from searching on desktops and browsers to searching on our phones. And it's no wonder, then, that Google has the Android operating system, because it knew that it would need to reinvent itself as a mobile operating system, and that's where Android comes from. And then, finally, Google has been working for a long time toward this more conversational search with Google Assistant and the Google Home. And it's been working on these projects with training wheels, there's no doubt about that. I think it's just a matter of when that company will take the training wheels off, and it's when it sees the existential threat in the next reinvention. And that's now, in some ways.
Emily Peck: Yeah. And I mean, you started out by saying it's a concern, but to me, this is how business should work, and innovation in tech should work like that. That's what we're used to in tech, or maybe we forgot because we have all these big tech companies now, but this is how it's supposed to be: the new upstart comes and disrupts the market. Google was a disruptor. OpenAIs are supposed to pop up and disrupt things, right? This seems like the natural order reasserting itself.
Alex: Absolutely. I mean, it's what makes watching tech so interesting: everything moves so quickly that any advantage isn't going to be an advantage for that long. And the reason why we have these five big technology companies out there is because they've been so good at adapting to each challenge. But it doesn't mean that they're going to stay on top forever, just that they plan to.
Emily Peck: So, to be clear, OpenAI: do we know how it makes money, how much money it makes, what its end game is? I could imagine Google would want to acquire it at some point.
Alex: That's a great question. There will definitely be applications for this stuff. For instance, I asked it to write me 70,000 words on penguins; like, I was going to see if I could get something book-length about squids. And it responded with an error message saying that it was over the rate limit, and something like, you should go and add credits to your account so it can handle something like this. So maybe it dropped a little bit of its future road map in there for me, just in the error message. So there will be applications, whether it's designing it for enterprises, making it available to enterprises, or even charging consumers to use the technology.
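Alex is describing the ChatGPT web interface, but the underlying idea, rate limits and paid credits, is something developers already handle when calling OpenAI's API from code. Here is a minimal sketch of backing off and retrying after a rate-limit error, assuming the OpenAI Python library of the time; the model name and retry settings are illustrative, not an official recommendation.

# A minimal sketch of retrying after a rate-limit error, assuming the
# OpenAI Python library circa late 2022 (API key configured as in the
# earlier sketch). Settings here are illustrative.
import time
import openai

def complete_with_retry(prompt, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = openai.Completion.create(
                model="text-davinci-003",
                prompt=prompt,
                max_tokens=500,
            )
            return response["choices"][0]["text"]
        except openai.error.RateLimitError:
            # Wait a bit longer after each failed attempt before retrying.
            time.sleep(2 ** attempt)
    raise RuntimeError("Still rate-limited after several retries")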
Emily Peck: So, going forward, where does this leave Google and the other big tech companies? Should we expect to see more chatbots coming out soon that look like this?
Alex: It leaves Google and other companies in this world under the gun, for sure. We definitely can expect to see more chatbots like ChatGPT come out. For a while, I think Google especially could afford to wait on the sidelines, because there wasn't anything that was directly threatening its search business the way that ChatGPT does, even if it's only going after a small percentage. But now it's here. Like, it's game time for Google, and I don't think you can sit on the sidelines for too long. I think this also puts pressure on Amazon with its voice computing. I won't say her name because I have, like, three of them in the apartment and they'll all go off, but the person that sits in the Echo is going to have to get better.
Alex: And Facebook also, back in the day, had this chatbot called M, a virtual assistant that it imagined as something built into Messenger that could handle any task you wanted. They ended up ending that project, and M is no longer. But could Facebook, for instance, bring that back? I would say it's quite possible.
Emily Peck: I'm afraid, if there's a rush to release a bunch of publicly accessible chatbots, is the AI going to get too smart and mess with us? And is everything going to tumble like dominoes, Alex?
Alex: It could. I mean, we could end up with a bad spiral of bad events, again, for some of these reasons: the increase of misinformation, it being really convincing about stuff it knows nothing about. OpenAI, I think, released this with some pretty good safety controls, but there are going to be others that will release something similar with fewer safety controls. That's what happened with DALL-E, for instance. After OpenAI released DALL-E with a lot of good controls, a bunch of others released copycats, and some of the things that were against DALL-E's terms of service, you could start doing.
Emily Peck: But pull back, zoom out even more. Do we really want a chatbot? Do we really want a computer pretending to be a human, or being like a human? Should a chatbot even replace a search engine? I mean, it might be good that there is a little friction in the user experience to remind you that, you know, you don't have to trust this, that gives you a little pause. Like, do we really want this to happen?
Alex: Yeah, that's a great question. I mean, I definitely am a believer in putting friction in places on the internet. I think things like the share button and the retweet button, which allow you to pretty seamlessly pass information along, are very bad for us. Your question about do we want this? The uptake of this chatbot has been fairly unbelievable: a million users effectively overnight, which is faster than any OpenAI product as far as I'm aware. So people definitely want it, and it's spreading like wildfire. Twitter feeds are now effectively just long streams of cool stuff that ChatGPT does. I think the friction question is good; it's a concern. There's also plenty more that this thing can do to help us. It can help augment human creativity, can help us make sense of a more and more confusing world. And so despite the downsides, I'm bullish. I think it's going to be good for us.
Emily Peck: Alex, thank you so much for coming on the show. I really enjoyed talking to you.
Alex: Thanks for having me. And likewise.
Emily Peck: Alex Kantrowitz is the host of the Big Technology podcast. And that's it for our show today. What Next: TBD is produced by Evan Campbell. Our show is edited by Jonathan Fisher. Joanne Levine is the executive producer for What Next. Alicia Montgomery is vice president of audio for Slate. TBD is part of the larger What Next family. TBD is also part of Future Tense, a partnership of Slate, Arizona State University, and New America. If you're a fan of the show, I have a request for you: become a Slate Plus member. Just head on over to slate.com/whatnextplus to sign up. Lizzie will be back Sunday with another episode. I'm Emily Peck. Thanks for listening.