Why hasn't Google launched a chatbot?
The real AI breakthrough is as likely to come from a tiny startup as a tech titan
One of the hottest ChatGPT takes is that it will end Google’s dominance of search. That’s a bit foolish. But changes to how companies make money from links are coming, because AI chat will change how users interact with data, and in search, the user interface is the business model.
Google’s AI is unparalleled. Not only do they have a huge amount of talent, they also have a vast amount of computing power in their data centers, and access to more data on which to train their algorithms and run experiments than anyone else.
They (along with Apple) also make their own chipsets. And not just those in servers: Google’s Tensor chip (and Apple’s M1 chip) both turn ordinary consumer hardware into ferociously powerful edge devices, and with models like Stable Diffusion running pretty comfortably on a modern laptop, that’s a lot of distributed edge computing. If you think Google and Apple didn’t see this coming, maybe ask yourself why they’ve been building AI silicon for years in anticipation of this moment.
Ethical and business model obstacles
Google’s got a great chatbot. As I’ve mentioned before, it’s good enough to have convinced an employee it was sentient months ago. But Google hasn’t launched it. Why not?
The real obstacles are ethics and unclear business models.
The killer app for Google Glass
Remember Google Glass? A weird headset with a camera and a network connection? The killer app for Glass was obvious: A floating bubble above someone’s head, reminding you of their name, employer, and more. Salient facts to spark conversations: How is Joey doing at Cal State? Glad you’re up and about after that skiing accident. Perfect recall for every meeting or conference; mutually creepy speed dating; a policing panopticon.
Google banned facial recognition on Glass, and Glass went from promise to punchline overnight.
Google might have moved on from its “Don’t be evil” motto since the days of Glass, but that DNA is still ingrained. I know dozens of really smart, really cautious, really ethical people at the company who are genuinely trying to do the right thing in the face of overwhelming technological change.
Even if it isn’t trying to be ethical, Google needs us to trust it, or its entire empire collapses. That means being cautious. The company has learned caution after years of costly litigation, because it’s a lawsuit magnet. It has such a wide range of products, from phones to cloud computing to advertising to office suites, that its legal attack surface is huge.
Google’s mission is to “organize the world's information and make it universally accessible and useful.” By contrast, OpenAI’s mission is to “ensure that artificial general intelligence benefits all of humanity.” Those are markedly different goals.
OpenAI is employing a few tactics to ensure that AI power isn’t developed in secret or exploited by a powerful elite:
Put things out there as soon as possible. This is the tech version of ripping the band-aid off. Paul Kedrosky called ChatGPT a “pocket nuclear bomb” launched “without restrictions into an unprepared society.” Whether or not you agree with him, there’s no denying that most of the tech world spent the holidays thinking about ubiquitous, smart chatbots.
Create an investment ecosystem. OpenAI’s CEO, Sam Altman, ran Y Combinator, one of the top accelerators. OpenAI has a VC arm to invest in “startups with big ideas about AI.” Those startups get access to its best, most recent models, like GPT-4. This is the tech version of letting a thousand flowers bloom, and a way to avoid concentrating power.
Learn what’s working and iterate, fast. ChatGPT was the most successful beta in history. A million people gave OpenAI free labor, testing and prompting it, trying edge cases, and documenting the many ways it can be used. My feeds, and those of everyone I talk to, are filled with self-appointed experts stringing together Twitter threads of clever prompts. There was no oversight, just a few disclaimers.
This can be summed up as “letting the Genie out of the bottle because THE OTHER GUYS ALL HAVE GENIES.”
Google, on the other hand, is a big, public, for-profit company. The risks of launching an unfettered chatbot far outweigh the possible advantages of being first to market. People rely on Google for medical information, travel directions, and many other vital parts of their lives. While it’s a profitable company, it’s also running a public good. One wrong step in the new world of consumer-facing AI could get it sued for falsehoods in the results AI generates, or for the use of training data it doesn’t own. AI is a legal morass. And Google is a company that gets sued by entire continents.
History doesn’t repeat, but it does rhyme. And Google’s chatbot rhymes with Glass.
How do you monetize a consumer AI chatbot?
Clearly businesses will pay for AI, as the use cases are endless. But what about consumers?
In the early days of the Internet, there weren’t any ads, just university servers and well-intentioned sysadmins paying to run blog posts or federated servers. But somewhere along the way, as the bills piled up and companies set up websites, we decided content should be free.
Having learned that TV and radio were ad-driven, we concluded that the Internet should be, too. And now we’re hoping chat will be.
ChatGPT costs OpenAI somewhere between $100K and $3M a day to run (estimates vary widely). Altman called the costs “eye-watering” and the company has said it plans to terminate the free service at some point—presumably, when we’ve all started to rely on it and tested it for them.
OpenAI is a scrappy nonprofit (albeit an immensely well-funded one). It needs to meet the goals of its board and benefactors. It can run a big experiment and burn money for a while. Google (well, Alphabet), on the other hand, is a public company with shareholders and investors to appease. It needs a consumer business model.
What might that look like?
First, let’s assume Google waves a magic wand and turns search into a chatbot overnight. (This isn’t much of a stretch, since Microsoft has an exclusive license to OpenAI’s tech, and its plans to incorporate that tech into Bing are one of the worst-kept secrets in searchland right now.)
Let’s also pretend for a moment that those chat results are perfectly accurate, somehow cross-checked against search results, or against what remains of Metaweb, Google Scholar, and other tools.
How do you weave in advertising that doesn’t suck?
An easy (but wrong) answer is to insert hyperlinks into the results it sends back. You’d get text, but certain keywords would be clickable. Consider a question about selecting the right audio mixer. On the left are Google’s current results. The top three results are paid ads; the next two are videos on YouTube, a company Alphabet owns. There’s no informational sidebar. On the right is the response from ChatGPT.
Where do the ads go?
None of the words in the chatbot’s response are obvious calls to action, perhaps because I didn’t ask, “what mixer should I buy?” or “where can I buy a mixer?”
(And let’s be honest, I’m probably going to buy it on Amazon or Shein or Temu. For the first time in a decade, Google and Meta’s combined ad market share dipped below 50%—because people are searching within online retail sites, rather than asking Google where to find something. So Google is losing queries with purchasing intent anyway.)
Maybe the company can go the way of product placement. Make some of those words clickable: Sell the links to “XLR cable” and “portability” to vendors who want them. How long until this steers the contents of the results?
Let’s say Mackie wants to sell me an Onyx 8 mixer, which is unique because it has a USB audio channel for each physical input. How likely is an ad-driven chatbot to add that as a bullet to the results? Will users trust the response when they know paid product placements are included?
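To make that product-placement mechanic concrete, here’s a minimal, purely hypothetical sketch in Python: a table of sponsored phrases (the vendors, URLs, and function names are all invented for illustration) and a pass that turns the first occurrence of each phrase in the chatbot’s answer into a paid link. None of this reflects any real Google or OpenAI system.

```python
import re

# Hypothetical table of sponsored phrases: which vendor bought which keyword.
# Every vendor and URL here is invented for illustration.
SPONSORED_PHRASES = {
    "XLR cable": "https://example-cables.test/xlr?ref=chat-ad",
    "portability": "https://example-mixers.test/compact?ref=chat-ad",
}

def inject_product_placement(answer: str) -> str:
    """Wrap the first occurrence of each sponsored phrase in a paid link."""
    for phrase, url in SPONSORED_PHRASES.items():
        pattern = re.compile(re.escape(phrase), re.IGNORECASE)
        # Only the first match gets linked, preserving the original casing.
        answer = pattern.sub(
            lambda m: f'<a href="{url}">{m.group(0)}</a>', answer, count=1
        )
    return answer

if __name__ == "__main__":
    reply = ("For a small podcast setup, look for a mixer with at least two "
             "XLR cable inputs and good portability.")
    print(inject_product_placement(reply))
```

Even this toy version makes the conflict obvious: once “XLR cable” pays better than “USB audio channel per input,” whoever runs the model has an incentive to nudge which phrases show up in the answer at all.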
Monetizing a chatbot is even harder when the question is less obviously commercial—say, a baking recipe:
Any of those cookie ingredients could lead to a grocery store. The chatbot might also suggest that rather than baking, I order a delivery of cookies. And sure, with a chatbot, Google’s ad platform might learn from my cookie search that I’m interested in baking, or not very interested in my waistline. Which is great, but doesn’t really help if it doesn’t have ads to sell.
The business model’s still up for grabs
Conversational AI that’s trying to sell you something will be a dreadful experience.
If it’s up front about the product placement, it’ll be the digital equivalent of that friend who’s absolutely, definitely, not part of a multi-level marketing scheme but would you like to try the skin cream please and maybe you want a side gig?
If it tries to hide the fact, the results will suffer, and we’ll stop trusting it.
Back when we used directories to navigate the web, Yahoo was king. Google won by changing the business model.
The company brought out PageRank (both a ranking of web page relevance and a nod to the surname of its inventor, Larry Page), which made directories and hierarchies irrelevant: Why file and categorize things when search gave better results? And then it launched AdWords, a real-time ad auction model that worked perfectly with keywords.
These attacked how Yahoo made money at a fundamental level, and helped Google dominate search and online advertising for over a decade. Everything Google has done since then—from Google Analytics to the Google Keyboard to Gmail to YouTube to Maps to Android—has been an attempt to remain between a user and their intent.
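For anyone who never looked under the hood, here’s a minimal sketch of the idea behind PageRank, not Google’s production system: each page’s score is fed by the scores of the pages linking to it, iterated until the numbers settle. The toy link graph and the damping factor of 0.85 below are just illustrative defaults.

```python
# Power-iteration sketch of the PageRank idea on a toy link graph.
# The pages and links below are invented for illustration.
LINKS = {
    "home": ["docs", "blog"],
    "docs": ["home"],
    "blog": ["home", "docs"],
    "about": ["home"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: share its rank with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:  # pass rank along each outgoing link
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

if __name__ == "__main__":
    for page, score in sorted(pagerank(LINKS).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

The business insight wasn’t the math, it was the pairing: link-based ranking made hand-curated directories look quaint, and AdWords auctioned off the keyword intent that ranking surfaced.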
The breakthrough we need isn’t technological
ChatGPT is an amazing technology, both for the answers it generates and for its ability to understand the question in the first place, but it doesn’t yet have a consumer business model. Trying to force-fit the current advertising model onto it may not work. Google needed both PageRank and AdWords to win. What’s AdWords for AI Chat?
If the rise of AI chatbots does hurt Google, it won’t be because the company is lagging technologically. It will be because we don’t yet know the consumer AI business model. That’s the real missing innovation. And it’s as likely to be discovered by a business-school dropout or a fledgling startup as it is by one of today’s tech giants.
Postscripts
A couple of thoughts that came up in writing this:
Have you watched broadcast TV lately? It’s a hellscape of loud commercials for things old people think they need, other shows, political ads, and prompts to ask your doctor about some sci-fi sounding medicine. How did we ever put up with that?
An AI-assisted human will be far more productive and effective than a human working alone. If we don’t find a business model that makes AI free to everyone, will we be creating a digital divide that’s completely impassable? Will we at some point demand a constitutional right to a government-supplied AI, as a sort of UBI-of-the-mind?
Great read. On the subject of whether ChatGPT could in fact deliver search results that are accurate (possibly using something like Google Scholar, Metaweb, or other tools, as you suggest), it seems the jury is very much still out; at the very least, this is an area that needs considerable focus and investment if we are going to get beyond the limitations of deep learning models to truly understand our intentions and deliver accurate answers. If you haven’t listened to it yet, I urge you to listen to Gary Marcus on the Ezra Klein podcast, where he enumerates how far we still have to go: https://www.nytimes.com/2023/01/06/opinion/ezra-klein-podcast-gary-marcus.html?smid=nytcore-ios-share&referringSource=articleShare
Such great musings. We are at the limits of advertising, right? So there has to be some thinking at Google about the real profit pools of LLMs. I've been observing and thinking about the combination of Med-PaLM from DeepMind https://arxiv.org/pdf/2212.13138.pdf and Verily's recent deal with SwissRe to play in the re-insurance space - which would mean in health they will be incentivized to lower the cost of care. Combine the two and there is a business model there. "Other Bets" may deliver after all.