At the start of the year, a Google engineer named Kevin Naughton lamented that his daily job was clicking endlessly through documents, trying to figure out the right code to write.
Emad Mostaque, founder and CEO of Stability AI, the company behind Stable Diffusion, said what everyone was thinking: “This sounds like a job for AI.”
We’re all so busy sharing ChatGPT’s cool results that we’ve missed a really important part of recent advances in AI: ChatGPT, Midjourney, and their ilk are powerful not just because of their responses but because they’re better at understanding the question.
Getting to correct results will be an even bigger challenge. As anyone who’s played with ChatGPT for even a few minutes knows, the results may be creative, but they’re not trustworthy. No engineer is going to cut and paste a chatbot’s output into a mission-critical system without checking it out first.
Google is arguably the most powerful AI force on the planet. It has the best AI, the most machines, the most data, and acquisitions like Metaweb and DeepMind. And it loves to try things out internally before releasing them to the world. Many of its products (successful and failed) began as internal experiments.
The fact that Google hasn’t solved the problem of understanding linked documents even for itself shows just how hard a problem it is, and how much work is left to do.
We got better at understanding questions. We’re a long way from perfect answers. But we will get there, partly because big tech needs to solve this problem for itself, and partly because there is unthinkable profit in being the SaaS for a prosthetic brain.
This has some pretty serious ethical consequences, which I’ll tackle in another post.