This trifecta drives the next decade of tech
Alistair | Oct 15, 2015
The next decade of technology rests on three big pillars. They look independent today, but they’re inextricably intertwined. And when they finally work in concert, it will be nothing short of transformative for our species.
If that seems a bit breathless and overblown, hear me out: The convergence of big data, smart agents, and new interfaces is coming, and it’ll change how we interact with other people, the world around us, and even ourselves.
Today big data is enterprise technology. It’s used to analyze markets, risk, fraud, consumers, energy sources, and more. But soon, it will be a consumer tool. Every decade, some big technology finds its way from the military-industrial complex to common use, because it’s simply too valuable to pass up. That's how we got the Internet, smartphones, and personal computers.
Already, we have feeds of data—Facebook, photo streams, our email inboxes. And dozens of other data sources that are about us, but not owned by us, paint a hitherto unthinkably precise personal history of each of us. Stitch together phone records, bank transactions, tax filings, doctors’ visits, and even music playlists, and you have a perfect life feed.
But the vast majority of this personal data is something we’ll never look at once it’s saved. How many Facebook posts have you revisited? How many Flickr pictures do you browse? Few of our tweets, uploads, or messages get a second glance. That means for consumers, big data becomes a life feed we never look at. In fact, Christopher Nguyen (who built Gmail) pointed out to me that the sole purpose of big data is to give machines something to look at. If it’s to be useful, we need something to chew on it for us, separating the wheat from the chaff.
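To make that wheat-from-chaff step concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the `FeedItem` type, the `surface` function, and the relevance scores (which in practice would come from some upstream model) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    source: str       # e.g. "bank", "photos", "email" (hypothetical)
    text: str
    relevance: float  # 0..1, assumed to come from an upstream model

def surface(feed, threshold=0.8):
    """Return only the items worth a human's attention.

    Everything below the threshold stays machine-only: the life
    feed the machines look at, so we don't have to.
    """
    return [item for item in feed if item.relevance >= threshold]

feed = [
    FeedItem("photos", "another brunch photo", 0.1),
    FeedItem("bank", "unusual $900 withdrawal", 0.95),
    FeedItem("email", "weekly newsletter", 0.05),
]
for item in surface(feed):
    print(item.source, "->", item.text)
```

The threshold is the whole point: the agent reads everything so that almost nothing reaches you.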
And that thing is a personal agent.
We’ve abdicated huge swaths of memory to machines already, from mapping cities to remembering birthdays, appointments, and phone numbers. The more we turn over to them, the more they can help. Google Now is able to tell me when to leave for my appointment because it knows where I am, what’s in my calendar, and what road conditions are like. It’s a Stone Soup of knowledge, filled by tidbits from each of us. And we want the machines’ help.
“Your mind is for having ideas, not holding ideas,” says David Allen, the founder of the Getting Things Done movement. In this context, smart agents become a specialized form of artificial intelligence that knows us better than we know ourselves, or at least, is able to think about the ideas we aren’t holding at any given time.
They can manage our attention; we’re jungle-surplus hardware, lacking discipline and prone to addiction, wired to reinforce the obvious neural pathways, driven by squirts of dopamine and the firing of nearby neurons. Those agents can go beyond memory, helping us to overcome our biological liabilities, making better, wiser decisions about health, finance, behavioural economics, and more. They’ll have a ton of vital insight to share with us.
But what should they do when they’ve figured out something important?
They should interrupt us.
The way these agents interface with us is going to change dramatically. Already, mobile devices are moving towards voice instructions and spoken feedback. And there’s a lot of talk about immersive environments—augmented and virtual reality*—with Facebook, Google, and Microsoft investing heavily in companies like Oculus and Magic Leap, as well as home-grown tech like Glass and HoloLens. Even with all this investment, the implications aren’t properly understood.
Disney’s Bei Yang thinks we should broaden the definition of virtual reality by realizing that “VR is really about the human body as an input/output mechanism. It’s about spoofing inputs into the human perceptual system to create desired effects.”
I think of interruption as the new interface. Interruption can come in many forms: A tap or buzz on your skin, interrupting normal touch; a sound or voice in your ear, interrupting normal audio; some photons on your retina, interrupting normal sight.
But interruption alone is annoying. It’s interruption with context that matters. That’s why this trifecta is so powerful: The agent measures your reaction to its interruption. It learns. It quickly becomes the most amazing butler ever, an Alfred to your Batman—unfailingly polite, always discreet, even though it knows all your secret identities.
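As a sketch of how an agent might learn from your reaction to its interruptions, here’s a toy epsilon-greedy bandit in Python. Everything here is invented for illustration (the `InterruptionAgent` class, the channel names, the simulated user tolerances); it is not how any shipping assistant works.

```python
import random

class InterruptionAgent:
    """Toy learner: picks an interruption channel, then updates its
    estimate of each channel's success from the user's reaction
    (1 = welcomed, 0 = annoyed). Epsilon-greedy, purely illustrative."""

    def __init__(self, channels, epsilon=0.1, seed=None):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.stats = {c: [0, 0] for c in channels}  # channel -> [wins, trials]

    def _rate(self, channel):
        wins, trials = self.stats[channel]
        return wins / trials if trials else 0.5  # neutral prior for untried channels

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.stats))  # occasionally explore
        return max(self.stats, key=self._rate)        # otherwise exploit

    def observe(self, channel, welcomed):
        self.stats[channel][0] += int(welcomed)
        self.stats[channel][1] += 1

agent = InterruptionAgent(["buzz", "voice", "glance"], seed=42)
# Simulate a user who tolerates a wrist buzz but hates voice interruptions.
tolerance = {"buzz": 0.9, "voice": 0.1, "glance": 0.5}
for _ in range(500):
    channel = agent.choose()
    agent.observe(channel, agent.rng.random() < tolerance[channel])
best = max(agent.stats, key=agent._rate)
print(best)
```

After a few hundred interruptions, the agent’s estimates converge on the channel this particular user actually welcomes: the Alfred-to-your-Batman politeness is just feedback accumulating.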
The early versions of this are all around us. One of the reasons Slack commands such a high valuation is that it’s the platform with which specialized AI joins the workforce. But there’s also Apple’s 3D Touch on a screen, patterns of haptic touch on a wristwatch, even Bluetooth headset cues.
So new interfaces, paired with smart agents, deliver interruption with context.
I love the idea of smart agents, backed by data science, insinuating themselves into our lives like this. To be sure, the ethical dilemmas and security risks are manifold. But ultimately, it's a cognitive upgrade—perhaps one that helps us manage our lives, and the planet, better.
The whole is far greater than the sum of its parts, and when you start looking at tech this way, you see the individual components—a notification screen; an automated financial advisor; a new force-feedback touchscreen—as steps along the path.
The singularity isn't a switch; it's a thousand tiny nudges combining data science, machine learning (or more specifically, narrow-domain AI) and contextual interruption.
*I’m conflating the two terms here, because virtual reality is a superset of augmented reality, in that one of the realities it can render and add to is the real-world one. They’re different, of course; Ori Inbar is adamant that AR “has to emerge from the real world and relate to it, should not distract you from the real world; and must add to it.” Clearly VR goes beyond that definition.