The last millimetre problem
Apple is Ono-Sendai, and we’re pushing beyond real-time to anticipatory interfaces
I first wrote part of this on Twitter last October, but after this week’s Apple news, I feel like it’s worth revisiting and expanding.
As I type this, my Apple Watch is measuring my vitals; my AirPods stream audio to my eardrums, and wait for a command; my phone offers a keyboard and screen to type and read these words. These devices connect me to humans, and extend my senses. Today, for example, they let me know that forest fires are filling my lungs with particles, and that the balm of rain is coming.
My watch measures not only movement and heart rate, but also oxygen levels, electrical impulses, gait stability, and more. It just gently reminded me that I’ve been sitting for too long. The green-light sensor beams, unseen, into the red blood cells on my wrist.
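If you want a sense of how thin the software side of that bridge is, here is a minimal Swift sketch of pulling those vitals back out through HealthKit. The authorization-and-query flow uses real HealthKit calls, but the snippet is a simplification for illustration; it says nothing about how the Watch's sensors actually do their work under the glass.

```swift
import HealthKit

// A minimal sketch of reading the vitals the Watch records, via HealthKit.
// Assumes an app target with the HealthKit capability and usage strings configured.
func readLatestVitals() {
    guard HKHealthStore.isHealthDataAvailable(),
          let heartRate = HKQuantityType.quantityType(forIdentifier: .heartRate),
          let oxygen = HKQuantityType.quantityType(forIdentifier: .oxygenSaturation) else {
        return
    }
    let store = HKHealthStore()

    // Ask the user for read-only access to the two sample types we care about.
    store.requestAuthorization(toShare: [], read: [heartRate, oxygen]) { authorized, _ in
        guard authorized else { return }

        // Fetch the most recent blood-oxygen reading.
        let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
        let query = HKSampleQuery(sampleType: oxygen, predicate: nil, limit: 1,
                                  sortDescriptors: [newestFirst]) { _, samples, _ in
            guard let sample = samples?.first as? HKQuantitySample else { return }
            let percent = sample.quantity.doubleValue(for: .percent()) * 100
            print("Latest blood oxygen: \(percent)%")
        }
        store.execute(query)
    }
}
```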
The tiny speakers in my AirPods thrust deep into my ears, their magnets vibrating my membrana tympanica in sync with electrical pulses. Sometimes, when a machine-generated voice is reading machine-generated text to me, the only analog part of this whole system is the few millimetres of air gap within my ear canal.
The virtual keyboard registers taps from my thumbs, using delicate capacitive touch circuits embedded just below the screen’s glassy surface. My brain encodes ideas into words, then pushes those ideas through the glass, where they are turned into bits that leave my phone through the ether. From time to time, the phone vibrates, giving me a reassuring—albeit synthetic—sense of physicality.
Ideas flow from the ether into my mind, too. Organic compounds, stimulated by infinitesimally small pulses of electricity, release photons that stream into my retina, where my brain interprets the patterns of light and dark as words, and extracts meaning. Language is an inefficient, clumsy carrier wave, but it is how I sync with the collective.
From last mile to last millimetre
In the early days of the Internet, when dial-up modems were the bottleneck, we spoke of a last-mile problem: The final mile to the end user was slow, often a noisy, analog dial-up connection. Today, there is a gap of only a few millimetres between the electrical impulses in my brain and the electrical impulses from my AirPods. We no longer have a last-mile problem; we have a last-millimetre problem.
Our headphones, watches, and screens are imperfect bridges between the electrical pulses in our brains and the electrical pulses of the Internet. But they are the state of the art of human-machine interface. They are so important to our daily lives that we have literally fired satellites into orbit as part of a system that lets those interfaces find one another when separated, and reassemble.
Other interfaces are on their way, some valuable, and some absurd. The Shiftall Mutalk covers your mouth to capture speech without disturbing those around you.
While it might look like something The Onion dreamed up, it’s apparently very real.
And now Apple has introduced a headset. Its screens are millimetres away from your corneas. Its dual 4K displays pack 64 pixels into the space of a single iPhone pixel. A custom chipset pushes latency between sensor and screen ever lower, so there’s no perceptible delay to remind you that the world beamed into your eyeballs isn’t real.
From real-time to anticipation
While we’re shortening the distance that information has to travel across messy, analog media like corneas and eardrums, we’re also finding ways to peer into the brains of users.
Sterling Crispin worked at Apple as a Neurotechnology Prototyping Researcher (a truly Gibsonian name and job if ever there was one). I’m going to quote him at length here because his story is, well, absolutely bonkers. You should probably follow him.
Crispin worked on things like “predicting you’ll click on something before you do” for three and a half years. While he’s careful not to violate his NDAs, his work involved “detecting the mental state of users based on data from their body and brain when they were in immersive experiences.” While a user is immersed, machine learning algorithms are constantly “trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state.”
How does a headset do this? Crispin cites data like “eye tracking, electrical activity in the brain, heart beats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance etc.” He has patents on this sort of prediction, and explains that “your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user’s brain by monitoring their eye behavior, and redesigning the [User Interface] in real time to create more of this anticipatory pupil response. It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.”
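Here is a toy version of that idea, as a Swift sketch. Everything in it is hypothetical (the sample type, the five-sample window, the 0.15 mm threshold); it shows the shape of “predict the click from the pupil” and makes no claim about Apple’s actual, NDA-covered pipeline.

```swift
import Foundation

// A toy version of the "anticipatory pupil response" idea: watch a stream of
// pupil-diameter samples and flag a likely selection before the user acts.
// The sample type, window size, and threshold are all invented for illustration.
struct PupilSample {
    let timestamp: TimeInterval
    let diameterMM: Double
}

/// Returns true when recent dilation suggests the user is about to commit to a choice,
/// so the interface could pre-highlight or pre-fetch the likely target ahead of the tap.
func anticipatesSelection(window: [PupilSample],
                          restingBaselineMM: Double,
                          dilationThresholdMM: Double = 0.15) -> Bool {
    let recent = window.suffix(5)
    guard !recent.isEmpty else { return false }

    // Average the last few samples to smooth out sensor noise and blinks.
    let recentMean = recent.map(\.diameterMM).reduce(0, +) / Double(recent.count)

    // A sustained dilation above the user's resting baseline is treated as the
    // anticipatory response Crispin describes.
    return (recentMean - restingBaselineMM) > dilationThresholdMM
}
```

The closed loop Crispin describes goes a step further: the interface is then redesigned in real time to elicit more of that response, which is what turns a signal like this into a crude brain-computer interface rather than just another sensor reading.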
This reminded me of another study I’d read, which the always-reliable David McRaney helped me dig up. Back in 2008, a team of researchers ran an experiment in free will. They concluded that “the subjective experience of freedom is no more than an illusion.” I first learned of the research in a documentary that concludes with the science presenter holding his head in his hands on the steps of a university building, grappling with the idea that he may not, in fact, be able to consciously choose.
The paper, entitled Unconscious determinants of free decisions in the human brain and published in Nature Neuroscience, built on an earlier one, Reading Hidden Intentions in the Human Brain, from 2007. It describes a study in which researchers watched subjects’ brains with functional magnetic resonance imaging (fMRI) while asking those subjects to make a decision.
They found that “the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 [seconds] before it enters awareness.” In other words, the machine knows what you’re going to do before you’re aware you’ve decided what to do. Often seconds earlier. No matter how hard you try, you can’t make a free decision: Even when you decide to change your mind, the algorithm anticipates that change. It knows what you’re going to do before you do it.
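For a feel of how a decoder like that works, here is a deliberately crude Swift sketch: a nearest-centroid classifier that guesses the upcoming choice from an activity pattern recorded ten seconds before the subject feels they have decided. The data structures and features are invented for illustration; the actual study used multivariate pattern classifiers over fMRI data.

```swift
import Foundation

// A toy stand-in for the kind of decoder used in the study: given a pattern of
// activity recorded seconds before the button press, guess which hand the subject
// will eventually use. The "features" here are made-up numbers.
struct Trial {
    let featuresTenSecondsBefore: [Double]  // activity pattern well before awareness
    let choseLeft: Bool                     // the decision the subject eventually made
}

// Average a set of activity patterns into a single prototype pattern.
func centroid(_ patterns: [[Double]]) -> [Double] {
    guard let first = patterns.first else { return [] }
    var sum = [Double](repeating: 0, count: first.count)
    for p in patterns {
        for i in p.indices { sum[i] += p[i] }
    }
    return sum.map { $0 / Double(patterns.count) }
}

// Squared Euclidean distance between two patterns.
func distance(_ a: [Double], _ b: [Double]) -> Double {
    zip(a, b).map { pair in (pair.0 - pair.1) * (pair.0 - pair.1) }.reduce(0, +)
}

/// Nearest-centroid "decoder": predicts the upcoming choice from pre-decision activity.
func predictsLeft(pattern: [Double], training: [Trial]) -> Bool {
    let leftCentroid = centroid(training.filter { $0.choseLeft }.map(\.featuresTenSecondsBefore))
    let rightCentroid = centroid(training.filter { !$0.choseLeft }.map(\.featuresTenSecondsBefore))
    return distance(pattern, leftCentroid) < distance(pattern, rightCentroid)
}
```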
There’s also a BBC video that touches on the subject.
That was fifteen years ago. More than enough time for companies like Apple to ask, “Why stop at real-time when you can react before the user even knows what they’re thinking?”
Superorganisms and on-ramps
William Gibson coined the term Cyberspace, a digital version of Teilhard de Chardin’s Noösphere, in his 1982 short story Burning Chrome.
Human society is a superorganism that “thinks” by communicating. The faster and more accurately humans can communicate, the faster our collective consciousness can think. Where once we had culture and literature, now we have cyberspace. Digital communications in general—and the Internet in particular—are an unprecedented collective cognitive upgrade. We’ll spend the next hundred years readjusting to how we organize ourselves as a society now that we are digital creatures, for we are evolving.
Two years after the publication of Burning Chrome, in Neuromancer, Gibson described the Ono-Sendai Cyberspace VII deck, an immersive computing interface with which people connected to the online world.
In the book, Ono-Sendai is the best of the best, one of a few gigantic companies that control the on-ramps to the digital world. The hardware is glued shut, defended by law and manufacture against curious users and nefarious hackers.
When I first read Neuromancer, I found the idea of an oligopoly of tech firms owning the human-machine interface preposterous. Today, we take it for granted. We are bearing down on the last millimetre, and moving from real-time to anticipatory interfaces, with little regard for who controls the on-ramps.
As a species, we are at the acoustic coupler stage of our connection to the digital realm. And Apple is Ono-Sendai.