Why I believe in technology
A few weeks ago, in the middle of a rather bleak debate on the ills of society, I realized why I was optimistic about technology. It's because, on the whole, we have to believe that humans will do more good than bad with it. Otherwise, we might as well give up.
On the surface, this sounds a bit simplistic. It certainly smacks of Utilitarianism, the moral code about the "greatest good for the greatest number of people" formulated by Jeremy Bentham, and refined by John Stuart Mill.
Utilitarianism is basically the philosophy of maximizing good outcomes.
Early utilitarianism focuses on individual acts (e.g., "don't kill that person");
Later versions focus on long-term rules (e.g., "it is wrong to kill people");
These rules are derived from a macroscopic view of outcomes (e.g., "if we randomly kill one another, society will fall apart and things will generally suck").
Utilitarianism has plenty of weaknesses and critics, but it's how most Western legal systems function, and (at least in theory) it's how judges think about moral dilemmas. Bentham and Mill's moral framework most often gets stuck on the horns of two dilemmas: the definition of "good" and our ability to predict the future.
Who decides what "good" is?
As of January, the UK government will punish the possession of depictions of real or simulated rape with up to three years in jail. Yet in the middle of San Francisco, in the heart of the tech revolution, in a converted armoury, kink.com—one of the largest online porn companies—churns out such content.
Kink.com's management team are vocal advocates of free speech and consensual sex, and go to great lengths to document the willingness of all participants. James Franco made a documentary about them. Yet they specialize in the kind of videos that many jurors would immediately agree fit British PM David Cameron's definition of simulated depictions of rape. There's a reason I didn't include a link to them here; they're decidedly NSFW.
Who's right? Does Franco need to scrub his hard drive the next time he visits England?
Utilitarianism's Bentham is also the inventor of the Panopticon, a surveillance-state prison that kept inmates in check with the threat that they might be being watched. He fundamentally believed people weren't trustworthy, and that society would descend into a Tragedy of the Commons without the threat of punishment that surveillance implied. Let's repeat that: Bentham, champion of the greatest good, devised systems of incarceration based on what he felt was right—often at the expense of the individual or the minority.
Would Bentham have believed we should trample on the rights of gays, simply because doing so would make the homophobes who outnumbered them feel happier? Or would he have seen beyond such a short horizon, recognizing that the broader rule is, "the rights of the few must outweigh the whims of the many"?
I visited Bletchley Park last week. It's where a young Alan Turing helped crack German encryption, and where Colossus, the first programmable electronic computer, was built. Many believe that access to decrypted German messages turned the tide of the war and saved us from fascism. Turing's work there, and his earlier theory of universal machines, laid the foundations of modern computing. For his troubles, Turing was given an OBE.
Turing was also gay. After the war, he was convicted of gross indecency, and committed suicide two years later. At the time, he was writing a treatise on synthetic life. In the 1950s.
What rule would Bentham have had us apply to Turing? In hindsight, clearly his treatment was a gross injustice. The definition of "good" was, by Utilitarian standards, horribly, terribly wrong.
(A couple of notes. First: while Turing was undoubtedly taken from us early, his suicide is in dispute. What isn't in dispute, however, is that he had to endure chemical castration and constant policing to retain access to computers and his teaching position. Second: an army of geeks, headed by Dr. Sue Black, secured a posthumous pardon for Turing.)
How do we know outcomes ahead of time?
For Turing, the damage is done, and the world is vastly less well off, because we can't know the outcomes of a rule until long after its effects are felt. Who defines the scope of a rule, particularly when the lines of law can only be well drawn with decades of hindsight, far longer than the fleeting lifespans of many technologies?
Today, most of the moral dilemmas we confront come from such technology. We're colonizing a second, online world, and we don't have a moral or legal framework to deal with the second life it creates. At best, we drag ancient laws into the new world (Canada currently prosecutes some cyber-bullying under telegraph law). At worst, we deprive our online selves of rights entirely (see pretty much every social network and surveillance program in the world).
We can't know the future utility of a nascent technology with any degree of certainty. Things have unintended consequences: computers make us consume more paper, not less; efficient steam power increases demand for coal rather than reducing it, a pattern economists call the Jevons paradox.
This happens a lot:
Fritz Haber is known as the father of chemical weapons. Haber also figured out how to synthesize ammonia, making modern fertilizer production possible. He is credited with saving the lives of over a billion human beings as a result; he has certainly increased the carrying capacity of the planet dramatically.
You might argue that we, as a species, should get our population growth under control. I wholeheartedly agree—but there's no dispute that a big chunk of the world's current food supply comes from Haber's work.
Combined, the bombings of Hiroshima and Nagasaki at the end of World War 2 killed nearly 200,000 people. Yet 12.3 percent of all electricity in the world comes from nuclear reactors, which enable refrigeration, medicine, and myriad other critical, life-saving technologies.
You might argue that the risks of nuclear leakage, as Fukushima has demonstrated, are huge and global. Again, I agree, but as Larry Page points out, the number of scientists worldwide working on geothermal and solar thermal technology is a tiny fraction of those working on squeezing more hydrocarbons from our dwindling reserves at just one big oil company. And fracking isn't exactly known for being clean.
Reductionism can't deal with complex systems
Are nuclear and chemical weapons among the worst things devised by humans? Absolutely. Are abundant food and energy life-savers? Absolutely. Could we have anticipated all of these outcomes at the moment the underlying technologies were invented? Probably not.
James Burke reminded attendees of last week's Strata conference in London that the reductionism proposed by philosopher and mathematician René Descartes won't help us deal with the future, because the future is complex. The world is made up of increasingly complex systems that operate in ways we can't foresee, and that we can't break down into component pieces to understand.
As an example, think about a car. Today's driver sits atop layers of languages, compilers, microcontrollers, and more. A look underneath Toyota's unintended-acceleration problem reveals the layers of spaghetti code and the unexpected edge cases to which we entrust our lives every day. In a tech-centric world, everything is a double-edged sword; we can only measure how it cuts, build cautionary scabbards, and reduce our fragility by improving the speed at which we learn and our ability to adapt.
I'm still an optimist
Nevertheless, I'm upbeat about tomorrow, and how we will use what we invent. The best expression of why, I think, came from the book Nexus by Ramez Naam, which Lesley Carmichael excitedly waved in front of me at Foo Camp this summer. She was right to be excited: the book deals with mind control and nanotechnology, and it's decidedly dystopian, but it's also inspiring.
The rest of this post contains a spoiler, so if you're going to read the book (you should; it's a great cyberpunk thrill ride written by someone who actually understands the science), maybe come back to this post in a week or so.
In Nexus, the protagonist must decide whether to let governments control a powerful, threatening technology or to share it with the world. He does the latter, spreading it around the global Internet, letting the proverbial genie out of the bottle. When challenged by his peers, he uses the argument that we have to believe we will, ultimately, do more good than bad with something.
I would love for our species to get population under control, hammer guns into ploughshares, and refocus our efforts on sustainable energy and affordable healthcare. But I don't see that happening under current political systems. Democracies focused on representative self-interest and four-year election cycles, which abandon science in favor of popular approval and encourage the concentration of capital, won't fix our problems efficiently.
Indeed, I'm bullish on technology perhaps precisely because I'm so bleak about how we govern ourselves. As a human, I have to believe that we will do more good than bad with what we create. Otherwise, I should just give up.