This Magazine

Progressive politics, ideas & culture

September-October 2010

Technology, ethics, and the real meaning of the “Rapture of the Nerds”

Keith Norbury

Illustration by Chris Kim

Aging sucks, says Michael Roy Ames. At 45, he sees signs of his own mortality every time he looks in a mirror—the greying and thinning hair, the creases in his face. Ames doesn’t despair, though. He expects to see the day when scientific advances will reverse his aging process, replace his body parts as they wear out, and allow him to live forever.

“I’d rather live to a million and 45 if I possibly can,” says Ames, a Vancouver resident and president of the Canadian chapter of the U.S.-based Singularity Institute for Artificial Intelligence. “I don’t want to die any time soon.”

“The Singularity” is the name of an event that Ames and others like him predict will happen sometime in the near future: the emergence of a technological, artificial intelligence many times smarter than any human brain. An intelligence whose thought patterns will be as inconceivable to us as our own thoughts are to, say, a lab rat. The awakening of this superhuman consciousness, Singularitarians believe, could happen gradually or quickly, with beneficent or malicious intent. But one way or another, they believe it is inevitable. And imminent.

“It’s going to happen sooner than we think,” Ames says. “A few years ago, I was thinking of 2015 as an optimistic date—so five years from now. I would still hold to that as an optimistic date. Five years from now is looking doable, definitely, from a hardware point of view.”

By profession, Ames is in his last year of an apprenticeship as a linesman with B.C. Hydro. He used to be a computer programmer, but decided he’d rather work outdoors. He’s still interested in computers and programming, but now it’s for fun. And he has an ongoing interest in how accelerating changes in technology will radically alter the structure of human society. “I mean, we’ve seen the internet do things with our society that were unimaginable 30 years ago,” Ames says. The Singularity could occur suddenly, he says, in a “hard landing,” or gradually, a “soft landing.” (There are as many interpretations of the Singularity as there are believers.) But in any case, it would involve a transformation of what it means to be human. It could range from radical life extension to the total merging of human and machine intelligence.

The ramifications of such a future are mind-boggling. Since nobody can predict the shape of things on the other side of that looking glass, the nature of politics, economics, and culture in a post-Singularity world—if those things continue to exist at all—also defies prediction. Will it involve humans surrendering their freedoms to a super-intelligent entity? Will only certain guests be invited to that party, with the vast majority left outside those digital gates? Nobody knows, although Ames notes that previous technological marvels such as colour TVs, microwave ovens, and cellphones were initially available only to wealthy elites but eventually became affordable for the masses. Computer scientist Ray Kurzweil, the most prominent Singularitarian, envisions a future where nanotechnology, fabricating everything from the molecular level, reduces the cost of every object effectively to zero, eradicating poverty, hunger, and disease in the process.

One doesn’t have to buy into such a sea change, though, to acknowledge that technological innovation will continue to be a dominant force in human lives, just as it was in the 19th and 20th centuries. Until the Industrial Revolution, technological changes occurred slowly. One response to the upheaval of the shift from agrarian to industrial society was the Luddite movement, which recognized the threats posed by new machines but underestimated their potential to enhance and improve people’s lives. Out of that, however, grew the modern labour movement, a political force that succeeded, often through costly and bloody struggles, in ensuring that the fruits of technology were more equitably distributed. Those struggles led to the five-day workweek, the eight-hour workday, and occupational safety standards that workers in the industrialized world now take for granted.

Today, political and regulatory change tends to follow technology because of simple cause and effect; governments couldn’t mandate seat belts until the automakers invented them. In the future, though, waiting for bold new technologies to emerge before taking political or regulatory action might have dire consequences. See, for instance, nuclear weapons or deep-sea oil drilling, the kind of experiment-now-regulate-later technologies that have proved so dangerous. Nanotechnology in particular poses an existential threat, says Oxford University philosopher Nick Bostrom (more on that shortly).

Dealing with such issues, which involve the entire planet, will require international co-operation. But as we’ve seen from the difficulty in driving action on climate change, negotiating effective global agreements isn’t easy. And while climate change is a massive process taking place over decades, risky new technologies are likely to emerge much faster.

The mission of the Singularity Institute is to ensure that nascent super-intelligent technology is friendly and not menacing, and that it helps humans enhance their lives—not destroy them, or the rest of the life on the planet.

Not all people who believe in technology’s power to transform humanity are Singularitarians. Transhumanists, as their name implies, also expect technology to alter the species. “These are two communities that seem to have a connection,” says George Dvorsky, president of the Toronto Transhumanist Association. “It doesn’t necessarily mean that one follows the other. I happen to know many transhumanists who don’t buy into the Singularity at all.”

While both groups believe that rapid technological progress will radically reshape our lives, the Singularitarians believe a unified, superhuman intelligence is a necessary part of that change. Transhumanists believe no such super-intelligent entity is necessary. Either way, both believe that our future will be completely unrecognizable. “We are talking about transforming what it means to be human,” Dvorsky says.

"Combine faster, smarter, and self-improving intelligence. The result is so huge there are no metaphors left." -- Ray Kurzweil

“Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence, and the result is an event so huge that there are no metaphors left,” states the Singularity Institute’s definition. Kurzweil interprets this as a future where humans can upload their minds to a supercomputer system. All he has to do is stay healthy long enough for computer systems to advance to the point where that becomes possible. Kurzweil has become famous for popping up to 200 vitamin pills and supplements a day in his quest to keep pace with advances in life extension. He has even mused on the possibility of bringing his own father back to life in that cyber realm.

Ames and Dvorsky each have a more modest vision of the future that includes radical life-extension without having to upload their minds to a computer or raise the dead. Their respective movements are also very modest, numbering, at most, a couple of dozen each in Canada, although both have thousands of adherents scattered around the world.

One might even observe that the movements have lost steam in recent years. One prominent Canadian transhumanism website now exists only for “archival purposes.” Even the main website of the Singularity Institute isn’t updated as frequently as one might expect of an organization that purports to be in the vanguard of technological change. Viewed on July 1, 2010, the “Latest News” on the site’s Updates & Press page was from 2007, although postings on its blog were current.

Dvorsky concedes that his group, which he has led since its founding in 2002, was more intensely organized at the beginning and would draw 20 to 40 people to its weekly meetings, depending on the topic. (Life extension was a good draw, he says.)

“What we did after that is we lost a little momentum,” Dvorsky says. The audiences were getting smaller and it tended to be the same small group coming out to the events, he says. About two years ago, the meetings were cut back to two or three hours once a month. “There is no interest in having a group dedicated to transhumanism at the chapter level.”

David Coombes, the vice-president of the Canadian chapter of the Singularity Institute, admitted earlier this year that he hadn’t thought about the Singularity for weeks. He was preoccupied with the here-and-now challenges of his work as an immigration consultant. Yet his century-old heritage home in Victoria, B.C., is still the Canadian headquarters of the institute, as it was when Ames was staying there four years ago.

“You don’t bump into a transhumanist on every street corner,” Dvorsky jokes. Today there are just a few handfuls of true believers “who get it,” he says, and Singularitarians are similarly scarce. But regardless of their current numbers, they believe their activities today are key, because if and when the Singularity occurs, it will involve everyone on Earth.

“It’s going to be one for the planet,” Ames says. “You think about the internet. There’s no Canadian internet; there’s no American internet; it’s the internet for the planet.”

"Nanotechnology would allow a destructive force to convert all biological matter on the planet to gray goo." -- Bill Joy

It would be tempting to dismiss transhumanism and Singularitarianism as fringe movements, given their small sizes and outsized ideas. The Singularity, for example, has been called “the Rapture of the Nerds,” even though it’s an entirely secular notion that doesn’t invoke anything supernatural. As Coombes put it four years ago, “I believe there could be a higher power when we make it or when we become it.”

But to dismiss transhumanists and Singularitarians as kooks would be a mistake, says Toronto-based futurist Richard Worzel. “The fact that they don’t share mainstream views may say more about the mainstream than it does about them,” says Worzel, who has speculated about such areas as radical advances in medicine. “Einstein was roundly viewed as a charlatan and a fraud and detracting from the proper study of physics. When he started, he was the only one who held his views. I think it’s fairly clear which way history went.”

That doesn’t mean transhumanists and Singularitarians are modern Einsteins; they could still be proven wrong, Worzel cautions. “But I don’t think there’s any question that what they’re doing is a legitimate pursuit of knowledge,” he says. “And that’s the real test.”

Worzel expects to witness cures for all cancers, the growth of replacement organs, and the making of prosthetic limbs that exceed the capabilities of natural ones. “There’s probably disagreement over how quickly we get there,” he says. “There’s probably disagreement to the extent to which we are going to become transhuman. But yes, they’re headed in a direction that we are going.”

Both Worzel and Victoria, B.C.-based futurist Ken Stratford agree that what transhumanists are contemplating borders on science fiction. “That doesn’t mean to say it can’t become science fact,” Stratford says. “You know, the further you play out a piece of rope, the less control you have over where it goes.”

"It will be considerably easier to create destructive technology than to create effective defences." -- Nick Bostrom

Much of the criticism surrounding transhumanists and Singularitarians isn’t that they’ll be proven wrong, but that they’ll prove to be correct—and with dire consequences. Among those sounding that alarm is computer scientist Bill Joy, a co-founder of Sun Microsystems. In an oft-cited 2000 Wired magazine essay titled “Why the Future Doesn’t Need Us,” Joy outlined the dangers of biotechnology, nanotechnology, and super-smart artificial intelligence. Biotechnology would enable a genetically engineered plague. Nanotechnology would allow a destructive force to convert all biological matter on the planet into grey goo. Humanity would be hunted and enslaved by a malevolent machine intelligence à la the Terminator or Matrix movies. (While Kurzweil acknowledges the threats, he disagrees with Joy’s view that preventing them might require restricting technological advancement.)

In a 2002 essay on existential risks, Swedish transhumanist Nick Bostrom, an Oxford University philosopher, noted it will be “considerably easier” to create a destructive nanobot system than to create effective defences against one. “It is therefore likely that there will be a period of vulnerability during which this technology must be prevented from coming into the wrong hands,” he wrote. The future convenience of such technology, combined with its capacity for quickly gobbling up the biosphere, makes it potentially even more dangerous than nuclear weapons. Bostrom even argued that to save the planet it will be necessary to launch a pre-emptive strike against any rogue state that doesn’t go along with international monitoring of nanotechnology. Bostrom is best known for his “simulation argument,” in which he posits that the reality we inhabit might be a computer simulation run by an advanced civilization. While the argument itself doesn’t estimate the likelihood that we exist in a simulation, in interviews Bostrom has put the odds at about one in five. He even includes a sudden shutdown of such a simulation as an existential threat, although he offers no ideas on what those trapped inside it might do about it.

Compared with that simulation scenario, Kurzweil’s ideas seem fairly prosaic: he graphs exponential increases in biological evolution, and even the evolution of the universe itself, from the Big Bang through the appearance of the first hydrogen atoms to complex molecules to the formation of life. He takes Moore’s Law (the 45-year-old observation by Intel co-founder Gordon Moore that the number of transistors on a chip, and with it computing power, doubles roughly every 18 months to two years) and applies it to all technology.

Since the creation of the first computer in the 1940s, processing power has doubled about 32 times. In 2005, Kurzweil estimated it would take only about five more doublings for a supercomputer to “emulate the human brain.” A decade beyond that, today’s equivalent of a desktop computer will have that capacity. He expects the Singularity to arrive by 2045. By 2080, he says, a $1,000 processor will be able to compute the sum of human knowledge in a fraction of a second. By then, in Kurzweil’s vision, reality will have merged with virtual reality, enabling super-consciousness to zip across the cosmos at light speed or even faster.
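Kurzweil’s timeline is, at bottom, compound-interest arithmetic. The short Python sketch below works through the doubling figures cited above; it is a back-of-the-envelope illustration, not Kurzweil’s own model. The 18-month doubling period and the “about five more doublings” estimate come from the article, and the helper names are purely illustrative.

```python
# Back-of-the-envelope sketch of the doubling arithmetic cited above.
# Assumptions: an 18-month doubling period (the popular Moore's Law paraphrase)
# and the "~32 doublings so far, ~5 more to go" figures quoted in the article.
# This is illustrative arithmetic, not Kurzweil's actual model.

DOUBLING_PERIOD_YEARS = 1.5


def growth_factor(years: float) -> float:
    """Total multiplication in computing power after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)


def years_for_doublings(doublings: int) -> float:
    """Calendar time needed to accumulate a given number of doublings."""
    return doublings * DOUBLING_PERIOD_YEARS


if __name__ == "__main__":
    # ~32 doublings since the mid-1940s works out to a factor of roughly four billion.
    print(f"32 doublings  -> {2 ** 32:,.0f}x")
    # "About five more doublings" to brain-scale supercomputing:
    print(f" 5 doublings  -> {2 ** 5}x, about {years_for_doublings(5):.1f} more years")
    # A further decade of steady doubling (the gap before desktop machines
    # catch up, in the article's telling) adds a factor of roughly a hundred:
    print(f"10 more years -> about {growth_factor(10):,.0f}x on top of that")
```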

We’re accelerating toward this but simply haven’t noticed, Kurzweil says, because at this stage, the exponential graph is still relatively flat. Once it reaches what he calls the “knee of the curve,” it turns upward sharply. Technology is getting close to the knee now, he says.

Kurzweil’s hypothesis is controversial, to say the least. In a 2005 paper, physicist Jonathan Huebner argued that the exact opposite has been happening. He concluded that technological innovation peaked in 1873, has been decreasing ever since, and by 2024 will be occurring at the same rate as it did during the Dark Ages. Singularitarians have questioned Huebner’s methodology, which examined patent data and a subjective list of about 7,000 “important technological developments.” By that measure, the automobile counted as an important technological development, but none of the refinements since the Model T registered.

P. Z. Myers, a biology professor at the University of Minnesota, accuses Kurzweil of cheating in his graphs that purport to show biological and technological evolutions increasing exponentially. For example, Myers describes one of Kurzweil’s charts as “an artificial and perhaps even conscious attempt to fit the data to a predetermined conclusion.” Myers succinctly dismisses Kurzweil as a “first-rate bullshit artist.”

Despite this, the idea of the Singularity has friends in high places. PayPal co-founder Peter Thiel is among the financial backers of the Singularity Institute. Bill Gates praises Kurzweil’s predictions. And motivational speaker Tony Robbins gushes about the tech guru, who also has cred as a computer expert and inventor (speech recognition and optical character recognition devices count among his innovations). Kurzweil recently co-founded, and is the first chancellor of, Singularity University at the NASA Ames Research Park at Moffett Field, California. The university’s officers include its co-founder and X Prize CEO Peter Diamandis, internet pioneer Vint Cerf, SimCity creator Will Wright, Nobel-winning physicist George Smoot, and futurist Paul Saffo. (The skew toward maleness is not imaginary; the Canadian chapter of the Singularity Institute, however, counts two women on its board of directors, one of whom, Kay Reardon, is a grandmother whose official bio says she is “still kicking ass.”)

While Kurzweil popularized the subject with the publication of his 2005 book, The Singularity Is Near, mathematician and science-fiction writer Vernor Vinge actually coined the term in a 1993 paper. (In a 2007 presentation, Vinge called post-Singularity events “as unimaginable to us as opera is to a flatworm.”) Kurzweil, though, has become the public face of the Singularity, including starring in a 2009 documentary, Transcendent Man, and producing a 2010 film based on his best-known book. While his fans consider Kurzweil a visionary, University of Waterloo professor Thomas Homer-Dixon calls him “a menace.”

"Kurzweil plays into false optimism, techno-hubris. It will be extremely ironic if he dies of cancer." -- Thomas Homer-Dixon

“I think he plays into this type of false optimism, this kind of techno-hubris or techno-optimism that says we can solve our problems,” says Homer-Dixon, author of The Ingenuity Gap and The Upside of Down. “I can’t tell you how many times, especially in the States, I run into people who say we don’t have to worry about climate change; we’ll just fix that problem when we come to it, when it gets bad.” He acidly observes, “It’s going to be extremely ironic if Mr. Kurzweil dies of cancer.”

Homer-Dixon says the obstacles are simply far more complex than technology optimists think. The biggest hurdle is what he calls the “curse of dimensionality.” Mapping the human genome, for example, has revealed that few diseases have a single genetic cause; most are caused by complex genetic interactions combined with environmental factors that are very difficult to model.

“What we’ve found out is that many of the challenges, specifically in human illness, are enormously multi-factorial, and that we actually know very little more now, even knowing the human genome, than we did before. It’s actually interesting to read the stories that are coming out in Nature on this that the scientists are really frustrated. They thought this was going to be a huge breakthrough and it turns out it is just an incremental step in the direction of solving problems.”

The Singularitarians and transhumanists acknowledge current realities but serenely insist their moment is coming. In 2002, 51 “top researchers in the field” of human aging endorsed a statement published in Scientific American that included this unambiguous observation: “there are no lifestyle changes, surgical procedures, vitamins, antioxidants, hormones or techniques of genetic engineering available today that have been demonstrated to influence the processes of aging.” However, the statement also noted: “Most biogerontologists believe that our rapidly expanding scientific knowledge holds the promise that means may eventually be discovered to slow the rate of aging. If successful, these interventions are likely to postpone age-related diseases and disorders and extend the period of healthy life.” One prominent researcher on that list of signatories, Aubrey de Grey, has famously proclaimed that the first person who will live to be 1,000 years old has already been born.

Even before new technologies approach anything remotely resembling the Singularity, they are bound to have impacts on our lives, and those changes will require radical revisions of public policy. From bioethics to civil rights to education and beyond, new technologies require, and will continue to require, difficult choices at every level of government. Plenty of the philosophical questions Singularitarians debate are, for today at least, sheer fantasy. But some of those questions urgently need answering today. What does privacy mean in a networked age? Should we keep patients alive just because we can? Do corporations have the right to claim ownership of the building blocks of life? In each of these cases, technological capabilities have already far outrun public policy. And the Singularity, even if it never materializes, provides a useful frame for thinking about how technology and society interact, and what we want our future to be.

Ames knows what he wants from the future: another million years of life. Until then, he’s keeping the faith. “I’m sure there will be a time in my life when I’ll say, ‘Maybe I’ve had enough now and I’ll just end it,’” Ames says. “But until that time comes up, hey, I’m game.”
