powerful room-cleaning robot was a Roomba, which moved around vacuuming at
random and squeaked when it got caught under the couch.
Technological prediction is particularly chancy, given that technologies progress
by a series of refinements, halted by obstacles that are then overcome by innovation. Many
obstacles and some innovations can be anticipated, but more cannot. In my own work
with experimentalists on building quantum computers, I typically find that some of the
technological steps I expect to be easy turn out to be impossible, whereas some of the
tasks I imagine to be impossible turn out to be easy. You don’t know until you try.
In the 1950s, partly inspired by conversations with Wiener, John von Neumann
introduced the notion of the “technological singularity.” Technologies tend to improve
exponentially, doubling in power or sensitivity over some interval of time. (For
example, since 1950, computer technologies have been doubling in power roughly
every two years, an observation enshrined as Moore’s Law.) Von Neumann
extrapolated from the observed exponential rate of technological improvement to
predict that “technological progress will become incomprehensibly rapid and
complicated,” outstripping human capabilities in the not too distant future. Indeed, if
one extrapolates the growth of raw computing power—expressed in terms of bits and
bit flips—into the future at its current rate, computers should match human brains
sometime in the next two to four decades (depending on how one estimates the
information-processing power of human brains).
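To make the arithmetic of that extrapolation concrete, here is a minimal sketch in Python. The starting machine capacity, the doubling interval, and the two brain estimates are illustrative assumptions, not figures from the text; the wide range of brain estimates is what spreads the predicted crossover across decades.

    import math

    # Illustrative assumptions (not figures from the text):
    #   machine_ops: sustained bit flips per second of a present-day machine
    #   brain_ops:   low and high estimates of the brain's processing rate
    machine_ops = 1e13
    doubling_time_years = 2.0  # Moore's-Law-style doubling interval

    for brain_ops in (1e16, 1e19):
        doublings = math.log2(brain_ops / machine_ops)
        years = doublings * doubling_time_years
        print(f"brain at {brain_ops:.0e} ops/s -> parity in ~{years:.0f} years")

Under these assumed figures the crossover lands roughly twenty to forty years out, matching the “two to four decades” sensitivity the passage describes.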
The failure of the initial overly optimistic predictions of AI dampened talk about
the technological singularity for a few decades, but since the 2005 publication of Ray
Kurzweil’s The Singularity Is Near, the idea of technological advance leading to
superintelligence is back in force. Some believers, Kurzweil included, regard this
singularity as an opportunity: Humans can merge their brains with the
superintelligence and thereby live forever. Others, such as Stephen Hawking and Elon
Musk, worried that this superintelligence would prove to be malign and regarded it as
the greatest existential threat to human civilization. Still others, including some of the
contributors to the present volume, think such talk is overblown.
Wiener’s life work and his failure to predict its consequences are intimately
bound up in the idea of an impending technological singularity. His work on
neuroscience and his initial support of McCulloch and Pitts adumbrated the startlingly
effective deep-learning methods of the present day. Over the past decade, and
particularly in the last five years, such deep-learning techniques have finally exhibited
what Wiener liked to call Gestalt—for example, the ability to recognize that a circle is
a circle even when, slanted sideways, it looks like an ellipse. His work on control,
combined with his work on neuromuscular feedback, was significant for the
development of robotics and is the inspiration for neural-based human/machine
interfaces. His lapses in technological prediction, however, suggest that we should
take the notion of a technological singularity with a grain of salt. The general
difficulties of technological prediction and the problems specific to the development of
a superintelligence should warn us against overestimating both the power and the
efficacy of information processing.
The Arguments for Singularity Skepticism
No exponential increase lasts forever. An atomic explosion grows exponentially, but