imperfectly specified objectives conflicting with our own—whose motivation to preserve
their existence in order to achieve those objectives may be insuperable.
1001 Reasons to Pay No Attention
Objections have been raised to these arguments, primarily by researchers within the AI
community. The objections reflect a natural defensive reaction, coupled perhaps with a
lack of imagination about what a superintelligent machine could do. None hold water on
closer examination. Here are some of the more common ones:
• Don’t worry, we can just switch it off.² This is often the first thing that pops into a
layperson’s head when considering risks from superintelligent AI—as if a
superintelligent entity would never think of that. This is rather like saying that the
risk of losing to Deep Blue or AlphaGo is negligible—all one has to do is make
the right moves.
• Human-level or superhuman AI is impossible.³ This is an unusual claim for AI
researchers to make, given that, from Turing onward, they have been fending off
such claims from philosophers and mathematicians. The claim, which is backed
by no evidence, appears to concede that if superintelligent AI were possible, it
would be a significant risk. It’s as if a bus driver, with all of humanity as
passengers, said, “Yes, I am driving toward a cliff—in fact, I’m pressing the pedal
to the metal! But trust me, we’ll run out of gas before we get there!” The claim
represents a foolhardy bet against human ingenuity. We have made such bets
before and lost. On September 11, 1933, renowned physicist Ernest Rutherford
stated, with utter confidence, “Anyone who expects a source of power from the
transformation of these atoms is talking moonshine.” On September 12, 1933,
Leo Szilard invented the neutron-induced nuclear chain reaction. A few years
later he demonstrated such a reaction in his laboratory at Columbia University.
As he recalled in a memoir: “We switched everything off and went home. That
night, there was very little doubt in my mind that the world was headed for grief.”
• It’s too soon to worry about it. The right time to worry about a potentially serious
problem for humanity depends not just on when the problem will occur but also
on how much time is needed to devise and implement a solution that avoids the
risk. For example, if we were to detect a large asteroid predicted to collide with
the Earth in 2067, would we say, “It’s too soon to worry”? And if we consider
the global catastrophic risks from climate change predicted to occur later in this
century, is it too soon to take action to prevent them? On the contrary, it may be
too late. The relevant timescale for human-level AI is less predictable, but, like
nuclear fission, it might arrive considerably sooner than expected. One variation
on this argument is Andrew Ng’s statement that it’s “like worrying about
overpopulation on Mars.” This appeals to a convenient analogy: Not only is the
2 AI researcher Jeff Hawkins, for example, writes, “Some intelligent machines will be virtual, meaning they
will exist and act solely within computer networks. . . . It is always possible to turn off a computer network,
even if painful.” https://www.recode.net/2015/3/2/11559576/.
3 The AI100 report (Peter Stone et al.), sponsored by Stanford University, includes the following: “Unlike
in the movies, there is no race of superhuman robots on the horizon or probably even possible.”
https://ai100.stanford.edu/2016-report.