
HOUSE_OVERSIGHT_016252.jpg

Source: HOUSE_OVERSIGHT  •  other  •  Size: 0.0 KB  •  OCR Confidence: 85.0%

Extracted Text (OCR)

risk easily managed and far in the future, but also it’s extremely unlikely that we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever-more-capable AI systems. A more apt analogy would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we’d arrived.

• Human-level AI isn’t really imminent, in any case. The AI100 report, for example, assures us, “Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind.” This argument simply misstates the reasons for concern, which are not predicated on imminence. In his 2014 book, Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, for one, writes, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.”

• You’re just a Luddite. It’s an odd definition of Luddite that includes Turing, Wiener, Minsky, Musk, and Gates, who rank among the most prominent contributors to technological progress in the 20th and 21st centuries.[4] Furthermore, the epithet represents a complete misunderstanding of the nature of the concerns raised and the purpose for raising them. It is as if one were to accuse nuclear engineers of Luddism if they pointed out the need for control of the fission reaction. Some objectors also use the term “anti-AI,” which is rather like calling nuclear engineers “anti-physics.” The purpose of understanding and preventing the risks of AI is to ensure that we can realize the benefits. Bostrom, for example, writes that success in controlling AI will result in “a civilizational trajectory that leads to a compassionate and jubilant use of humanity’s cosmic endowment”—hardly a pessimistic prediction.

• Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives.[5] (Often, the argument adds the premise that people of greater intelligence tend to have more altruistic objectives, a view that may be related to the self-conception of those making the argument.) This argument is related to Hume’s is-ought problem and G. E. Moore’s naturalistic fallacy, suggesting that somehow the machine, as a result of its intelligence, will simply perceive what is right, given its experience of the world. This is implausible; for example, one cannot perceive, in the design of a chessboard and chess pieces, the goal of checkmate; the same chessboard and pieces can be used for suicide chess, or indeed many other games still to be invented. Put another way: Where Bostrom imagines humans driven extinct by a putative robot that turns the planet into a sea of paper clips, we humans see this outcome as tragic,

[4] Elon Musk, Stephen Hawking, and others (including, apparently, the author) received the 2015 Luddite of the Year Award from the Information Technology and Innovation Foundation: https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-itif%E2%80%99s-annual-luddite-award.

[5] Rodney Brooks, for example, asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.” http://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/


Document Details

Filename: HOUSE_OVERSIGHT_016252.jpg
File Size: 0.0 KB
OCR Confidence: 85.0%
Has Readable Text: Yes
Text Length: 3,656 characters
Indexed: 2026-02-04T16:27:30.570678
