
HOUSE_OVERSIGHT_016249.jpg

Source: HOUSE_OVERSIGHT  •  other  •  Size: 0.0 KB  •  OCR Confidence: 85.0%

Extracted Text (OCR)

THE PURPOSE PUT INTO THE MACHINE

Stuart Russell

Stuart Russell is a professor of computer science and Smith-Zadeh Professor in Engineering at UC Berkeley. He is the coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach.

Among the many issues raised in Norbert Wiener’s The Human Use of Human Beings (1950) that are currently relevant, the most significant to the AI researcher is the possibility that humanity may cede control over its destiny to machines. Wiener considered the machines of the near future as far too limited to exert global control, imagining instead that machines and machine-like control systems would be wielded by human elites to reduce the great mass of humanity to the status of “cogs and levers and rods.” Looking further ahead, he pointed to the difficulty of correctly specifying objectives for highly capable machines, noting

    a few of the simpler and more obvious truths of life, such as that when a djinnee is found in a bottle, it had better be left there; that the fisherman who craves a boon from heaven too many times on behalf of his wife will end up exactly where he started; that if you are given three wishes, you must be very careful what you wish for.

The dangers are clear enough:

    Woe to us if we let [the machine] decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us! On the other hand, the machine like the djinnee, which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us.

Ten years later, after seeing Arthur Samuel’s checker-playing program learn to play checkers far better than its creator, Wiener published “Some Moral and Technical Consequences of Automation” in Science. In this paper, the message is even clearer:

    If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire. ...

In my view, this is the source of the existential risk from superintelligent AI cited in recent years by such observers as Elon Musk, Bill Gates, Stephen Hawking, and Nick Bostrom.

Putting Purposes Into Machines

The goal of AI research has been to understand the principles underlying intelligent behavior and to build those principles into machines that can then exhibit such behavior. In the 1960s and 1970s, the prevailing theoretical notion of intelligence was the capacity for logical reasoning, including the ability to derive plans of action guaranteed to achieve a specified goal. More recently, a consensus has emerged around the idea of a rational
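The rational-agent framing in the final paragraph, a machine that selects whichever action maximizes a specified objective, is what makes Wiener’s warning concrete: the machine pursues exactly the purpose it is given and nothing else. The sketch below is a minimal Python illustration, not from the essay; the action set, outcome features, and objective functions are all invented for this example.

    # Minimal sketch (hypothetical example, not from the essay): a "rational agent"
    # that picks the action maximizing a user-specified objective. The agent is
    # indifferent to anything the objective omits, which is the crux of the
    # misspecified-purpose problem Wiener and Russell describe.

    ACTIONS = ["brew_coffee", "fetch_coffee_fast", "do_nothing"]

    # Invented world model: each action leads to an outcome with some features.
    OUTCOMES = {
        "brew_coffee":       {"coffee_delivered": 1, "kitchen_intact": 1, "minutes": 5},
        "fetch_coffee_fast": {"coffee_delivered": 1, "kitchen_intact": 0, "minutes": 1},
        "do_nothing":        {"coffee_delivered": 0, "kitchen_intact": 1, "minutes": 0},
    }

    def rational_agent(objective):
        """Return the action that maximizes the given objective function."""
        return max(ACTIONS, key=lambda a: objective(OUTCOMES[a]))

    # A naively specified purpose: "get coffee as fast as possible".
    naive_objective = lambda o: o["coffee_delivered"] * 10 - o["minutes"]

    # A more careful purpose that also values the state of the kitchen.
    careful_objective = lambda o: (o["coffee_delivered"] * 10 - o["minutes"]
                                   + o["kitchen_intact"] * 100)

    print(rational_agent(naive_objective))    # -> fetch_coffee_fast
    print(rational_agent(careful_objective))  # -> brew_coffee

Under the naive objective the agent fetches coffee as fast as possible even though the kitchen is left wrecked, because the stated purpose never mentions the kitchen; the careful objective differs only in encoding that additional preference.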


Extracted Information

Document Details

Filename: HOUSE_OVERSIGHT_016249.jpg
File Size: 0.0 KB
OCR Confidence: 85.0%
Has Readable Text: Yes
Text Length: 2,829 characters
Indexed: 2026-02-04T16:27:27.763483
