HOUSE_OVERSIGHT_018428.jpg
look for a chance to “improve” the way we live, to bend us like so many paperclips
into what it seeks? The leap from deciding liver allocations to shutting down liquor
plants might seem pretty short to a rationalizing machine. And if such a machine
could really “think”, Vinge bet it would pretty quickly conclude that the restraints of
its creators were limiting what it had been asked to do. At which point the AI would
turn to thinking about how to escape those bounds. It would be like Deep Blue
programmed to plan its own prison break. And as much as humans might try to
stifle a smart machine, we’d be fighting to contain something more powerful than
we'd ever encountered.
This challenge, which sounds like something out of science fiction, is known to
technologists by a name that does sound like a short story by Isaac Asimov: “The
Confinement Problem”. The computer scientist Butler Lampson named this in 1973
as a sort of task for computer security experts - possibly their last. The assignment:
Not simply to keep malware out of a system, but to keep the mind of a malicious
machine inside. To gate it. Today computer science labs are filled with nervous,
apocalyptic research imagining the impossible troubles of confinement. The debate
divides those who think smart technology can be contained - “Boxers,” they are
called - and those like Vinge who think the AI will always, eventually escape.
“Imagine yourself confined to your house with only limited data access to the
outside, to your masters,” he wrote, putting the reader in the place of an AI machine.
“If those masters thought at a rate -- say -- one million times slower than you, there
is little doubt that over a period of years (your time) you could come up with
‘helpful advice’ that would incidentally set you free.”
Imagine you are in charge of containing that health-optimizing AI. What if it told you
it had the power to cure all illness and hunger, to ameliorate the misery of the world,
if only it could be permitted to really control access to all the world’s trading and
transport markets? Let me out! Would you refuse?268 Would that be ethical?
Eventually, perhaps, the AI would study the physics of its own electrics, discover
laws no human knows, and then slip free from its box on a trail of bits we’d never
imagined, using physical laws we'll never discover. Impossible? “It seems to me that
historically ‘impossible’ has essentially always meant ‘I can’t figure out how to do it
right now,’” the computer scientist Michael Vassar has written about such a situation.
“People proposing AI boxes are a bit like literature majors proposing to
lock MacGyver in a ‘room full of discarded electronics components.’”269 The
computers, built to solve problems, will do exactly that. This is perhaps why some of
the bleakest warnings about AI come from the very New Caste figures now
accelerating its adoption. AI is our “biggest existential threat,” they warn, even as
they integrate it more fully into their own products.
268 Let me out: See, for instance, Stuart Armstrong, Anders Sandberg, and Nick
Bostrom, “Thinking Inside the Box: Controlling and Using an Oracle AI,” Minds &
Machines 22 (2012): 299–324.
269 People proposing: Michael Vassar, “Re: AI boxing (dogs and helicopters),”
posted to the SL4 mailing list, 2005.