division has been critiqued by evolutionary biologist Richard Dawkins, myself, and
others. We can discuss “should” if framed as “we should do X in order to achieve Y.”
Which Y should be a high priority is not necessarily settled by democratic vote but might
be settled by Darwinian vote. Value systems and religions wax and wane, diversify,
diverge, and merge just as living species do: subject to selection. The ultimate “value”
(the “should”) is survival of genes and memes.
Few religions say that there is no connection between our physical being and the
spiritual world. Miracles are documented. Conflicts between Church doctrine and
Galileo and Darwin are eventually resolved. Faith and ethics are widespread in our
species and can be studied using scientific methods, including but not limited to fMRI,
psychoactive drugs, questionnaires, et cetera.
Very practically, we have to address the ethical rules that should be built in,
learned, or probabilistically chosen for increasingly intelligent and diverse machines. We
have a whole series of trolley problems. At what number of people in line for death
should the computer decide to shift a moving trolley to one person? Ultimately this
might be a deep-learning problem—one in which huge databases of facts and
contingencies can be taken into account, some seemingly far from the ethics at hand.
For example, the computer might infer that the person who would escape death if
the trolley is left alone is a convicted terrorist recidivist loaded up with doomsday
pathogens, or a saintly POTUS—or part of a much more elaborate chain of events in
detailed alternative realities. If one of these problem descriptions seems paradoxical or
illogical, it may be that the authors of the trolley problem have adjusted the weights on
each side of the balance such that hesitant indecision is inevitable.
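
As a purely illustrative aside, the “weights on each side of the balance” can be pictured as a toy calculation. The sketch below is a minimal Python example, not the author’s method: the per-person weights, the feature names, and the decision margin are all invented for illustration, showing how a carefully tuned problem statement can force an indeterminate, hesitant answer.

    # Illustrative sketch only: a trolley-style choice as a weighted comparison
    # of expected harm. Feature names and weights are hypothetical.

    def expected_harm(people, context_weights):
        """Sum per-person harm weights; 1.0 is a 'generic' person."""
        return sum(context_weights.get(p, 1.0) for p in people)

    def decide_switch(stay_track, switch_track, context_weights, margin=0.05):
        """Switch only if it clearly reduces weighted harm; otherwise hesitate.

        A narrow margin models the near-ties that tuned problem statements
        can produce, where the 'right' answer is deliberately unclear.
        """
        harm_stay = expected_harm(stay_track, context_weights)
        harm_switch = expected_harm(switch_track, context_weights)
        if harm_stay - harm_switch > margin:
            return "switch"
        if harm_switch - harm_stay > margin:
            return "stay"
        return "indeterminate"  # weights balanced to force hesitation

    # Hypothetical context: five generic people on the main track, one highly
    # weighted person on the siding, echoing the essay's re-weighted examples.
    weights = {"convicted_bioterrorist": 0.1, "saintly_potus": 10.0}
    print(decide_switch(["p1", "p2", "p3", "p4", "p5"], ["saintly_potus"], weights))

In this toy setup the contextual inferences (terrorist versus saintly POTUS) simply rescale individuals before the comparison, which is one crude way to read the essay’s point that the relevant facts may lie far from the ethics at hand.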
Alternatively, one can use misdirection to rig the system, such that the error
modes are not at the level of attention. For example, in the Trolley Problem, the real
ethical decision was made years earlier when pedestrians were given access to the rails—
or even before that, when we voted to spend more on entertainment than on public safety.
Questions that at first seem alien and troubling, like “Who owns the new minds, and who
pays for their mistakes?” are similar to well-established laws about who owns and pays
for the sins of a corporation.
The Slippery Slopes
We can (over)simplify ethics by claiming that certain scenarios won’t happen. The
technical challenges or the bright red lines that cannot be crossed are reassuring, but the
reality is that once the benefits seem to outweigh the risks (even briefly and barely), the
red lines shift. Just before Louise Brown’s birth in 1978, many people were worried that
she “would turn out to be a little monster, in some way, shape or form, deformed,
something wrong with her.”45 Few would hold this view of in-vitro fertilization today.
What technologies are lubricating the slope toward multiplex sentience? It is not
merely deep machine-learning algorithms with Big Iron. We have engineered rodents to
be significantly better at a variety of cognitive tasks as well as to exhibit other relevant
traits, such as persistence and low anxiety. Will this be applicable to animals that are
already at the door of humanlike intelligence? Several show self-recognition in a mirror
test—chimpanzees, bonobos, orangutans, some dolphins and whales, and magpies.
45. “Then, Doctors ‘All Anxious’ About Test-tube Baby,” CNN.com, http://edition.cnn.com/2003/HEALTH/parenting/07/25/cnna.copperman/