I was introduced to Max Tegmark some years ago by his MIT colleague Alan Guth, the
father of the inflationary universe. A distinguished theoretical physicist and cosmologist himself, Max is nowadays chiefly concerned with the looming existential risk posed by the creation of an AGI (artificial general intelligence; that is, one that matches human
intelligence). Four years ago, Max co-founded, with Jaan Tallinn and others, the Future
of Life Institute (FLI), which bills itself as "an outreach organization working to ensure that tomorrow's most powerful technologies are beneficial for humanity." While on a book tour in London, in the midst of planning for FLI, he admits to being driven
to tears in a tube station after a trip to the London Science Museum, with its exhibitions
spanning the gamut of humanity’s technological achievements. Was all that impressive
progress in vain?
FLI’s scientific advisory board includes Elon Musk, Frank Wilczek, George
Church, Stuart Russell, and the Oxford philosopher Nick Bostrom, who dreamed up an
oft-quoted Gedankenexperiment that results in a world full of paper clips and nothing
else, produced by an (apparently) well-meaning AGI who was just following orders. The
Institute sponsors conferences (Puerto Rico 2015, Asilomar 2017) on AI safety issues and in 2018 launched a grants competition focusing on research in aid of maximizing the
societal benefits of AGI.
While Max is sometimes listed—by the non-cognoscenti—on the side of the
scaremongers, he believes, like Frank Wilczek, in a future that will immensely benefit from AGI if, in the attempt to create it, we can keep the human species from being sidelined.