HOUSE_OVERSIGHT_016987.jpg
Extracted Text (OCR)
von Neumann and others on computers came not directly from Turing machines but
through this bypath of neural networks.
But simple neural networks didn’t do much. Frank Rosenblatt invented a learning
device he called the perceptron, which was a one-layer neural network. In the late sixties,
Marvin Minsky and Seymour Papert wrote a book titled Perceptrons, in which they
basically proved that perceptrons couldn’t do anything interesting, which is correct.
Perceptrons could only make linear distinctions between things. So the idea was more or
less dropped. People said, “These guys have written a proof that neural networks can’t
do anything interesting, therefore no neural networks can do anything interesting, so let’s
forget about neural networks.” That attitude persisted for some time.
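A minimal sketch of the linear-distinction point above: a one-layer perceptron trained with Rosenblatt's update rule learns AND, which a single line can separate, but can never learn XOR, which no line can. The code is illustrative only; nothing like it appears in the text.

```python
# Rosenblatt's perceptron update rule on two toy truth tables.
# AND is linearly separable, so training converges; XOR is not,
# which is the core of Minsky and Papert's argument.

def train_perceptron(samples, epochs=25, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = y - pred              # 0 if correct, +1 or -1 if wrong
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def accuracy(weights, samples):
    w1, w2, b = weights
    return sum(
        (1 if (w1 * x1 + w2 * x2 + b) > 0 else 0) == y
        for (x1, x2), y in samples
    ) / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable

print(accuracy(train_perceptron(AND), AND))  # 1.0
print(accuracy(train_perceptron(XOR), XOR))  # stays below 1.0 no matter what
```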
Meanwhile, there were a couple of other approaches to AI. One was based on
understanding, at a formal level, symbolically, how the world works; and the other was
based on doing statistics and probabilistic kinds of things. With regard to symbolic AI,
one of the test cases was, Can we teach a computer to do something like integrals? Can
we teach a computer to do calculus? There were tasks like machine translation, which
people thought would be a good example of what computers could do. The bottom line is
that by the early seventies, that approach had crashed.
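For context on the integrals test case: symbolic integration is one place where the symbolic approach did eventually pay off, and it is routine in today's computer algebra systems. The SymPy call below is a modern stand-in chosen purely for illustration; the text names no particular system.

```python
# Symbolic integration, the test case mentioned above, as handled by a
# modern computer algebra system (SymPy, used here only as an example).
from sympy import symbols, integrate, sin, exp

x = symbols("x")
print(integrate(x**2 * exp(x), x))  # (x**2 - 2*x + 2)*exp(x)
print(integrate(sin(x)**2, x))      # x/2 - sin(x)*cos(x)/2
```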
Then there was a trend toward devices called expert systems, which arose in the
late seventies and early eighties. The idea was to have a machine learn the rules that an
expert uses and thereby figure out what to do. That petered out. After that, AI became
little more than a crazy pursuit.
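The expert-systems idea, reduced to its simplest form: knowledge captured as if-then rules, applied by forward chaining until nothing new follows. The rules and facts below are invented for illustration and do not reproduce any real system.

```python
# A toy forward-chaining rule engine in the spirit of expert systems:
# a rule fires when all of its conditions are among the known facts.

RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "not_vaccinated"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash", "not_vaccinated"}, RULES))
# includes 'suspect_measles' and 'recommend_isolation'
```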
I had been interested in how you make an AI-like machine since I was a kid. I was
interested particularly in how you take the knowledge we humans have accumulated in
our civilization and automate answering questions on the basis of that knowledge. I
thought about how you could do that symbolically, by building a system that could break
down questions into symbolic units and answer them. I worked on neural networks at
that time and didn’t make much progress, so I put it aside for a while.
Back in mid-2002 to 2003, I thought about that question again: What does it take
to make a computational knowledge system? The work I’d done by then pretty much
showed that my original belief about how to do this was completely wrong. My original
belief had been that in order to make a serious computational knowledge system, you first
had to build a brainlike device and then feed it knowledge—just as humans learn in
standard education. Now I realized that there wasn’t a bright line between what is
intelligent and what is simply computational.
I had assumed that there was some magic mechanism that made us vastly more
capable than anything that was just computational. But that assumption was wrong. This
insight is what led to Wolfram|Alpha. What I discovered is that you can take a large
collection of the world’s knowledge and automatically answer questions on the basis of
it, using what are essentially merely computational techniques. It was an alternative way
to do engineering—a way that’s much more analogous to what biology does in evolution.
In effect, what you normally do when you build a program is to build it step by step.
But you can also explore the computational universe and mine technology from that
universe. Typically, the challenge is the same as in physical mining: that is, you find a
supply of, let’s say, iron, or cobalt, or gadolinium, with some special magnetic properties,
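A sketch of what "mining the computational universe" can look like in practice: enumerate the 256 elementary cellular automata and keep those whose center column looks statistically random. The balance test below is an invented stand-in for a real randomness criterion, and the code is not Wolfram's actual procedure; rule 30, which he has used as a randomness source, is the kind of find such a search turns up.

```python
# Enumerate all 256 elementary cellular automaton rules and "mine" for
# ones whose center column looks random. The width is chosen large enough
# that the boundary never influences the recorded column.

def step(cells, rule):
    """One update of an elementary CA with the given rule number."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def center_column(rule, width=401, steps=200):
    cells = [0] * width
    cells[width // 2] = 1          # single black cell as initial condition
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = step(cells, rule)
    return column

def looks_random(bits):
    # Crude screen: roughly balanced ones/zeros and frequent but not
    # strictly periodic flips. A real search would use serious tests.
    ones = sum(bits) / len(bits)
    flips = sum(a != b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)
    return 0.4 < ones < 0.6 and 0.3 < flips < 0.7

survivors = [r for r in range(256) if looks_random(center_column(r))]
print(survivors)  # expect rule 30 among them
```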
Extracted Information
People Mentioned
von Neumann, Frank Rosenblatt, Marvin Minsky, Seymour Papert
Document Details
| Field | Value |
| --- | --- |
| Filename | HOUSE_OVERSIGHT_016987.jpg |
| File Size | 0.0 KB |
| OCR Confidence | 85.0% |
| Has Readable Text | Yes |
| Text Length | 3,587 characters |
| Indexed | 2026-02-04T16:29:52.517555 |