EFTA00656059.pdf
From: roger schank
To: Jeffrey Epstein <jeevacation@gmail.com>
Subject: Re: Today's discussion
Date: Fri, 15 Feb 2013 12:54:25 +0000
On Feb 15, 2013, at 7:45 AM, Jeffrey Epstein wrote:
got it, thanks. ok, why don't you put together a list of people, an agenda, a proposed time,
ok
it can be at harvard, if you want, or the origins institute in arizona, if you think a university setting helps.
don't see why it would
just a nice place is good
If storytelling is important, should the goal be to tell a funny story?
the goal of any workshop should be to tell a coherent story of what the plan is; amazing how rarely this happens
why is humor not taught in schools?
aha! I finally have you asking good questions about schools
On Fri, Feb 15, 2013 at 7:23 AM, roger schank wrote:
in case you were asking about the people mentioned:
if john anderson is doing cognitive modeling it's news to me; my colleague bob abelson used to say john had 200 IQ disease (so smart he couldn't think clearly)
Pat Langley knows nothing as far as I can tell
Irene Pepperberg used to be hot; isn't now; was never that bright
never met Damasio; people say he is very smart
Aaron Sloman was the wrongest AI person I ever knew
put all these people in a room and you will get what you got at the last VI thing
academics like to debate angels on the head of a pin, which this seems to be another one of
put like-minded people in a room and ask them to come up with a plan to build something and you have a chance
On Feb 14, 2013, at 11:17 PM, Jeffrey Epstein wrote:
waste?
Forwarded message
From: Joscha Bach
Date: Thu, Feb 14, 2013 at 6:42 PM
Subject: Re: Today's discussion
To: "S.M. Kosslyn"
Cc: Jeffrey Epstein <jeevacation@gmail.com>
Dear Stephen,
thank you for your (so far) unending patience in this discussion. --> http://www.xkcd.com/386/
>> I agree, and yet only a subset of animals can use symbols for communication.
> communication and intelligence are not the same thing (think about bees etc)
>> Of these, only a small subset can make use of negation, conjunctions and disjunctions in symbolic
communication (for instance, Irene Pepperberg's famous grey parrots). It appears that only humans can
learn rich grammatical language, and I suspect that this is the primary enabler of our superior problem
solving capabilities.
> I disagree. Einstein claimed that his greatest discoveries came from mental imagery, and he later
converted those thoughts to verbal expressions only with great difficulty. I think AI has vastly
underestimated the role of "mental simulation/emulation" in thinking and reasoning.
Nonverbal thinking is primary, and is poorly understood, and it has been neglected by what we now often
call "classic Al". (But if we look at the original ideas, that was probably not intentional. Logic based
systems were low hanging fruit. Once you have a paradigm and a community, you end up with a
methodology that is bound to stay, unfortunately.)
But nonverbal thinking is something that I suspect is quite similarly powerful in other primates. I think that
the most interesting difference between chimps and humans is how we can use grammatical language to
"program" and organize our thought processes, and how we can use it to suggest, transmit, create and
manipulate new categories.
I am on your side insofar as I think that the important research needs to be done in mental imagery (or more
accurately: on mental representations and operations that facilitate mental imagery, among other things).
But I think that human intelligence is shaped by the addition of grammar, which happens to be relatively easy to implement when you look at it in isolation. However, grammar on its own cannot do what Einstein did.
>>> So... what would be wrong with building a machine that could do well on IQ tests?
>> Nothing is wrong with building a machine that excels at playing chess or brewing coffee or scoring well on the Raven test.
> The Raven would be a bad idea -- way too easy. The WAIS has some 11 subtests, which cover a wide
range of underlying abilities (and are much more challenging)
Let's look at them (I have to admit that I am no expert on this, and it has been quite some time since I looked at IQ testing):
- The processing speed tests are probably trivial for computers
- The working memory tests are likewise rather simple engineering problems
- Perceptual reasoning is somewhat similar to the Raven (maybe I underestimate them?)
- Verbal comprehension:
- similarities and vocabulary tests are classical AI and computational linguistics
- information is close to IBM's Watson (recognition and inference)
The only thing that looks interesting to me in the WAIS is the comprehension test, because I don't see a straightforward way to cheat on it with narrow AI. I would like to expand exactly this domain:
making sense of the world.
We don't have advanced problem solving ("these are the rules for chess. how would you try to beat a
beginner level player, a medium player, a top player most quickly?"). We don't have constructive abilities.
We don't have verbal creativity etc.
Please tell me if my take on the WAIS is wrong!
> How do you know for sure what the "basics" are?
While the literal understanding of the Turing Test leads nowhere (or, well, to the Loebner prize), I think that
he had the right idea. Intelligence is reflected in the ability to participate in meaningful discourse, which
includes interpreting and creatively structuring the world. Many of the things that the WAIS measures, like
recognizing and categorizing shapes, are prerequisites for that. Others might be acquired tastes that emerge
on more basic functionality, like mental arithmetic. But a toolbox is not an architecture. A collection of
tubes, tires, pedals and spokes is not a bicycle.
Some of the basics stem directly from the requirements of producing adequate representations of perceptual
and abstracted content (hybrid hierarchical representations that can do associations, compositional stuff,
grammatical systematicity, learning and categorization, inheritance etc.). Others come from the needs to get
the processes of bottom-up/top-down perception, reflection, memory retrieval, inference, analogy building
etc. to work. And some have to do with the requirements of translating between Mentalese (in Pinker's
sense, not in Fodor's) and natural language.
I may delude myself in thinking that I know what the basics are. In fact, it is extremely likely that I do delude myself (every computer science problem seems to be misconceptualized until it has been properly implemented).
But I would start with mental representation, perceptual processing and motivational relevance, and then go
for language, while revisiting those areas that turn out to fall short.
> Forget about the Raven; it's a non-verbal test of fluid intelligence (which in fact turns out to have, by
accident not design, two different types of items -- solved by spatial vs. analytic strategies). The Raven does
not even begin to characterize all of what is captured by the WAIS
It might well be that I totally misunderestimate the WAIS requirements; I will look at them.
>> I agree. But I am not convinced that proper emotions are absolutely necessary for intelligence
(motivation might suffice to drive some kind of non-emotional, serene Buddha intelligence). I am
nevertheless interested in understanding and modeling them.
> I think Antonio Damasio and his successors have made a very good case that emotion plays a key role in
reasoning. (Not just motivation, actual emotion.)
Most of what I would say against that has been better expressed by Aaron Sloman. For instance, if my
computer is prone to swapping memory content to hard drive and back, and I kill the part of the OS that
coordinates the swapping, my computer is likely to malfunction. But from this I cannot infer that computers
cannot work without swapping.
Damasio's argument does not convince me because he does not elucidate a functional role that would make emotion an absolute requirement for an artificial mind. Lesion studies amount to shutting down parts of an
operating system that has been designed to cope with very specific requirements. I believe that in humans,
emotions structure social interaction, support communication, prime memory and cognitive processing, and
most importantly, allocate the scarce resources of our mind according to the current situation. None of this
is necessary if I remove the resource constraints.
But again, perhaps there are better arguments now than in the original somatic marker hypothesis?
Please do not misunderstand me; I am making a merely philosophical point here, with respect to the basic requirements for intelligence. I think that emotion is highly interesting, that Damasio is quite correct with
respect to what emotion does, and that it makes a lot of sense (and is fun) to equip AIs with emotion, mood,
affect and emotional dispositions. But strictly necessary? No.
> Are you involved in BICA? That seems like a natural community for you!
The way I understand it, there are at least four very similar groups now: cognitive modeling (that is where
John Anderson goes), AGI (started by Ben Goertzel as an attempt to revive the original AI), BICA (a
remnant from the failed DARPA proposal of the same name, and later picked up by Alexei Samsonovich as
an alternative to AGI, I suspect because he does not get along with Ben), and Cognitive Systems (Pat
Langley et al.). I basically like them all, and think that they should join forces, while simultaneously raising
the bars against narrow AI and science fiction. Many members of the audience already belong to two or
even three of the groups. Alas, politics, mutual accusations of scruffiness and stuffiness, and so on...
Personally, I have not been to one of the BICA conferences (only a couple of planning workshops), but I am on their roster of reviewers.
Cheers,
Joscha
The information contained in this communication is
confidential, may be attorney-client privileged, may
constitute inside information, and is intended only for
the use of the addressee. It is the property of
Jeffrey Epstein
Unauthorized use, disclosure or copying of this
communication or any part thereof is strictly prohibited
and may be unlawful. If you have received this
communication in error, please notify us immediately by
return e-mail or by e-mail to jeevacation@gmail.com, and
destroy this communication and all copies thereof,
including all attachments. copyright -all rights reserved
roger schank
john evans professor emeritus, northwestern university