4.5.2 The Society of Mind and the Emotion Machine
In his influential but controversial book The Society of Mind [Min88], Marvin Minsky describes
a model of human intelligence as built up from the interactions of numerous
simple agents. He spells out in great detail how various particular cognitive functions may be
achieved via agents and their interactions. He leaves no room for any central algorithms or
structures of thought, famously arguing: “What magical trick makes us intelligent? The trick
is that there is no trick. The power of intelligence stems from our vast diversity, not from any
single, perfect principle.”
This perspective was extended in the more recent work The Emotion Machine [Min07], where
Minsky argued that emotions are “ways to think” evolved to handle different “problem types”
that exist in the world. The brain is posited to have rule-based mechanisms (selectors) that
turn on emotions to deal with various problems.
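To make the idea concrete, here is a minimal sketch (not drawn from Minsky's own writings) of a selector rendered as a rule table: a recognized problem type switches on a corresponding "way to think." The problem types, the ways of thinking, and the mapping between them are all illustrative assumptions.

```python
from enum import Enum, auto

class ProblemType(Enum):
    """Illustrative problem types a situation might be classified into."""
    IMMINENT_DANGER = auto()
    SOCIAL_CONFLICT = auto()
    NOVEL_PUZZLE = auto()

class WayToThink(Enum):
    """Illustrative 'ways to think' in the spirit of The Emotion Machine."""
    FEAR_DRIVEN_AVOIDANCE = auto()
    EMPATHIC_REASONING = auto()
    PLAYFUL_EXPLORATION = auto()

# A selector is modeled here as a simple rule table: when a recognized
# problem type arises, the selector switches on the corresponding
# emotional mode of thought.  (Assumed mapping, for illustration only.)
SELECTOR_RULES = {
    ProblemType.IMMINENT_DANGER: WayToThink.FEAR_DRIVEN_AVOIDANCE,
    ProblemType.SOCIAL_CONFLICT: WayToThink.EMPATHIC_REASONING,
    ProblemType.NOVEL_PUZZLE: WayToThink.PLAYFUL_EXPLORATION,
}

def select_way_to_think(problem: ProblemType) -> WayToThink:
    """Return the way of thinking a selector would activate for this problem."""
    return SELECTOR_RULES[problem]

if __name__ == "__main__":
    print(select_way_to_think(ProblemType.SOCIAL_CONFLICT))
```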
Overall, both of these works serve better as works of speculative cognitive science than as
works of AI or cognitive architecture per se. As neurologist Richard Restak said in his review
of The Emotion Machine, "Minsky does a marvelous job parsing other complicated mental activities
into simpler elements. ... But he is less effective in relating these emotional functions to what’s
going on in the brain.” As Restak added, he is also not so effective at relating these emotional
functions to straightforwardly implementable algorithms or data structures.
Push Singh, in his PhD thesis and follow-up work [SBC05], did the best job so far of creating
a concrete AI design based on Minsky’s ideas. While Singh’s system was certainly interesting,
it was also noteworthy for its lack of any learning mechanisms, and its exclusive focus on
explicit rather than implicit knowledge. Due to Singh’s tragic death, his work was never brought
anywhere near completion. It seems fair to say that no serious cognitive architecture has yet
been proposed that is based closely on Minsky's ideas.
4.5.3 DUAL
The closest thing to a Minsky-ish cognitive architecture is probably DUAL, which takes the
Society of Mind concept and adds to it a number of other interesting ideas. DUAL integrates
symbolic and connectionist approaches at a deeper level than CLARION, and has been used
to model various cognitive functions such as perception, analogy and judgment. Computations
in DUAL emerge from the self-organized interaction of many micro-agents, each of which is
a hybrid symbolic/connectionist device. Each DUAL agent plays the role of a neural network
node, with an activation level and activation spreading dynamics; but also plays the role of
a symbol, manipulated using formal rules. The agents exchange messages and activation via
links that can be learned and modified, and they form coalitions which collectively represent
concepts, episodes, and facts.
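As a rough illustration of this hybrid character, the sketch below models a micro-agent as an object that carries both a symbolic frame and an activation level, with activation spreading over weighted links and decaying over time. It is only a toy rendering of the idea; the class names, decay rule, and squashing function are assumptions, not the actual DUAL implementation.

```python
import math

class MicroAgent:
    """A DUAL-style hybrid node: connectionist activation plus a symbolic frame.

    Illustrative only; the decay constant, link format, and frame
    representation are assumptions rather than the real DUAL code.
    """

    def __init__(self, name, frame=None, decay=0.1):
        self.name = name            # symbolic label of the agent
        self.frame = frame or {}    # slot/filler structure (symbolic aspect)
        self.activation = 0.0       # activation level (connectionist aspect)
        self.links = {}             # neighbor agent -> link weight
        self.decay = decay          # per-step activation decay rate

    def add_link(self, other, weight):
        self.links[other] = weight

    def receive(self, amount):
        # Keep activation bounded with a squashing function.
        self.activation = math.tanh(self.activation + amount)


def step(agents):
    """One synchronous sweep: collect activation transfers, then apply them."""
    transfers = []
    for agent in agents:
        for neighbor, weight in agent.links.items():
            transfers.append((neighbor, agent.activation * weight))
        agent.activation *= (1.0 - agent.decay)   # unused agents fade away
    for neighbor, amount in transfers:
        neighbor.receive(amount)


# Tiny example: two agents whose coalition stands for the concept 'cup'.
cup = MicroAgent("cup", frame={"isa": "container"})
handle = MicroAgent("handle", frame={"part-of": "cup"})
cup.add_link(handle, 0.6)
handle.add_link(cup, 0.4)

cup.receive(1.0)                    # perceptual input activates 'cup'
for _ in range(3):
    step([cup, handle])
print(round(handle.activation, 3))  # 'handle' becomes active via spreading
```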
The structure of the model is sketchily depicted in Figure 4.10, which covers the application
of DUAL to a toy environment called TextWorld. The visual input corresponding to a stim-
ulus is presented on a two-dimensional visual array representing the front end of the system.
Perceptual primitives like blobs and terminations are immediately generated by cheap parallel
computations. Attention is controlled at each time step by an object that allocates it selectively
to some area of the stimulus. A detailed symbolic representation is constructed for this area;
it tends to fade away as attention is withdrawn from it and allocated to another area. Cate-