
HOUSE_OVERSIGHT_013024.jpg

Source: HOUSE_OVERSIGHT  •  Size: 0.0 KB  •  OCR Confidence: 85.0%

Extracted Text (OCR)

6 A Brief Overview of CogPrime

...top-level goals will be simple things such as pleasing the teacher, learning new information and skills, and protecting the robot's body. Figure 6.3 shows part of the architecture via which cognitive processes interact with each other, by commonly acting on the AtomSpace knowledge repository.

Comparing these diagrams to the integrative human cognitive architecture diagrams given in Chapter 5, one sees that the main difference is that the CogPrime diagrams commit to specific structures (e.g. knowledge representations) and processes, whereas the generic integrative architecture diagram refers merely to types of structures and processes. For instance, the integrative diagram refers generally to declarative knowledge and learning, whereas the CogPrime diagram refers to PLN, a specific system for reasoning and learning about declarative knowledge. Table 6.1 articulates the key connections between the components of the CogPrime diagram and those of the integrative diagram, thus indicating the general cognitive functions instantiated by each of the CogPrime components.

6.3 Current and Prior Applications of OpenCog

Before digging deeper into the theory, and elaborating some of the dynamics underlying the above diagrams, we pause to briefly discuss some of the practicalities of work done with the OpenCog system currently implementing parts of the CogPrime architecture.

OpenCog, the open-source software framework underlying the "OpenCogPrime" (currently partial) implementation of the CogPrime architecture, has been used for commercial applications in the area of natural language processing and data mining; for instance, see [GPPG06], where OpenCogPrime's PLN reasoning and RelEx language processing are combined to do automated biological hypothesis generation based on information gathered from PubMed abstracts. Most relevantly to the present work, it has also been used to control virtual agents in virtual worlds [GEA08].
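The blackboard-style interaction described above, in which cognitive processes coordinate only indirectly by reading and writing a shared knowledge repository, can be sketched minimally as follows. This is an illustrative toy, not the real OpenCog API: the names `Atom`, `AtomSpace`, and the two example processes are hypothetical stand-ins for OpenCog's far richer atom types and MindAgents.

```python
# Minimal sketch of the shared-repository pattern: several cognitive
# processes never call each other directly; they interact only through
# a common store of "atoms". All names here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    kind: str          # e.g. "ConceptNode", "InheritanceLink"
    name: str
    out: tuple = ()    # outgoing set, for link-style atoms

class AtomSpace:
    def __init__(self):
        self._atoms = set()

    def add(self, atom: Atom) -> Atom:
        self._atoms.add(atom)
        return atom

    def query(self, kind: str):
        return [a for a in self._atoms if a.kind == kind]

def perception_process(space: AtomSpace, observed: str):
    """Writes what it observes into the shared space."""
    space.add(Atom("ConceptNode", observed))

def inference_process(space: AtomSpace):
    """Reads concepts written by other processes and adds derived links."""
    for c in space.query("ConceptNode"):
        space.add(Atom("InheritanceLink", f"{c.name}->thing", (c.name, "thing")))

space = AtomSpace()
perception_process(space, "dog")
perception_process(space, "ball")
inference_process(space)
print(len(space.query("InheritanceLink")))  # → 2
```

The design point this illustrates is decoupling: the inference process knows nothing about perception; it only sees whatever atoms currently exist in the shared space, which is what lets heterogeneous processes be added or removed independently.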
Prototype work done during 2007-2008 involved using an OpenCog variant called the OpenPetBrain to control virtual dogs in a virtual world (see Figure 6.6 for a screenshot of an OpenPetBrain-controlled virtual dog). While these OpenCog virtual dogs did not display intelligence closely comparable to that of real dogs (or human children), they did demonstrate a variety of interesting and relevant functionalities, including:

• learning new behaviors based on imitation and reinforcement
• responding to natural language commands and questions, with appropriate actions and natural language replies
• spontaneous exploration of their world, remembering their experiences and using them to bias future learning and linguistic interaction

One current OpenCog initiative involves extending the virtual dog work by using OpenCog to control virtual agents in a game world inspired by the game Minecraft. These agents are initially specifically concerned with achieving goals in a game world via constructing structures with blocks and carrying out simple English communications. Representative example tasks would be:

• learning to build steps or ladders to get desired objects that are high up
• learning to build a shelter to protect itself from aggressors
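The "build steps to reach a high object" task above can be illustrated with a toy reinforcement-learning sketch. This uses plain tabular Q-learning on a tiny state space (the current stack height), purely as an assumed stand-in; it is not OpenCog's actual learning machinery, and the environment and action names are invented for illustration.

```python
# Toy Q-learning sketch of the "stack blocks to reach a high object" task.
# State: current stack height. Actions: place a block, or wander uselessly.
# Reward arrives only when the stack reaches the goal height.
import random

random.seed(0)
ACTIONS = ["place_block", "wander"]
GOAL_HEIGHT = 3
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = {(h, a): 0.0 for h in range(GOAL_HEIGHT + 1) for a in ACTIONS}

def step(height, action):
    """Toy environment: placing a block raises the stack; wandering does not."""
    if action == "place_block":
        height = min(height + 1, GOAL_HEIGHT)
    reward = 1.0 if height == GOAL_HEIGHT else 0.0
    return height, reward, height == GOAL_HEIGHT

for _ in range(500):  # training episodes
    h, done = 0, False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)           # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(h, x)])  # exploit
        h2, r, done = step(h, a)
        best_next = max(Q[(h2, x)] for x in ACTIONS)
        Q[(h, a)] += alpha * (r + gamma * best_next - Q[(h, a)])
        h = h2

# After training, the agent prefers stacking blocks from the ground state.
print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # → place_block
```

Because wandering leaves the state unchanged while stacking moves toward the only rewarded state, discounting (gamma < 1) guarantees the stacking action's value dominates, which is the basic mechanism behind reward-driven behavior learning of the kind the virtual agents demonstrate.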
