Back to Results

HOUSE_OVERSIGHT_011298.jpg

Source: HOUSE_OVERSIGHT  •  other  •  Size: 0.0 KB  •  OCR Confidence: 85.0%
Download Original Image

Extracted Text (OCR)

Origins February 24 — 26, 2017 PROJECT An Origins Project Scientific Workshop ARIZONA STATE UNIVERSITY Challenges of Artificial Intelligence: Envisioning and Addressing Adverse Outcomes 5) Al, GOALS, AND INADVERTENT SIDE EFFECTS Runaway Resource Monopoly (focus) Self-improvement, Shift of Objectives (Contributions from Shahar Avin, Sean O hEigeartaigh, Greg Cooper, and others) An important result from theoretical consideration of risks from advanced autonomous systems is the combination of two theses: orthogonality, that states that the goal an autonomous system is trying to achieve can be entirely unrelated to its optimization power; and the notion of instrumental goals, that for a large class of goals there is a set of convergent sub-goals (for an agent advanced enough to discover them) that include self- and goal-preservation, resource- and capacity-increase, etc. (e.g., as discussed in Bostrom, 2014). One suggestion for maintaining control over advanced systems that pose risks from the combination of the above considerations is to limit the system's ability to access increasing resources. To make this situation concrete, consider an installation of a reinforcement-learning task scheduler for a group of distributed data centres (e.g. Amazon Web Services). The goal of the algorithm is to minimize time-to-execution of the tasks sent to the system by users. As part of its general scheduling remit, it is also responsible for scheduling its own optimization sub-processes. The system has a clear incentive to control an increasing set of compute resources, both for increasing its optimization power and for achieving its main goal of reducing time-to-execution. Aware of these considerations, the engineers of the system put in place various hard-coded limits on the amount of resources the system can access, but these limits can be subverted through privilege escalation, masquerading as other tasks, manipulation of users, physical control, etc. POSSIBLE TRAJECTORY e Ateam within a large tech corporation that has both ML development capacities and cloud computing capacities is tasked with improving task scheduling on distributed compute resources. e The team decides to deploy an out-of-the-box reinforcement learning package developed in-house by the ML research teams. e The inputs for the system are current loads on the different machines, the incoming tasks queue (including priority for different tasks), and historical data on task runtimes. The output is an assignment of tasks to machines. The loss function is the priority-weighted time-to-execute. e The system performs well in a test environment (where the RL is running on a single cluster of dedicated machines), and is rolled-out. e A few months later, the system starts to run out of memory, and a tech-infrastructure engineer decides to switch the system from a fixed-capacity setting to a load-balanced setting. e Now an output of the system (assignment of the RL task to a machine) is coupled to the objective of the machine (reducing runtime), and the resulting feedback loop drives the RL agent to spawn an increasing amount of RL tasks with very high priority. 15 HOUSE_OVERSIGHT_011298

Document Preview

HOUSE_OVERSIGHT_011298.jpg

Click to view full size

Document Details

Filename HOUSE_OVERSIGHT_011298.jpg
File Size 0.0 KB
OCR Confidence 85.0%
Has Readable Text Yes
Text Length 3,203 characters
Indexed 2026-02-04T16:13:24.602335

Related Documents

Documents connected by shared names, same document type, or nearby in the archive.

Ask the Files