
HOUSE_OVERSIGHT_014709.jpg

Source: HOUSE_OVERSIGHT  •  other  •  Size: 0.0 KB  •  OCR Confidence: 85.0%

Extracted Text (OCR)

Origins Project
An Origins Project Scientific Workshop
Challenges of Artificial Intelligence: Envisioning and Addressing Adverse Outcomes
February 24 — 26, 2017, Arizona State University

the agent must rely on the data gathered from the sensors (there is no human in the loop to decide this), there can be unexpected situations where the agent would stop some human interaction with the system or interrupt maintenance activities because it deemed that these actions could harm the system. For example, the system administrator stopping some services during system maintenance, or upgrading to a newer software version.

• Replication to third-party systems and collateral damage — Building on the first problem of the agent not having the correct information: if the friendly network gets misconfigured and the agents have the capability to self-transfer to new friendly hosts, the agents could distribute to external networks, start defending them, and take responsive actions on third-party hosts. Such incidents would make the agents very difficult to halt.

• Friendly fire — One agent might consider another agent an adversary, and the two would start trying to eliminate or evade each other.

• Silent compromise — If an adversary manages to gain access to or reverse-engineer the agents (without the agents self-destructing), they could potentially trick or reconfigure the agents to turn on themselves.

CYBER-OFFENSE

Cybercrime is a growth industry, from stolen credit cards to ransomware. Very crudely, it is a two-tier system: at the low-skill end, a "spray and pray" approach targets millions of systems in the hope that some of them will be vulnerable (through technical or human failings); at the other end are tailor-made attacks that rely on a slow progression of escalation and compromise, often requiring advanced technical skills for discovering zero-day vulnerabilities and intimate knowledge of the target.
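The friendly-network misconfiguration failure mode described above can be sketched in miniature. The scenario below is a hypothetical illustration, not from the source: an agent that gates self-transfer on a CIDR allowlist, where a single misconfigured range makes every external host look "friendly" and therefore a valid replication target.

```python
import ipaddress

def is_friendly(host_ip, friendly_cidrs):
    """Return True if host_ip falls inside any configured friendly network."""
    addr = ipaddress.ip_address(host_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in friendly_cidrs)

# Intended configuration: only the internal network counts as friendly.
intended = ["10.0.0.0/8"]

# Misconfiguration: one overly broad entry covers all of IPv4, so the agent
# would treat arbitrary third-party hosts as targets for self-transfer.
misconfigured = ["0.0.0.0/0"]

print(is_friendly("203.0.113.7", intended))       # external host -> False
print(is_friendly("203.0.113.7", misconfigured))  # same host -> True
```

The point of the sketch is how quietly the failure occurs: nothing in the agent's decision logic changes, only one configuration value, which is why the source flags such incidents as very difficult to halt once agents have spread.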
Advanced artificial intelligence may be used to automate some or all of the components of contemporary "elite" cybercrime, such that generic offensive toolkits could become available to small criminal groups, leading to a world where individuals and companies do not feel safe and cannot trust their governments and the police to protect them. At the same time, significant wealth could be accumulated by those groups unscrupulous enough to use such tools, transferring significant power to those who put little value on the property rights of others. Such wealth and power could be used to further develop cyber-offensive capabilities, leading to a positive-feedback loop that may outpace similar feedback loops in less harmful industries, e.g. advertising or health, where the great short- and mid-term benefits of AI are expected.

PERSISTENT CYBERWARFARE?

Systems such as the DARPA Cyber Grand Challenge promise adaptive software security that automatically explores vulnerabilities and patches them in friendly systems, but is also able to exploit them in opposing systems in "capture the flag" tournaments. As methods of developing such systems improve, an arms race emerges between actors in the cybersecurity space, dominated by major nation states eager both to improve their own resilience in a scalable way and to find choice zero-day exploits suitable for intelligence purposes, supported by national security concerns. Other actors such as corporations and criminal networks also spend effort building or copying such systems. Meanwhile
