Kaj Sotala
Papers by Kaj Sotala
Approach: We focus on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos.
Findings: Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible.
Value: Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.
My focus is on AI advanced enough to count as an AGI, or artificial general intelligence, rather than on risks from "narrow AI," such as technological unemployment. However, some of the risks discussed—in particular, crucial capabilities related to narrow domains—could arise anywhere on the path from narrow AI systems to superintelligence.