2024, Philosophy and Technology
https://doi.org/10.1007/s13347-024-00755-7…
18 pages
Risks connected with AI systems have become a recurrent topic in public and academic debates, and the European proposal for the AI Act explicitly adopts a risk-based tiered approach that associates different levels of regulation with different levels of risk. However, a comprehensive and general framework to think about AI-related risk is still lacking. In this work, we aim to provide an epistemological analysis of such risk, building upon the existing literature on disaster risk analysis and reduction. We show how a multi-component analysis of risk that distinguishes between the dimensions of hazard, exposure, and vulnerability allows us to better understand the sources of AI-related risks and to intervene effectively to mitigate them. This multi-component analysis also turns out to be particularly useful in the case of general-purpose and experimental AI systems, for which it is often hard to perform both ex-ante and ex-post risk analyses.
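The abstract names the three components but does not spell out how they combine. As a reading aid only, the sketch below assumes the multiplicative convention common in the disaster risk reduction literature (risk as the product of hazard, exposure, and vulnerability) and uses hypothetical 0-1 scores; the point is simply that mitigation can target any one of the three components rather than the hazard alone.

```python
# Illustrative sketch only: the paper does not give a formula. A common convention
# in disaster risk reduction treats risk as the product of hazard, exposure, and
# vulnerability, each scored here on a hypothetical 0-1 scale.
from dataclasses import dataclass

@dataclass
class RiskProfile:
    hazard: float         # how harmful the AI system's failure modes are
    exposure: float       # how many people or assets are within the system's reach
    vulnerability: float  # how poorly those exposed can absorb or recover from harm

    def score(self) -> float:
        """Aggregate multi-component risk score (multiplicative convention)."""
        return self.hazard * self.exposure * self.vulnerability

# Example: a widely deployed system with moderate failure severity but weak
# safeguards for vulnerable users. Lowering any one component lowers the score.
profile = RiskProfile(hazard=0.4, exposure=0.9, vulnerability=0.7)
print(f"risk score: {profile.score():.2f}")
```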
2023
The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with a framework developed in the Intergovernmental Panel on Climate Change (IPCC) reports and related literature. This approach enables a nuanced analysis of AI risk by exploring the interplay between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We further refine the proposed methodology by applying a proportionality test to balance the competing values involved in AI risk assessment. Finally, we present three uses of this approach under the AIA: to implement the Regulation, to assess the significance of risks, and to develop internal risk management systems for AI deployers.
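The abstract describes risk scenarios built from determinants, drivers, and risk types, but gives no data model or numbers here. The sketch below is a hypothetical illustration of that structure; the determinant names, driver scores, and aggregation rule are assumptions for demonstration, not the paper's methodology.

```python
# Hypothetical sketch of the scenario structure named in the abstract:
# risk determinants, individual drivers of each determinant, and a risk type.
# All names, scores, and the aggregation rule are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Determinant:
    name: str                  # e.g. "hazard", "exposure", "vulnerability"
    drivers: Dict[str, float]  # individual drivers, scored 0-1 for a given scenario

    def level(self) -> float:
        """Average driver score as a crude proxy for the determinant's level."""
        return sum(self.drivers.values()) / len(self.drivers)

@dataclass
class RiskScenario:
    description: str
    risk_type: str             # e.g. "physical", "fundamental rights", "economic"
    determinants: List[Determinant] = field(default_factory=list)

    def magnitude(self) -> float:
        """Hypothetical aggregation: product of determinant levels."""
        m = 1.0
        for d in self.determinants:
            m *= d.level()
        return m

scenario = RiskScenario(
    description="CV-screening model used to rank job applicants",
    risk_type="fundamental rights",
    determinants=[
        Determinant("hazard", {"bias in training data": 0.6, "opacity of the model": 0.5}),
        Determinant("exposure", {"share of applicants screened automatically": 0.8}),
        Determinant("vulnerability", {"absence of meaningful human review": 0.7}),
    ],
)
print(f"{scenario.risk_type} risk magnitude: {scenario.magnitude():.2f}")
```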
AI advances represent a great technological opportunity, but also possible perils. This paper undertakes an ethical and systematic evaluation of those risks in a pragmatic analytical form of questions, which we term ‘Conceptual AI Risk analysis’. We then look at a topical case example in an actual industrial setting and apply that methodology in outline. The case involves Deep Learning Black-Boxes and their risk issues in an environment that requires compliance with legal rules and industry best practices. We examine a technological means to attempt to solve the Black-box problem for this case, referred to as “Really Useful Machine Learning” (RUMLSM). DARPA has identified such cases as being the “Third Wave of AI.” Conclusions as to its efficacy are drawn.
Social Science Research Network, 2023
The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.
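The abstract does not publish numeric cut-offs for the four AIA categories. The sketch below assumes hypothetical thresholds and scenario scores purely to illustrate the central point: the category attaches to a concrete scenario rather than to the field of application alone, which matters for general-purpose systems such as LLMs.

```python
# Hypothetical illustration of mapping a scenario-level risk magnitude onto the
# AIA's four categories. The thresholds and the LLM scenario scores below are
# assumptions for demonstration; the paper does not state numeric cut-offs.
AIA_THRESHOLDS = [
    (0.75, "unacceptable"),
    (0.50, "high"),
    (0.25, "limited"),
]

def categorize(magnitude: float) -> str:
    """Return an AIA risk category for a scenario's estimated magnitude (0-1)."""
    for threshold, category in AIA_THRESHOLDS:
        if magnitude >= threshold:
            return category
    return "minimal"

# Two scenarios for the same general-purpose LLM end up in different categories,
# illustrating why scenario-based assessment differs from field-of-application rules.
scenarios = {
    "general-purpose LLM drafting marketing copy": 0.15,
    "same LLM triaging emergency-room patients": 0.68,
}
for name, magnitude in scenarios.items():
    print(f"{name}: {categorize(magnitude)}")
```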
arXiv (Cornell University), 2024
The integration of advanced artificial intelligence (AI) across contemporary sectors and industries is not just a technological upgrade but a transformation with the potential to have profound implications. This paper explores the concept of structural risks associated with the rapid integration of advanced AI across social, economic, and political systems. This framework challenges conventional perspectives that primarily focus on direct AI threats such as accidents and misuse and suggests that these more proximate risks influence and are influenced by larger sociotechnical dynamics. By analyzing the complex interactions between technological advancements and social dynamics, this study identifies three primary categories of structural risks: antecedent structural causes, antecedent AI system causes, and deleterious feedback loops. We present a comprehensive framework to understand the causal chains that drive these risks, highlighting the interdependence between structural forces and the more proximate risks of misuse, system failures, and the diffusion of misaligned systems. The paper articulates how unchecked AI advancement can reshape power dynamics, trust, and incentive structures, with the potential for profound and unpredictable societal shifts. We introduce a methodological research agenda for mapping, simulating, and gaming these dynamics aimed at preparing policymakers and national security professionals for the challenges posed by next-generation AI technologies. The paper concludes with policy recommendations to incorporate a more nuanced understanding of the sociotechnical nexus into international governance and strategy.
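As a reading aid, the snippet below simply encodes the three categories of structural risk named in the abstract; the example entries are hypothetical illustrations and are not drawn from the paper.

```python
# Reading aid only: the three categories of structural risk named in the abstract,
# paired with hypothetical examples of how proximate risks could link to each.
from enum import Enum

class StructuralRiskCategory(Enum):
    ANTECEDENT_STRUCTURAL_CAUSES = "antecedent structural causes"
    ANTECEDENT_AI_SYSTEM_CAUSES = "antecedent AI system causes"
    DELETERIOUS_FEEDBACK_LOOPS = "deleterious feedback loops"

examples = {
    StructuralRiskCategory.ANTECEDENT_STRUCTURAL_CAUSES:
        "competitive race dynamics eroding pre-deployment safety testing",
    StructuralRiskCategory.ANTECEDENT_AI_SYSTEM_CAUSES:
        "a widely reused foundation model propagating one failure mode across sectors",
    StructuralRiskCategory.DELETERIOUS_FEEDBACK_LOOPS:
        "model-generated content feeding back into the data future models train on",
}
for category, example in examples.items():
    print(f"{category.value}: {example}")
```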
European Journal of Risk Regulation, 2021
The issue of super-intelligent artificial intelligence (AI) has begun to attract ever more attention in economics, law, sociology and philosophy studies. A new industrial revolution is being unleashed, and it is vital that lawmakers address the systemic challenges it is bringing while regulating its economic and social consequences. This paper sets out recommendations to ensure informed regulatory intervention covering potential uncontemplated AI-related risks. If AI evolves in ways unintended by its designers, the judgment-proof problem of existing legal persons engaged with AI might undermine the deterrence and insurance goals of classic tort law, which consequently might fail to ensure optimal risk internalisation and precaution. This paper also argues that, due to identified shortcomings, the debate on the different approaches to controlling hazardous activities boils down to a question of efficient ex ante safety regulation. In addition, it is suggested that it is better to pla...
arXiv (Cornell University), 2023
Given rapid progress toward advanced AI and risks from frontier AI systems (advanced AI systems pushing the boundaries of the AI capabilities frontier), the creation and implementation of AI governance and regulatory schemes deserves prioritization and substantial investment. However, the status quo is untenable and, frankly, dangerous. A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight. In response, frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of frontier AI systems. Yet, the budding AI risk evaluation ecosystem faces significant coordination challenges, such as limited diversity and independence of evaluators, suboptimal allocation of effort, and perverse incentives. This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators. Such a consortium could play a critical role in international efforts to mitigate societal-scale risks from advanced AI, including in managing responsible scaling policies and coordinated evaluation-based risk response. In this paper, we discuss the current evaluation ecosystem and its shortcomings, propose an international consortium for advanced AI risk evaluations, discuss issues regarding its implementation, discuss lessons that can be learned from previous international institutions and existing proposals for international AI governance institutions, and finally recommend concrete steps to advance the establishment of the proposed consortium: solicit feedback from stakeholders, conduct additional research, hold one or more workshops for stakeholders, create a final proposal and solicit funding, and create a consortium.
2022
The chapter on takeover scenarios (§8) was partially inspired by similar work by Kaj Sotala [1]. All errors are our own. Note: while this report is very long, and the chapters do reference each other, they are also largely self-contained. Chapters can be read in whatever order you prefer, and you don't need to read the whole thing to make sense of the parts you're interested in reading.