2022
The adoption of AI in the health sector has its share of benefits and harms for various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people's lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in order to protect the fundamental interests and rights of those affected. This will increase the level to which these systems become ethically acceptable, legally permissible, and socially sustainable. In this paper, we first discuss the necessity of AI risk management in the health domain from the ethical, legal, and societal perspectives. We then present a taxonomy of risks associated with the use of AI systems in the health domain called HART, accessible online at https://w3id.org/hart. HART mirrors the risks of a variety of different real-world incidents caused by the use of AI in the health sector. Lastly, we discuss the implications of the taxonomy for different stakeholder groups and further research. Index Terms: risk, AI systems, health, AI regulation, ethics of AI, AI public policy, taxonomy. This contribution will be published in the proceedings of the Fourth International Conference on Transdisciplinary AI (TransAI 2022).
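A hierarchical risk taxonomy such as the one the abstract describes can be represented as a simple tree of named categories. The sketch below is purely illustrative and assumes nothing about the actual HART schema; the category names are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class RiskCategory:
    """One node in a hierarchical risk taxonomy (illustrative; not the HART schema)."""
    name: str
    description: str = ""
    children: list["RiskCategory"] = field(default_factory=list)

    def all_names(self) -> list[str]:
        """Flatten the subtree into a pre-order list of category names."""
        names = [self.name]
        for child in self.children:
            names.extend(child.all_names())
        return names

# Hypothetical example categories, for illustration only.
root = RiskCategory("health-AI risk", children=[
    RiskCategory("patient safety", children=[RiskCategory("misdiagnosis")]),
    RiskCategory("privacy", children=[RiskCategory("data breach")]),
])
print(root.all_names())
# ['health-AI risk', 'patient safety', 'misdiagnosis', 'privacy', 'data breach']
```

A tree of this shape makes it easy to attach real-world incidents to leaf categories while still aggregating them upward for stakeholder-level reporting.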
Health systems worldwide are facing unprecedented pressure as the needs and expectations of patients increase and grow ever more complicated. The global health system is thus forced to leverage every opportunity, including artificial intelligence (AI), to provide care that is consistent with patients' needs. Meanwhile, there are serious concerns about how AI tools could threaten patients' rights and safety. Therefore, this study maps available evidence, from January 1, 2010 to September 30, 2023, on the perceived threats posed by the usage of AI tools in healthcare to patients' rights and safety. We deployed guidelines based on those of Tricco et al. to conduct a comprehensive search of literature from Nature, PubMed, Scopus, ScienceDirect, Dimensions, EBSCOhost, ProQuest, JSTOR, Semantic Scholar, Taylor & Francis, Emerald, World Health Organisation, and Google Scholar. In keeping with the inclusion and exclusion thresholds, 14 peer-reviewed articles were included in this stu...
2024
Artificial intelligence's impact on healthcare is undeniable. What is less clear is whether it will be ethically justifiable. Just as we know that AI can be used to diagnose disease, predict risk, develop personalized treatment plans, monitor patients remotely, or automate triage, we also know that it can pose significant threats to patient safety and the reliability (or trustworthiness) of the healthcare sector as a whole. These ethical risks arise from (a) flaws in the evidence base of healthcare AI (epistemic concerns); (b) the potential of AI to transform fundamentally the meaning of health, the nature of healthcare, and the practice of medicine (normative concerns); and (c) the 'black box' nature of the AI development pipeline, which undermines the effectiveness of existing accountability mechanisms (traceability concerns). In this chapter, we systematically map (a)-(c) to six different levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. The aim is to help policymakers, regulators, and other high-level stakeholders delineate the scope of regulation and other 'softer' governing measures for AI in healthcare. We hope that by doing so, we may enable global healthcare systems to capitalize safely and reliably on the many life-saving and life-improving benefits of healthcare AI.
2023
This study aims to discuss key issues in the health sector in Turkey, focusing on a large-scale reform, evidence-based decision-making regarding innovative technologies and AI, stakeholder coordination, and monitoring of potential diseases and pandemics. The goal is to provide practical suggestions for AI usage in the health sector in Turkey. The researchers conducted literature reviews and searched for AI products/services online. AI offers innovative solutions across various sectors, including health. Advanced health technologies, such as gene-based, immune-based, and stem cell regeneration therapies and synthetic nano-biology, can lead to more predictive, preventive, corrective, personalized, and remotely collaborative health solution systems. The R&D process of health-related products and methods contains uncertainties, risks, and opportunities. Ethical concerns, social relations, and psychological and legal compliance levels are significant considerations for future AI applications. The importance of establishing long-term AI research and policies is emphasized while acknowledging the societal benefits provided by AI. The study offers suggestions for Turkish authorities on integrating technological innovations into the health sector.
Social Science & Medicine, 2020
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be 'ethically mindful'? A series of screening stages were carried out (for example, removing articles that focused on digital health in general, e.g. data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. Finally, we outline a number of considerations for policymakers and regulators, mapping these to existing literature, and categorising each as epistemic, normative or traceability-related and at the relevant level of abstraction. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new 'AI winter' could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.
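The two-axis classification this review describes (risk type crossed with level of abstraction) can be sketched as a small validation helper. The example issue below is a hypothetical placeholder, not an entry from the review itself.

```python
# The review's two axes: three risk types and six levels of abstraction.
RISK_TYPES = {"epistemic", "normative", "traceability"}
LEVELS = {"individual", "interpersonal", "group", "institutional", "sectoral", "societal"}

def classify(issue: str, risk_type: str, level: str) -> dict:
    """Record one ethical issue on the risk-type x level grid, rejecting unknown tags."""
    if risk_type not in RISK_TYPES:
        raise ValueError(f"unknown risk type: {risk_type}")
    if level not in LEVELS:
        raise ValueError(f"unknown level: {level}")
    return {"issue": issue, "risk_type": risk_type, "level": level}

# Hypothetical example: an epistemic concern affecting an individual patient.
entry = classify("inconclusive evidence behind a diagnostic model", "epistemic", "individual")
```

Tagging each issue on both axes in this way is what lets the authors aggregate considerations for policymakers by category and by level.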
The Review of Socionetwork Strategies
The Artificial Intelligence Act is the first EU proposal to impose binding provisions for AI systems. As AI is now used in several different fields, those provisions setting AI legal standards might interact with other Regulations, especially in risky sectors, e.g. healthcare. This paper offers a brief analysis of which standards are to be considered if an AI system were intended to be used as a medical device. It shows the intersection between the Medical Device Regulation and the Artificial Intelligence Act, presenting the opportunities and challenges the new proposal might bring. Three case studies will test the intersection and report the practical outcomes in terms of classifications and obligations.
American Journal of Bioethics, 2023
[01] A model of a person’s health condition does not of itself entail the analyst’s lack of moral concern for the person modeled. A competent physician can both treat the body as an object of technical analysis and at the same time grasp that it is always also a morally relevant human subject. [02] Identifying often subtle social determinants of an individual’s health is a technical task that requires interdisciplinary analysis combining, say, sociological scholarship with clinical insights. Scientists and engineers may unknowingly embed social prejudices in developing AI systems. But the goal of bias-free AI need not be defeated by this enduring danger. Researchers are well able to identify the biases where they manifest themselves and to make corrections in design or programming. [03] A medical professional who deploys a digital model of a patient need not confuse the model with reality. She need not displace bioethical principles of clinical practice with patient data. [04] The cognitive act of representing does not require the analyst to exclude alternative acts of representation, nor does it constrain her to represent in only certain ways. [05] Digital medical solutions pose no inherent threat to patients because social bias is not primarily a computational or algorithmic phenomenon. It is a product of institutions, cultural inheritances, poverty, and other environments that produce and perpetuate social inequities as well as some health disparities. AI-based medical practices can threaten physicians’ ethical obligations only if allowed to do so. [06] While AI may generate unwanted, unintended consequences, the potential moral and legal challenges that AI poses derive from inadequate precautionary measures by humans, not from features of AI as such. [07] Responsibility for failures of AI to meet normative standards for the treatment of human beings resides with human beings.
[08] The moral capacity of human cognition is the capacity for a mutual attribution of responsibility among members of political community. Outsourcing, to AI, moral and legal responsibility for social conditions that affect citizens adversely would undermine the politics of mutual responsibility. [09] The project of identifying AI-based medical solutions that threaten physicians’ ethical obligations should ask: How are real bodies to be digitally represented such that all members of the population benefit from these rapidly developing technologies equitably? This question is not about the nature of AI-based representation but about the just distribution, within a political community, of the health benefits that medical digital solutions may offer.
Recent advancements in artificial intelligence (AI) technologies have shown promising success in optimizing health-care processes and improving health services research and practice, leading to better health outcomes. However, the role of public health ethics in the era of AI is not widely evaluated. This article aims to describe the responsible approach to AI design, development, and use from a public health perspective. This responsible approach should focus on the collective well-being of humankind and incorporate ethical principles and societal values. Such approaches are important because AI concerns and impacts the health and well-being of all of us collectively. Rather than limiting such discourses to the individual level, ethical considerations regarding AI systems should be analyzed at large, considering the complex socio-technological reality around the world.
Artificial Intelligence in Medicine, 2021
Frontiers in Surgery, 2022
The legal and ethical issues that confront society due to Artificial Intelligence (AI) include privacy and surveillance, bias or discrimination, and, perhaps most philosophically challenging, the role of human judgment. Concerns have arisen that newer digital technologies may become a new source of inaccuracy and data breaches as a result of their use. Mistakes in procedure or protocol in the field of healthcare can have devastating consequences for the patient who is the victim of the error. Because patients come into contact with physicians at moments in their lives when they are most vulnerable, it is crucial to remember this. Currently, there are no well-defined regulations in place to address the legal and ethical issues that may arise due to the use of artificial intelligence in healthcare settings. This review attempts to address these pertinent issues, highlighting the need for algorithmic transparency, privacy, and protection of all the beneficiaries involved and cybersecur...
Science of Law, 2024
In this study, the researchers aim to establish how Artificial Intelligence (AI) has revolutionized the health care industry and the ethical and legal issues pertaining to the use of such technology in this sector. The study provides recommendations for implementing value-adding measures to ensure the safe, secure, and ethical use of AI in healthcare, as well as addressing important concerns and providing solutions to effectively implement AI. Using a quantitative research design, the study draws on primary and secondary data to critically analyze relevant literature and existing information. It highlights key challenges arising from the current limits of AI regulation in healthcare, including but not limited to informed consent, transparency, privacy, data protection, and fairness. The study is fundamentally important to the theory and practice of implementing AI technologies, as it illustrates the high potential of using them in the sphere of patient care while also identifying significant ethical and legal issues in their application. To fully achieve the rightly hailed benefits of AI in health care, we must address these issues. To use AI responsibly, rules and regulations setting ethical and legal standards must change to accommodate key concerns such as consent, ownership, disclosure, and bias. These measures are critically important to prioritize the protection of patient rights and build confidence in health care organizations. Consequently, this study offers practical policy implications that policymakers, healthcare practitioners, and technologists should consider when implementing regulatory policies. Such frameworks allow AI to bring innovation into the field of healthcare while maintaining compliance to guarantee that such solutions will be both effective and fair.