2021, Artificial Intelligence in Medicine
AI-generated Abstract
Artificial intelligence (AI) has the potential to significantly enhance healthcare efficacy and patient outcomes, yet its integration poses substantial ethical challenges. The paper outlines these challenges through the lens of four core ethical principles: respect for autonomy, beneficence, nonmaleficence, and justice. Special focus is given to the implications of informed consent in the context of AI's decision-making, the transformative effect on the physician-patient relationship, and the necessity for clinicians to maintain a critical view of AI outputs to safeguard patient trust and care quality.
Pre-Print, 2019
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be 'Artificial Intelligence' (AI), particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by "robot doctors." Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients' health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively, and we map the key considerations for policymakers to each of the ethical concerns highlighted.
2024
Artificial intelligence's impact on healthcare is undeniable. What is less clear is whether it will be ethically justifiable. Just as we know that AI can be used to diagnose disease, predict risk, develop personalized treatment plans, monitor patients remotely, or automate triage, we also know that it can pose significant threats to patient safety and the reliability (or trustworthiness) of the healthcare sector as a whole. These ethical risks arise from (a) flaws in the evidence base of healthcare AI (epistemic concerns); (b) the potential of AI to transform fundamentally the meaning of health, the nature of healthcare, and the practice of medicine (normative concerns); and (c) the 'black box' nature of the AI development pipeline, which undermines the effectiveness of existing accountability mechanisms (traceability concerns). In this chapter, we systematically map (a)-(c) to six different levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. The aim is to help policymakers, regulators, and other high-level stakeholders delineate the scope of regulation and other 'softer' governing measures for AI in healthcare. We hope that by doing so, we may enable global healthcare systems to capitalize safely and reliably on the many life-saving and life-improving benefits of healthcare AI.
Science of Law, 2024
In this study, the researchers aim to establish how Artificial Intelligence (AI) has revolutionized the healthcare industry and the ethical and legal issues pertaining to the use of such technology in this sector. The study provides recommendations for implementing value-adding measures to ensure the safe, secure, and ethical use of AI in healthcare, as well as addressing important concerns and providing solutions to effectively implement AI. Using a quantitative research design, the study draws on primary and secondary data to critically analyze relevant literature and existing information. It highlights key challenges arising from the current limits of AI regulation in healthcare, including but not limited to informed consent, transparency, privacy, data protection, and fairness. The study is fundamentally important to the theory and practice of the implementation of AI technologies, as it illustrates the high potential of using them in the sphere of patient care and, at the same time, cites significant ethical and legal issues in their application. To fully achieve the rightly hailed benefits of AI in healthcare, we must address these issues. To use AI components responsibly, ethical and legal rules and regulations must change to accommodate key concerns such as consent, ownership, disclosure, and bias. These measures are critically important to centre the protection of patient rights and build confidence in healthcare organizations. Consequently, this study offers practical policy implications that policymakers, healthcare practitioners, and technologists should consider when implementing regulatory policies. Such frameworks, on one hand, enable AI-driven innovation in healthcare while, on the other, maintaining compliance to guarantee that such solutions will be both effective and fair.
Social Science & Medicine, 2020
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be 'ethically mindful'? A series of screening stages were carried out, for example removing articles that focused on digital health in general (e.g. data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. Finally, we outline a number of considerations for policymakers and regulators, mapping these to existing literature, and categorising each as epistemic, normative or traceability-related and at the relevant level of abstraction. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new 'AI winter' could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.
Frontiers in Surgery, 2022
The legal and ethical issues that confront society due to Artificial Intelligence (AI) include privacy and surveillance, bias or discrimination, and, perhaps the deepest philosophical challenge, the role of human judgment. Concerns have arisen that newer digital technologies may become a new source of inaccuracy and data breaches. In healthcare, mistakes in a procedure or protocol can have devastating consequences for the patient who is the victim of the error. It is crucial to remember that patients come into contact with physicians at moments in their lives when they are most vulnerable. Currently, there are no well-defined regulations in place to address the legal and ethical issues that may arise due to the use of artificial intelligence in healthcare settings. This review attempts to address these pertinent issues, highlighting the need for algorithmic transparency, privacy, cybersecurity, and the protection of all the beneficiaries involved.
Journal of Science, Technology and Innovation Policy
The idea of integrating ethics into artificial intelligence (AI) has gained traction globally, and it has become an important policy objective in many countries. The ethics of AI has seen significant press coverage in recent years, which supports related research but may also end up undermining it: much of the discussion treats the issues as predictions of what future technology will bring, as though we already know what would be most ethical and how to achieve it. This paper is a literature review in nature; it analyzes previous studies related to the implementation of ethics in AI. The literature results indicate that between 2010 and 2021 there were 150 AI ethical incidents, including data privacy and security risks, safety concerns, biased diagnosis, the possibility of hostile entities taking control of AI, a lack of interpersonal communication or a humanistic perspective, wealth concentration around AI businesses, and job losses. The findings obtained from this literature review can help to propose methods for addressing these concerns.
2020
With the rise of artificial intelligence (AI) and its application within industries, there is no doubt that someday AI will be one of the key players in medical diagnoses, assessments and treatments. With the involvement of AI in health care and medicine comes concerns pertaining to its application, more specifically its impact on both patients and medical professionals. To further expand on the discussion, using ethics of care, literature and a systematic review, we will address the impact of allowing AI to guide clinicians with medical procedures and decisions. We will then argue that the impact of allowing AI to guide clinicians with medical procedures and decisions can hinder patient-clinician relationships, concluding with a discussion on the future of patient care and how ethics of care can be used to investigate issues within AI in medicine.
Journal of Bioethical Inquiry, 2021
The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients, with resultant ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi- or fully autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI (and those who control it) has created challenges to conventional assessments of patient and physician autonomy. The unclear legal relationship between AI and its users cannot be settled presently, and progress in AI and its implementation in patient care will necessitate an iterative discourse to preserve humanitarian concerns in future models of care. This paper proposes that physicians should neither uncritically accept nor unreasonably resist developments in AI but must actively engage and contribute to the discourse, since AI will affect their roles and the nature of their work. One's moral imaginative capacity must be engaged in the questions of beneficence, autonomy, and justice of AI and whether its integration in healthcare has the potential to augment or interfere with the ends of medical practice.
Bioethics, 2021
Medical AI is increasingly being developed and tested to improve medical diagnosis, prediction and treatment of a wide array of medical conditions. Despite worries about the explainability and accuracy of such medical AI systems, it is reasonable to assume that they will be increasingly implemented in medical practice. Current ethical debates focus mainly on design requirements and suggest embedding certain values such as transparency, fairness, and explainability in the design of medical AI systems. Aside from concerns about their design, medical AI systems also raise questions with regard to physicians' responsibilities once these technologies are being implemented and used. How do physicians' responsibilities change with the implementation of medical AI? Which set of competencies do physicians have to learn to responsibly interact with medical AI? In the present article, we will introduce the notion of forward-looking responsibility and enumerate through this conceptual lens a number of competencies and duties that physicians ought to employ to responsibly utilize medical AI in practice. Those include, among others, understanding the range of reasonable outputs, being aware of one's own experience and skill decline, and monitoring potential accuracy decline of the AI systems.
AI and ethics, 2022
Artificial intelligence (AI) is being increasingly applied in healthcare. The expansion of AI in healthcare necessitates that AI-related ethical issues be studied and addressed. This systematic scoping review was conducted to identify the ethical issues of AI application in healthcare, to highlight gaps, and to propose steps to move towards an evidence-informed approach for addressing them. A systematic search was conducted to retrieve all articles examining the ethical aspects of AI application in healthcare from Medline (PubMed) and Embase (OVID), published between 2010 and July 21, 2020. The search terms were "artificial intelligence" or "machine learning" or "deep learning" in combination with "ethics" or "bioethics". The studies were selected utilizing a PRISMA flowchart and predefined inclusion criteria. Ethical principles of respect for human autonomy, prevention of harm, fairness, explicability, and privacy were charted. The search yielded 2166 articles, of which 18 articles were selected for data charting on the basis of the predefined inclusion criteria. The focus of many articles was a general discussion about ethics and AI. Nevertheless, there was limited examination of ethical principles in terms of consideration for design or deployment of AI in most retrieved studies. In the few instances where ethical principles were considered, fairness, preservation of human autonomy, explicability and privacy were equally discussed. The principle of prevention of harm was the least explored topic. Practical tools for testing and upholding ethical requirements across the lifecycle of AI-based technologies are largely absent from the body of reported evidence. In addition, the perspective of different stakeholders is largely missing.