The Pace of AI Technological Advancement Requires an Equal Pace of Standardisation, Regulatory and Ethical Considerations
Artificial intelligence (AI) and machine learning (ML) technologies are transforming our lives and have the potential to revolutionise many aspects of healthcare by deriving new and important insights from the vast amounts of data generated every day, improving healthcare delivery and clinical outcomes. Medical device manufacturers are using these technologies to innovate their products, better assist healthcare providers and improve patient care.
There has been much in the news of late about regulating AI, as the technology and its applications have significant implications for the lives of many. In a number of regulatory jurisdictions, AI regulation features in legal frameworks and is referenced in the development of public sector policies and laws for promoting and regulating technologies that utilise AI. The most prominent example is the AI Act, a proposed European law on AI that focuses on data quality, transparency, human oversight and accountability. The AI Act would ban, regulate or leave unregulated AI systems based on their risk level, and would impose substantial fines on developers for violations.
Regulatory Responses to Development in AI
On 21 April 2021, the European Commission (EC) published a proposal for a regulation governing artificial intelligence (AI) and its annexes. Following this, the European Union (EU) issued a new legal framework to significantly bolster regulations on the development and use of AI. The Artificial Intelligence Act aims to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects European values and rules, and harness the potential of AI for industrial use.
A key part of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health, safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal. AI systems with limited and minimal risk, such as spam filters or video games, are allowed with few requirements beyond transparency obligations. Systems deemed to pose an unacceptable risk, such as government social scoring and real-time biometric identification systems in public spaces, are prohibited with few exceptions.
EU AI Act: different rules for different risk levels
The new rules establish obligations for providers and users depending on the level of risk posed by the artificial intelligence. While many AI systems pose minimal risk, they still need to be assessed.
Unacceptable risk
Unacceptable-risk AI systems are systems considered a threat to people and will be banned, such as government social scoring and real-time biometric identification in public spaces.
Some exceptions may be allowed: for instance, "post" remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only after court approval.
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories: AI systems used in products falling under the EU's product safety legislation, and AI systems falling into specific areas that will have to be registered in an EU database.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Generative AI
Generative AI, like ChatGPT, would have to comply with transparency requirements such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.
Limited risk
Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. After interacting with an application, the user can then decide whether they want to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
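The four-tier classification described above can be sketched as a simple lookup. This is an illustrative sketch only: the example system categories are hypothetical placeholders drawn from the examples in this article, not an official taxonomy from the Act, and a real conformity assessment is a legal exercise, not a dictionary lookup.

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# Categories listed are examples from this article, not an official taxonomy.
RISK_TIERS = {
    "unacceptable": {"government social scoring",
                     "real-time public biometric identification"},
    "high": {"medical device software", "credit scoring"},
    "limited": {"chatbot", "deepfake generator"},
    "minimal": {"spam filter", "video game"},
}

def classify(system_category: str) -> str:
    """Return the risk tier for a known category; default to 'minimal'."""
    for tier, categories in RISK_TIERS.items():
        if system_category in categories:
            return tier
    return "minimal"
```

Each tier then carries its own obligations: a "minimal" result means few requirements, "limited" triggers transparency duties, "high" triggers pre-market and lifecycle assessment, and "unacceptable" means the system is prohibited.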
The legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.
Next steps
On 14 June 2023, MEPs adopted Parliament's negotiating position on the AI Act. Talks will now begin with EU countries in the Council on the final form of the law, with the aim of reaching an agreement by the end of this year.
On 29 March 2023, the UK Government launched an AI white paper to guide the use of artificial intelligence, drive responsible innovation and maintain public trust in this revolutionary technology.
On 22 September 2021, the U.S. Food and Drug Administration (FDA) issued the "Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan" from the Center for Devices and Radiological Health's Digital Health Center of Excellence. The Action Plan was a direct response to stakeholder feedback on the April 2019 discussion paper, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device", and outlines five actions the FDA intends to take.
In addition, on 30 March 2023 the US FDA's CDRH issued draft guidance on Predetermined Change Control Plans for Artificial Intelligence/Machine Learning-Enabled Medical Devices, and in April 2023 further information was released on the same guidance as marketing submission recommendations. This draft guidance proposes a science-based approach to ensuring that AI/ML-enabled devices can be safely, effectively and rapidly modified, updated and improved in response to new data. The FDA proposes that this approach would put safe and effective advancements in the hands of health care providers and users faster, increasing the pace of medical device innovation in the United States and enabling more personalised medicine. In addition to across-the-board, or "global", device updates, under the proposed approach AI/ML-enabled devices could be more extensively and rapidly modified to learn and adapt to local conditions. This means, for example, that diagnostic devices could be built to adapt to the data and needs of individual health care facilities, and that therapeutic devices could be built to learn and adapt to deliver treatments according to individual users' particular characteristics and needs.
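The idea behind a predetermined change control plan is that the manufacturer pre-specifies which kinds of model modifications are anticipated and what acceptance criteria an update must meet. A minimal sketch of that gating logic is below; all names, modification types and thresholds are hypothetical illustrations, not terms defined by the FDA draft guidance.

```python
# Hypothetical sketch of a predetermined change control plan (PCCP) gate:
# deploy a model update only if the modification type was pre-authorised in
# the plan AND the update meets the pre-specified performance criterion.
# Names and thresholds are illustrative, not drawn from the FDA guidance.
from dataclasses import dataclass

@dataclass
class ChangeControlPlan:
    allowed_modifications: set   # modification types anticipated by the plan
    min_sensitivity: float       # pre-specified acceptance criterion

def may_deploy(plan: ChangeControlPlan,
               modification: str,
               new_sensitivity: float) -> bool:
    """Approve only changes anticipated by the plan that meet its criteria."""
    return (modification in plan.allowed_modifications
            and new_sensitivity >= plan.min_sensitivity)

plan = ChangeControlPlan(
    allowed_modifications={"retraining on new site data"},
    min_sensitivity=0.90,
)
```

Under such a scheme, a retraining that stays within the plan and passes its acceptance criterion could be deployed without a new marketing submission, while a change outside the plan (for example, a new intended use) would fall back to the normal regulatory route.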
Standardisation of AI and its Applications
The collaborative plan of AAMI and BSI to develop TIR34971 as an international standard via ISO Technical Committee (TC) 210 is a positive step forward. The following are a handful of ISO/IEC standards and technical reports that are published and available for the respective industries to use as applicable:
In consideration of technological developments in the medical device industries and regulatory practice, I propose that the next revision of the following ISO standards should consider the inclusion of symbol(s) for AI, either as shown below or similar, if a medical device employs the application of artificial intelligence technology. Symbol (a) has been proposed in my previous publications; however, other similar ones such as (b) and (c) are also popular choices according to the professionals consulted:
In greater detail, medical devices utilising a specific aspect of AI, such as ChatGPT, can be symbolised with any of the following (a to f):
These symbols are to be used with medical device labels and labelling, and form part of the information to be supplied by the manufacturer to inform end-users of the technologies applied and the respective cautionary information to be considered.
It is strongly advised that the ISO Technical Committee ISO/TC 210 consider updating the following standards with the symbol for AI:
·        ISO 15223-1:2021 Medical devices – Symbols to be used with medical device labels, labelling and information to be supplied – Part 1: General requirements
·        ISO 20417:2021 Medical devices – Information to be supplied by the manufacturer
·        ISO 7000:2019 Graphical symbols for use on equipment – Registered symbols
This request resonates with a similar publication campaign I championed advising ISO on the inclusion of the Distributor and Importer symbols (ISO 7000-3724 and ISO 7000-3725 respectively), as well as others such as the symbols for tissues of animal origin and nanotechnology, which were successfully published in ISO standards. For further background reading: LinkedIn and JMDR
The experienced Technology and Regulatory teams at Triune Technologies Limited (Triune Technologies) are well placed to help with any project within the Digital Health space that employs AI. Contact and engage us for help at any stage of your project, through to product or service realisation.