The Pace of AI Technological Advancement Requires Equal Pace of Standardisation, Regulatory and Ethical Considerations

Artificial intelligence and machine learning technologies are transforming our lives and have the potential to revolutionise many aspects of healthcare by deriving new and important insights from the vast amounts of data generated every day, improving healthcare delivery and clinical outcomes. Medical device manufacturers are using these health technologies to innovate their products, better assist healthcare providers and improve patient care.

There has been much in the news of late about regulating AI, as the technology and its applications have significant implications for the lives of many. In a number of jurisdictions, AI regulation features in legal frameworks and is referenced in the development of public sector policies and laws for promoting and regulating technologies that utilise AI. The most prominent example is the AI Act, a proposed European law on AI that focuses on data quality, transparency, human oversight and accountability. The AI Act would ban, regulate or leave unregulated AI systems based on their risk level, and would impose substantial fines on developers for violations.

Regulatory Responses to Developments in AI

On 21 April 2021, the European Commission (EC) published a proposal for a regulation governing artificial intelligence (AI), together with its Annexes. Following this, the European Union (EU) set out a new legal framework to significantly bolster regulation of the development and use of artificial intelligence. The Artificial Intelligence Act aims to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects European values and rules, and harness the potential of AI for industrial use.

A key part of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal. AI systems with limited or minimal risk, such as spam filters or video games, may be used with few requirements beyond transparency obligations. Systems deemed to pose an unacceptable risk, such as government social scoring and real-time biometric identification systems in public spaces, are prohibited with few exceptions.
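As a minimal sketch of how such a tiering scheme might be encoded in software, consider the following; the category names and the mapping below are illustrative assumptions, not the Act's legal definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, with few exceptions
    HIGH = "high"                  # assessed before and after market placement
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping only: the Act defines these categories legally,
# not by keyword, so real classification requires regulatory analysis.
EXAMPLE_TIERS = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometric_id": RiskTier.UNACCEPTABLE,
    "ai_enabled_medical_device": RiskTier.HIGH,
    "deepfake_generator": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
    "video_game_ai": RiskTier.MINIMAL,
}

def classify(system_type: str) -> RiskTier:
    """Look up the illustrative risk tier for a hypothetical system type."""
    return EXAMPLE_TIERS.get(system_type, RiskTier.MINIMAL)

print(classify("ai_enabled_medical_device"))  # RiskTier.HIGH
```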

EU AI Act: different rules for different risk levels

The new rules establish obligations for providers and users depending on the level of risk the artificial intelligence poses. While many AI systems pose only minimal risk, they still need to be assessed.

Unacceptable risk

Unacceptable risk AI systems are those considered a threat to people; they will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed: for instance, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

  1. AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
  2. AI systems falling into eight specific areas that will have to be registered in an EU database (a sketch of such a registration record follows the list):

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.
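To illustrate what a record in such a registration database might carry, here is a minimal sketch; the field names and structure are assumptions for illustration and do not reflect any published EU database schema:

```python
from dataclasses import dataclass
from enum import Enum

class HighRiskArea(Enum):
    BIOMETRICS = "biometric identification and categorisation"
    CRITICAL_INFRASTRUCTURE = "critical infrastructure management"
    EDUCATION = "education and vocational training"
    EMPLOYMENT = "employment and worker management"
    ESSENTIAL_SERVICES = "essential private and public services"
    LAW_ENFORCEMENT = "law enforcement"
    MIGRATION = "migration, asylum and border control"
    LEGAL = "legal interpretation and application of the law"

@dataclass
class RegistrationRecord:
    """Hypothetical entry in an EU high-risk AI system database."""
    provider: str
    system_name: str
    area: HighRiskArea
    assessed_before_market: bool   # conformity assessment completed
    lifecycle_monitoring_ref: str  # reference to ongoing assessment plan

record = RegistrationRecord(
    provider="ExampleMed GmbH",
    system_name="TriageAssist",
    area=HighRiskArea.ESSENTIAL_SERVICES,
    assessed_before_market=True,
    lifecycle_monitoring_ref="PMS-2024-001",
)
print(record.area.value)
```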

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

Generative AI

Generative AI, like ChatGPT, would have to comply with transparency requirements (a minimal disclosure sketch follows the list):

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
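A minimal sketch of the first obligation, disclosure, might look like the following; the notice wording and function name are assumptions for illustration:

```python
AI_NOTICE = "Notice: the following content was generated by an AI system."

def with_disclosure(generated_text: str) -> str:
    """Prepend an AI-generated-content notice to model output."""
    return f"{AI_NOTICE}\n\n{generated_text}"

print(with_disclosure("Draft summary: the committee agreed on three actions..."))
```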

Limited risk

Limited risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with an application, users can then decide whether they want to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.

The legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.

Next steps

On 14 June 2023, MEPs adopted Parliament's negotiating position on the AI Act. Talks will now begin with EU countries in the Council on the final form of the law, with the aim of reaching an agreement by the end of the year.

On 29 March 2023, the UK Government launched its AI white paper to guide the use of artificial intelligence, drive responsible innovation and maintain public trust in this revolutionary technology.

On 22 September 2021, the U.S. Food and Drug Administration (FDA) issued the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan” from the Center for Devices and Radiological Health's Digital Health Center of Excellence. The Action Plan was a direct response to stakeholder feedback on the April 2019 discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device”, and outlines five actions the FDA intends to take.

In addition, on 30 March 2023, the US FDA's CDRH issued draft guidance on Predetermined Change Control Plans for Artificial Intelligence/Machine Learning-Enabled Medical Devices, and in April 2023 further information was released on the same guidance in the form of marketing submission recommendations. This draft guidance proposes a science-based approach to ensuring that AI/ML-enabled devices can be safely, effectively and rapidly modified, updated and improved in response to new data. The FDA proposes that this approach would put safe and effective advancements in the hands of healthcare providers and users faster, increasing the pace of medical device innovation in the United States and enabling more personalised medicine. In addition to across-the-board, or "global", device updates, under the proposed approach AI/ML-enabled devices could be more extensively and rapidly modified to learn and adapt to local conditions. This means, for example, that diagnostic devices could be built to adapt to the data and needs of individual healthcare facilities, and that therapeutic devices could be built to learn and adapt to deliver treatments according to individual users' particular characteristics and needs.
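As a rough, hypothetical illustration of what a predetermined change control plan might capture in machine-readable form, consider the sketch below; the structure and field names are assumptions and do not reproduce the FDA's draft guidance:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlannedModification:
    """One pre-authorised change an AI/ML-enabled device may undergo."""
    description: str           # e.g. retraining on new site-specific data
    validation_method: str     # protocol used to verify the change
    acceptance_criterion: str  # performance threshold that must hold

@dataclass
class ChangeControlPlan:
    """Hypothetical predetermined change control plan for a device."""
    device_name: str
    modifications: List[PlannedModification] = field(default_factory=list)

pccp = ChangeControlPlan(
    device_name="AdaptiveDiagnostics v2",
    modifications=[
        PlannedModification(
            description="Retrain classifier on local facility data",
            validation_method="Held-out local test set, pre-specified protocol",
            acceptance_criterion="Sensitivity >= 0.95 on validation cohort",
        )
    ],
)
print(len(pccp.modifications))
```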

Standardisation of AI and its Applications

The collaborative effort of AAMI and BSI to develop TIR34971 into an international standard via ISO Technical Committee (TC) 210 is a positive step forward. The following is a handful of the ISO/IEC standards and technical reports that have been published and are available for the respective industries to use as applicable:

  • ISO/IEC FDIS 42001 Information technology — Artificial intelligence — Management system
  • ISO/IEC 38507:2022 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organisations
  • ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk management
  • ISO/IEC TR 24027:2021 Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making

With consideration of technological developments in the medical device industry and regulatory practice, I propose that the next revisions of the ISO standards listed below consider the inclusion of a symbol (or symbols) for AI, either as shown below or similar, where a medical device employs artificial intelligence technology. Symbol (a) was proposed in my previous publications; however, similar alternatives such as (b) and (c) are also popular choices according to the professionals consulted:


[Image: proposed AI symbols (a), (b) and (c)]



In greater detail, a medical device utilising a specific aspect of AI, such as ChatGPT, could be symbolised with any of the following (a to f):

[Image: proposed symbols (a) to (f) for specific AI applications]

These symbols are intended to be used with medical device labels, labelling and accompanying information, as part of the information to be supplied by the manufacturer, to inform end-users of the technologies employed and of any respective cautionary information to be considered.
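As an illustration of how a manufacturer's labelling system might flag the need for such a symbol, consider this sketch; the symbol references are placeholders, since no AI symbol is currently registered in ISO 7000:

```python
# The AI entry below is a placeholder: no AI symbol is currently
# registered in ISO 7000, so the references here are hypothetical.
TECHNOLOGY_SYMBOLS = {
    "tissues_of_animal_origin": "ISO 7000 animal-tissue symbol",
    "nanotechnology": "ISO 7000 nanotechnology symbol",
    "artificial_intelligence": "proposed AI symbol, variant (a), (b) or (c)",
}

def required_symbols(device_technologies: list) -> list:
    """Return the label symbols required for the technologies a device employs."""
    return [TECHNOLOGY_SYMBOLS[t] for t in device_technologies
            if t in TECHNOLOGY_SYMBOLS]

print(required_symbols(["artificial_intelligence"]))
```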

It is strongly advised that the ISO Technical Committee ISO/TC 210 consider updating the following standards with the symbol for AI:

  • ISO 15223-1:2021 Medical devices — Symbols to be used with medical device labels, labelling and information to be supplied — Part 1: General requirements
  • ISO 20417:2021 Medical devices — Information to be supplied by the manufacturer
  • ISO 7000:2019 Graphical symbols for use on equipment — Registered symbols

This humble request resonates with similar publication campaigns I championed to advise ISO on the inclusion of the Distributor and Importer symbols (ISO 7000-3724 and ISO 7000-3725 respectively), as well as others such as the symbols for tissues of animal origin and nanotechnology, which were successfully published in the ISO standards. For further background reading: LinkedIn and JMDR.

The well-experienced technology and regulatory teams at Triune Technologies Limited are well placed to help with any project within the digital health space that employs AI. Contact and engage us to help you at any stage of your project, through to product or service realisation.
