
How Will AI Be Regulated?

In March 2023, over 1,000 tech leaders around the world signed an open letter urging artificial intelligence (AI) labs to hit pause, warning, as The New York Times reported, that AI tools present “profound risks to society and humanity”.

Although ChatGPT, launched on November 30, 2022, had been on the market for only four months, this warning letter from some of the biggest names in tech showed that developers worldwide were already deeply concerned that AI tools were advancing at a rate that no one, not even their creators, could “understand, predict or reliably control”.

Our society faces profound risks if AI remains unchecked. Without regulation, AI threatens consumer privacy, fairness, and even human rights. Deepfakes and biased algorithms are not just hypothetical risks. They are already manifesting, posing significant security threats and creating scope for misuse by authorities, which we can ill afford.

In response, governments worldwide moved to draft regulations to mitigate these risks, with the European Union (EU), the United States (US), and China leading these efforts. In March 2024, the EU approved the AI Act, which categorises AI systems into four risk levels, each with specific regulatory requirements. Meanwhile, the US issued an executive order directing federal agencies to create guidance and standards for AI, leaning heavily on industry self-regulation. China implemented the “Interim Measures for the Management of Generative Artificial Intelligence Services”, focusing on training data and company accountability.

While these initiatives reflect their respective market realities, the global market demands more cohesive and robust regulatory frameworks. Regulations must protect consumers, curtail rogue activity, and importantly, be enforceable.

To this end, the regulatory models of the EU, the US, and China offer instructive strategies for limiting AI’s risks.

 

The EU Approach: Product-specific

First, the EU’s AI Act stands out for its product-specific approach. This legislation categorises AI systems into four risk levels: unacceptable, high, limited, and minimal. AI tools posing unacceptable risks – such as social scoring systems and manipulative AI – are outright banned. Limited-risk applications, like deepfake generators, come with transparency obligations.

Europe’s focus on the product is partly driven by the fact that many of the biggest AI developers are not located in Europe but do sell products into the market. The Act sets forth detailed requirements for each risk level. For instance, high-risk AI systems, such as those used in critical infrastructure or in recruitment, must meet strict requirements before they can be deployed. These include rigorous risk assessments, documentation protocols, and human oversight mechanisms. The consumer-focused legislation also mandates transparency, ensuring users are aware when they are interacting with AI systems.

 

The US: Self-regulation

The US adopts a different strategy, steering clear of comprehensive legislation. Instead, it opts for an executive order directing federal agencies to develop AI guidelines. This approach emphasises flexibility and adaptability, allowing agencies to tailor regulations to the specific challenges AI presents.

In addition, the White House released a Blueprint for an AI Bill of Rights. This outlines key principles, such as the right to be protected from unsafe or ineffective systems, the right to be free from discrimination caused by algorithms, and the right to know when an AI system is being used. This blueprint serves as a guide for federal agencies, developers, and other stakeholders in creating AI systems that respect human rights and societal values.

Moreover, major AI developers in the US, like OpenAI and Microsoft, must share safety test results and critical information with the government. This aims to ensure that AI systems undergo rigorous testing to minimise risks before they are deployed. While the US model leans towards industry self-regulation and is developer- and industry-focused, it also incorporates government oversight to ensure compliance with safety standards.

 

China: Top-down Control

Lastly, China’s approach is sector-specific, targeting areas of significant societal impact. The “Interim Measures for the Management of Generative Artificial Intelligence Services”, introduced in 2023, outline requirements for AI developers: they must comply with existing laws and meet obligations on data labelling, training, and regular reporting, including maintaining whistleblowing systems.

For example, AI developers in China must ensure that their training data are sourced and labelled in compliance with national laws. This includes maintaining detailed records and providing these records to regulatory authorities upon request.

Furthermore, the regulations stipulate that AI systems must undergo rigorous testing for security vulnerabilities before deployment. The government has also introduced mandatory AI ethics training programs for developers to ensure that they are aware of the potential societal impacts of their technologies. By directly regulating the companies that develop and deploy AI technologies, China takes a compliance-focused approach.

 

Converging Towards Comprehensive Regulation

The diversity of these solutions underscores the inherent difficulty of crafting a balanced approach to regulating a rapidly evolving industry. While each model seeks to curtail risk within a distinct framework, their effectiveness remains an open question.

AI, by its nature, transcends national boundaries. Despite each jurisdiction's individual efforts, a globally cohesive regulatory framework is an urgent need, as AI’s potential risks grow as rapidly as its advancements.

What will future AI regulation look like? For starters, it might draw upon the regulatory models of industries requiring high public trust and significant oversight, such as finance or healthcare. This entails a dual-focus strategy: rigorous product classification coupled with stringent developer accountability and transparency. Low-risk activities might operate under minimal oversight, while high-risk activities would be subject to direct regulatory measures and continuous scrutiny.

Regulatory frameworks must be agile, anticipatory, and robust. Waiting until a catastrophic event forces reactive legislation is a gamble society cannot afford to take.
