Published on: 14th August 2023
Regulating Artificial Intelligence: A Multipolar Perspective
With the rapid pace of AI development and its growing impact, the need for appropriate regulation has become urgent. In this first article of a series, Time&Place’s AI Task Force explores the nature of AI, the challenges states face in regulating it, and the contrasting stances taken by major actors such as the EU, the US, and China.
Understanding AI and the Need for Regulation
AI's journey began as a concept rooted in mathematics and computer science. Early pioneers like Alan Turing (creator of the famous Turing test) laid the groundwork by envisioning machines capable of intelligent behaviour. Over the decades, advancements in computing power and algorithms have propelled AI from theoretical musings to practical implementations.
AI now encompasses various types, such as machine learning, natural language processing, and computer vision. In healthcare, AI aids in disease diagnosis and drug discovery. In finance, it optimizes trading strategies and detects fraud. Smart assistants like Siri and Alexa use AI to understand and respond to human language. AI powers recommendation systems on e-commerce platforms like Amazon, Alibaba and Zalando, but also on Netflix and Disney+, tailoring experiences to individual preferences.
AI's transformative potential across every aspect of our lives has raised concerns about its ethical use. There is also a growing debate about AI's impact on the workforce: the jobs of traders, cashiers, customer support representatives, assembly line workers, and even drivers are all at risk of displacement. Indeed, as AI applications continue to proliferate, regulation becomes crucial to ensuring responsible AI development.
Examples of Disruptive AI
AI has displayed its revolutionary potential across virtually every field of human activity, leading to remarkable advancements. Examples of disruptive AI applications include autonomous vehicles, healthcare diagnostics (even predicting Parkinson’s disease), archaeology (discovering new sites and translating ancient languages), legal analysis, and smart assistants, among others. These innovations have the power to revolutionize industries and enhance lives. However, the other side of the coin brings complex ethical dilemmas and challenges that require regulatory scrutiny.
The EU has proposed the AI Act, the world’s first comprehensive AI law, which has the potential to become a global standard, much as the GDPR did. Through the AI Act, the EU aims to create a comprehensive regulatory framework that emphasizes transparency, accountability, and human-centric AI systems. The Act also introduces different risk levels, each requiring a different degree of conformity assessment and control.
The limited risk level covers AI systems such as chatbots and deep fakes, which face only minimal transparency requirements. The high risk level covers AI systems that could negatively impact safety or fundamental rights; these systems must first be assigned to a specific category, allowing authorities to assess each of them before they are placed on the European market. Finally, the unacceptable risk level covers AI systems that pose an active threat to people, such as systems that encourage segregation or violence. AI systems falling into that category will be banned outright.
While the Act awaits the outcome of informal negotiations between the three main EU institutions (the Parliament, the Commission, and the Council of the EU), the Union clearly seeks to become the regulatory leader for AI, displaying a proactive approach to addressing AI's challenges and protecting societal values.
The United States lacks a centralized federal AI regulation, resulting in a more fragmented approach with limited nationwide guidelines. Instead, the US focuses on sector-specific regulations and voluntary frameworks for ethical AI practices. Indeed, as of July 2023, the only form of American AI regulation stems from voluntary AI safety commitments presented by seven tech companies, including Amazon, Google, and Meta.
On the other hand, China has made considerable progress in AI regulation as it competes with the EU and the US to become a global AI leader. The Chinese government has introduced AI-related laws, such as the Interim Measures for the Management of Generative Artificial Intelligence Services, which lay out rules concerning data security, consumer protection, and national security. Beyond those targets, however, China has also uniquely implemented rules requiring generative AI providers to uphold the core values of Chinese socialism and prohibiting them from criticizing the Chinese state. As a result, concerns have been raised in the West about the potential misuse of AI in China for surveillance and privacy violations.
Regulating AI is essential to strike a balance between fostering innovation and safeguarding human interests. On one hand, the EU's proactive approach with the AI Act demonstrates its recognition of the urgency of addressing AI's challenges. On the other, the US and China, though opting for different regulatory paths, also seek their own ways to balance innovation and regulation effectively.
As AI technology continues to evolve, regulators must remain adaptable to address new challenges and seize opportunities for future improvements.
The regulatory journey has only just begun. Time&Place Consulting has built an AI Task Force to work on and around AI technology in public affairs and public relations, with a view to our activities and those of our clients. #StayTuned for the upcoming official launch of the Task Force. #StayInformed as the next articles explore regulation in more depth, including international cooperation such as under the EU-US Trade and Technology Council (TTC), as well as practical use cases and their societal and business implications.
In the meantime, take our LinkedIn poll on how you believe AI should be addressed at the political and regulatory level. Closing date: 21 August 2023.