AI and the law: A tale of three markets
Over the past year, the arrival of generative AI and large language models (LLMs), as embodied by ChatGPT from OpenAI, has raised public awareness of AI to a new level, bringing with it both hopes for the advances the discipline could bring and fears about the harms it could cause.
Emre Kazim, co-founder of Holistic AI, a consultancy that helps enterprises to manage AI governance, risk and compliance, notes that if you look at how AI has been depicted in popular fiction and media over the years, there has been “a lot of sensationalism around AI and the risks of AI”.
“It’s always killer robots!”, he exclaims, pointing to well-known dystopian films such as the Terminator franchise. This in turn leads to what he describes as “moral panic, which creates a kind of cack-handed approach to managing the risks”.
Increasingly, he says, “there is a sense of stepping back and saying, okay, let’s think about this in a mature sense of, ‘What is the state of play?’ ‘Where can these technologies positively intervene?’ and ‘How can we make sure it’s not like juggling with a knife?’”
In other words, “it’s a bit like most technologies we’ve seen: if they’re not appropriately governed, then they’re going to be abused. But it doesn’t mean there’s something inherent to technology that’s going to result in that kind of abuse,” Kazim reasons.
Meanwhile, governments around the world are looking at how AI should be regulated to ensure it is beneficial and does not cause harm to consumers, businesses or society.
Just to state the obvious, says Michael Natusch, formerly chief science officer at Prudential and founder and head of the insurer’s global Centre of Excellence for Artificial Intelligence (AI CoE), “regulation is absolutely fundamentally necessary. It’s really, really important, but it’s also important to get it right.”
Below, we’ll take a look at AI laws that are under development in the European Union, the United Kingdom and the United States, with some insights from Kazim and Natusch on their wider implications for businesses and more.
The EU AI Act
The European Commission tabled a proposal for an EU regulatory framework on AI in April 2021. It hailed the draft EU AI Act as the “first ever attempt to enact a horizontal regulation for AI” and said the proposed legal framework focuses on the specific utilisation of AI systems and associated risks.
In June 2023, the European Parliament voted in favour of moving the act onto the next stage, with talks now beginning with EU member countries on the final form of the law. The aim is to reach an agreement by the end of this year.
The objective of the EU AI Act is to establish a technology-neutral definition of AI systems in EU law and to lay down a classification for AI systems with different requirements and obligations tailored on a “risk-based approach”.
For example, “unacceptable risk” AI systems are systems considered a threat to people, such as facial recognition, and will be banned.
There are also high-risk applications, including AI systems that pose a significant risk of harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters in elections are considered high-risk, for example.
Limited-risk AI systems, meanwhile, must comply with minimal transparency requirements that allow users to make informed decisions. For instance, a customer interacting with a chatbot must be informed that they are engaging with a machine so they can decide whether to proceed (or request to speak with a human instead).
Substantial amendments and tweaks to the Commission’s original proposal have already been made, including revising the definition of AI systems, broadening the list of prohibited AI systems, and imposing obligations on general purpose AI and generative AI models such as ChatGPT.
For example, generative AI would have to comply with transparency requirements, such as disclosing that the content was generated by AI.
Holistic AI describes the act as “arguably the widest-reaching AI regulations yet announced and predicted to become the gold standard, in the same way as GDPR did [for data privacy] in 2016”.
As with GDPR, penalties for non-compliance are fairly punitive for enterprises, ranging from €10 million to €40 million, or 2% to 7% of global annual turnover, depending on the severity of the infringement.
EY consultants Madan Sathe and Karl Ruloff stress that “it is essential for stakeholders to make sure they understand the AI Act fully and comply with its provisions”.
Holistic AI also notes that the EU AI Act will work in tandem with two other important pieces of EU digital markets and services regulation: the Digital Markets Act (DMA) and the Digital Services Act (DSA).
“The collective impact of the EU AI Act, the DMA and the DSA is likely to be significant.
“The three pieces of robust legislation will work in tandem to ensure companies are not misusing AI or leveraging innovative technology unchecked to (knowingly or unknowingly) promote harm while also standardising a risk management approach in AI governance,” Kazim observed.
Pros and cons
According to Kazim, the advantages of the EU AI Act are that it is “very mature, it’s serious, it’s detailed, it really is setting the standard. And I think it’s going to be the gold standard globally”.
However, the primary criticism being levelled at the EU’s proposal is that it is “stifling innovation” within the bloc.
As Kazim notes, larger companies will have the resources to ensure they are compliant, but smaller start-ups “are really going to be in trouble”.
“If you’re having to spend a significant amount of capital just being compliant, you can ask yourself a lot of serious questions about whether or not you want to operate in the European ecosystem”, Kazim says.
Indeed, a paper by CECIMO, which represents the machine tool industry and related manufacturing technologies, notes that the EC Impact Assessment of the AI Act puts compliance costs for manufacturers deploying high-risk AI systems at around €6,000 to €7,000, with additional conformity assessment costs estimated at €3,500 to €7,500.
“This calculation excludes all additional expenditures arising from external consultancy plus internal costs… and the setting up of a new Quality Management System (Article 17), with total estimated costs of €193,000 to €330,000,” the paper observes.
According to Natusch, “the good thing about the EU is that it started early. It continuously involved the outside world in consultation… and those documents have been shaped not by those kind of faceless, unaccountable Brussels bureaucrats, but… by a lot of input.”
The problem, he says, “is that the input has come a lot from people who look at AI from this very blinkered model and technology perspective”.
“We should not be having this conversation, we should not be talking about models and technologies, we should be talking about the consumer”, Natusch remarks.
Concerns about the proposed EU AI Act also prompted over 150 political and industry leaders from companies such as Airbus, Cellnex, Deutsche Telekom, Orange, and Siemens to write an open letter to the European Commission, Council and Parliament, calling for the latest version of the act to be revised.
In the letter, the executives warned of the potential to “jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing”.
The signatories added that this was “especially true regarding generative AI”.
While acknowledging the need for regulation, the executives said that wanting to “anchor the regulation of generative AI in law and proceeding with a rigid compliance logic is as bureaucratic of an approach as it is ineffective in fulfilling its purpose”.
“We are convinced that our future significantly depends on Europe becoming part of the technological avant-garde, especially in such an important field as (generative) artificial intelligence”, they added.
United States: A fragmented approach
Natusch quips that while the EU “offers protection but closes off a lot of opportunities, the United States offers lots of opportunities but doesn’t offer any protection” in terms of AI regulation.
There is of course plenty going on in the US, but, as is traditionally the case with regulation, the EU looks set to impose the strictest AI rules while the US is likely to be the most lenient.
Holistic AI nonetheless comments that 2022 “marked the 117th Congress as the most AI-focused Congress in history”.
Last year saw the publication of a non-binding Blueprint for an AI Bill of Rights to guide the design, deployment, and development of AI systems based on five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; human alternatives, consideration and fallback.
However, while the blueprint is a significant signpost underpinned by commonly accepted AI governance principles, Holistic AI says it is unlikely to become more than a voluntary framework.
Elsewhere, the National Institute of Standards and Technology (NIST) will be responsible for re-evaluating and assessing any AI that has been deployed or is in use by federal agencies.
Meanwhile, the Federal Trade Commission (FTC) is “shaping up to be the body with the most hunger to regulate AI in the US”, Holistic AI comments.
Also worth noting is the Algorithmic Accountability Act of 2022, first introduced in 2019 and then brought back into both houses of Congress in February 2022.
If passed, the Act would be binding and require companies to assess the impact of the automated systems they use and sell in terms of bias and effectiveness.
As things stand, Holistic AI says the Act has yet to win support in the House or the Senate and is not expected to pass.
According to Kazim, while the United States has not passed legislation at federal level, “and there doesn’t seem to be anything in the pipeline”, there is plenty going on at state level, such as the enacting of new AI bias laws.
Indeed, US law firm Morgan Lewis noted a 46% increase in AI-related bills between 2021 and 2022, with hot topics in state-level AI regulation including predictive policing technologies, consumer-focused rights, employment, insurance and healthcare.
For example, in July 2023 New York City implemented its AI Bias Law, which makes it an unlawful employment practice for employers to use automated employment decision tools to screen candidates and employees unless certain bias audit and notice requirements are met.
According to Morgan Lewis lawyers Ronald Del Sesto and Trina Kwon, other enacted legislation addressing AI includes Illinois’ Artificial Intelligence Video Interview Act, which applies to all employers and requires disclosure of the use of an AI tool to analyse video interviews of applicants for positions based in Illinois.
The pair also note that Vermont has created an Artificial Intelligence Commission, while Washington’s SB 5693 bill appropriated funds for an automated decision-making working group.
The Morgan Lewis lawyers also cite AI legislation that is pending in California, Colorado (mandating auditing of algorithms in insurance), Connecticut, Washington DC, and Texas.
The United Kingdom: Sector-specific rules
The current UK Prime Minister Rishi Sunak wastes few opportunities to promote the idea of positioning the UK as the leading authority on the governance of AI.
On the 1st and 2nd of November this year, the UK will play host to an AI Safety Summit at which international governments, AI companies and research experts will meet to “consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action”.
According to the government, the UK “boasts strong credentials” to become an international hub for AI. It notes that the UK’s AI sector employs over 50,000 people and contributes £3.7 billion to the economy, and that the country “is the birthplace of leading AI companies such as Google DeepMind”.
The UK government also claims to have “invested more on AI safety research than any other nation, backing the creation of the Foundation Model Taskforce [now the Frontier AI Taskforce] with an initial £100 million”.
Kazim and Natusch suggest that while the UK does indeed have a solid basis of educational and research institutions to underpin its AI aspirations, the nation is standing at something of a crossroads when it comes to its role in global AI regulation.
“The natural position for the UK would have been to [sit between the EU and the US], offering protection while not yet closing off opportunities”, Natusch observes.
However, his view is that the UK is neither taking the intellectual lead on AI regulation, nor putting the right money in place to create the type of centralised institution that would be required.
Kazim notes that the UK already has an advanced AI regime in two key sectors: financial services and insurance.
The question, he says, is whether the UK can find a middle ground with a broader, less sectoral approach. What’s more, is the market big enough to encourage companies to invest in UK-specific compliance obligations?
“I think the UK needs to be very intelligent in the way it approaches this,” he says.
In general, Holistic AI rates the overall trajectory of the UK’s regulatory activities as positive, balancing encouragement of innovation and a push for transparency with consumer protection.
“While it is likely that the EU AI Act will become the global gold standard for AI, the UK’s approach towards AI regulation may remain independent, reflecting current efforts to regulate by industry rather than adopt a centralised approach,” the company says.
In terms of concrete activity to date, the UK has not yet proposed specific legislation to regulate the use of AI. However, the government has demonstrated its support for the regulation of AI systems through a series of policy papers, frameworks, and strategies.
It has also created the Office for Artificial Intelligence, a unit within the Department for Science, Innovation and Technology that is responsible for overseeing implementation of the National AI Strategy.
In other activity, in March 2023 the government published an AI white paper as part of a “new national blueprint” for regulators “to drive responsible innovation and maintain public trust in this revolutionary technology”.
The government also provided £2 million to fund a new sandbox, a trial environment where businesses can test how regulation could be applied to AI products and services.
“Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors.
“When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently,” the government stated.