The Biggest AI Policy Developments of 2023

In November 2022, OpenAI launched ChatGPT. Within five days, it had over a million users. Six months later, the CEOs of the world’s leading AI companies, and hundreds of researchers and experts, signed a short statement warning that mitigating the risk of extinction from AI should be a global priority on the scale of preventing nuclear war.

AI’s rapid technological progress, and the dire warnings from its own creators, provoked a reaction in capitals around the world. But as lawmakers and regulators rushed to write the rules charting AI’s future, many warned their efforts were insufficient either to mitigate the technology’s risks or to capitalize on its benefits.

Here are the three most important milestones in AI policy this year.

President Biden’s Executive Order on AI

On Oct. 30, U.S. President Joe Biden signed a long-awaited executive order on artificial intelligence. The 63-page document sought to address multiple concerns, including the risks posed by the most powerful AI systems, the impacts of automation on the labor market, and the ways in which AI could encroach upon civil rights and privacy.

Many of the provisions within the order instruct various U.S. government departments and agencies to come up with further guidance or write reports. As such, the full impact of the executive order will take time to be felt, and will depend on how stringently it is implemented.

Experts TIME spoke with shortly after the order’s publication said the President had largely reached the limits of his powers to act on AI, meaning more forceful regulation will require Congress to pass a law.

Read More: Why Biden’s AI Executive Order Only Goes So Far

The reaction to the executive order was largely favorable, with many commentators praising the Administration for its response to a complex and rapidly advancing technology. Commentators focused on economic freedom, however, warned the order was a first step on the path to overregulation.

The U.K. AI Safety Summit

Two days after Biden signed the executive order, officials from 28 countries joined leading AI executives and researchers at the U.K.’s Bletchley Park estate, the birthplace of modern computing, for the first AI Safety Summit. British Prime Minister Rishi Sunak has pushed for the U.K. to lead global efforts to promote AI safety, convening the summit and establishing a task force, led by former tech investor Ian Hogarth, that has since evolved into the AI Safety Institute, a U.K. government body that will develop methods for safety testing the most powerful AI models.

Read More: U.K.’s AI Safety Summit Ends With Limited, but Meaningful, Progress

All 28 countries signed a declaration that emphasized the risks posed by powerful AI, underscored the responsibility of those creating AI to ensure their systems are safe, and committed signatories to international cooperation to mitigate those risks. The summit also secured descriptions of safety policies from the companies developing the most powerful AI models, although researchers found that many of these policies fell short of the best practices laid out by the U.K. government ahead of the summit. For example, none of the companies allowed safety testers to check the data their models were trained on, or provided the results of any such data checks.

Prime Minister Rishi Sunak meets with President of the European Commission Ursula von der Leyen during the AI Safety Summit, the first global summit on the safe use of artificial intelligence, at Bletchley Park on Nov. 2, 2023 in Bletchley, England. (Joe Giddens—WPA Pool/Getty Images)

Sunak closed the summit by interviewing billionaire entrepreneur Elon Musk, who launched the AI company xAI this year. Participating countries agreed to reconvene in South Korea in May 2024 and in France in November 2024.

Reactions to the summit were largely positive. Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, called the summit a “major diplomatic achievement.” Seán Ó hÉigeartaigh, an AI expert at Cambridge University, said the summit was “promising,” but that momentum must be maintained.

However, some commentators criticized the lack of civil society inclusion and called for more diverse voices at future AI Safety Summits. And although the world’s two AI superpowers—the U.S. and China—both participated in the U.K. AI Safety Summit, further cooperation is far from guaranteed. The U.S. may impose further chip export restrictions on China as it tries to maintain its lead in AI development, and experts warn that cooperation between the two countries, even on shared issues, has been limited in recent years.

The E.U. AI Act

The E.U. announced its intention to pass a comprehensive AI law in April 2021. Following more than two years of drafting and negotiation, the E.U. AI Act entered the final stage of the legislative process in June.

In November, negotiations stalled after France, Germany, and Italy pushed for lighter regulation of foundation models—AI systems trained on huge amounts of data that are able to perform a wide range of tasks, such as OpenAI’s GPT models or Google DeepMind’s Gemini family.

Read More: E.U.’s AI Regulation Could Be Softened After Pushback From Biggest Members

Those in Europe in favor of stricter regulation won out, and an agreement was announced on Dec. 8. Foundation models trained using more than a set threshold of computational power (10^25 floating-point operations, under the provisional agreement) will be defined as systemically important, and will face additional regulation such as transparency and cybersecurity requirements. Additionally, the act as provisionally agreed would ban certain uses of AI, such as emotion recognition systems used in workplaces or schools, and would impose strict regulation on AI used in “high risk” areas such as law enforcement and education. Fines of up to 7% of global revenue could be levied for noncompliance.
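For a sense of scale, training compute is often estimated with a rule of thumb of roughly 6 floating-point operations per model parameter per training token. The short Python sketch below applies that approximation to hypothetical model figures (they do not describe any real system) to check an estimate against the threshold:

    THRESHOLD_FLOPS = 1e25  # compute threshold in the provisional agreement

    def training_flops(parameters: float, tokens: float) -> float:
        """Estimate total training compute with the ~6 * N * D rule of thumb."""
        return 6 * parameters * tokens

    # Hypothetical: a 70-billion-parameter model trained on 2 trillion tokens.
    estimate = training_flops(70e9, 2e12)
    print(f"Estimated compute: {estimate:.1e} FLOPs")
    print("Systemically important?", estimate > THRESHOLD_FLOPS)
    # ~8.4e+23 FLOPs -> False, well below the 1e25 threshold.

By this rough measure, only the very largest training runs to date would cross the line into the act’s “systemic” category.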

Lower-level technical negotiations will continue into 2024 to settle the final details of the act, and those details could yet change if member states keep pushing to water them down; French President Emmanuel Macron warned just days after the agreement was reached that the proposed law could stifle European innovation. Further, the act won’t fully come into force until two years after it is passed.

Looking ahead to 2024

In June, U.S. Senate Majority Leader Chuck Schumer announced he would push Congress to pass comprehensive AI legislation. Since then, Schumer has been holding “AI Insight Forums” to educate politicians and staffers about AI and build legislative momentum. Congress would need to move fast to pass AI laws before the 2024 presidential election takes over Washington.

U.N. Secretary-General António Guterres announced the membership of an AI advisory body in October 2023. The body will publish two reports—the first by the end of 2023 and the second by the end of August 2024—that could determine the shape of a new U.N. AI agency. In September 2024, the U.N. will host its Summit of the Future. U.N. tech envoy Amandeep Gill told TIME that, if the political will exists, the Summit would be the “right moment” for such an agency’s creation.

Write to Will Henshall at will.henshall@time.com.
