Discover the latest trends

Want to know what's happening in the GovTech ecosystem? This is your place.


Top 3 AI Policy Highlights of 2023

Gisela Montes | GLASS | 01.19.2024

In the ever-evolving realm of artificial intelligence (AI), 2023 marked a pivotal year for the development and implementation of policies that shape the ethical, legal, and societal frameworks surrounding AI technologies. As governments, organizations, and the global community grappled with the profound impact of AI, significant milestones emerged, reflecting a collective effort to navigate the challenges and opportunities presented by this transformative technology.

On November 30th, 2022, OpenAI, a U.S.-based AI research organization founded in December 2015, introduced ChatGPT, an AI-powered virtual assistant, also known as a chatbot. This undeniably set the course for AI in 2023.

A few months later, CEOs from the world's leading AI companies, alongside numerous AI scientists and experts, made their most united statement yet regarding the existential risks the technology poses to humanity. It's true that these two realms operate on different timelines: while AI makes remarkable strides, the policies and regulations surrounding it often take several months to catch up. This dynamic means that by the time consensus is reached on one aspect of AI, newer issues have already emerged that demand swift attention.

Nevertheless, let's delve into some of the AI policies that received approval in 2023:

🦾 The AI Executive Order by President Biden

On October 30th, 2023, U.S. President Joe Biden finally put pen to paper, signing a much-anticipated executive order focused on AI. This document aimed to tackle a myriad of concerns, ranging from the risks associated with powerful AI systems to the potential impacts of automation on the job market, and the looming threats AI poses to civil rights and privacy.

Embedded within the order are numerous directives compelling various U.S. government departments and agencies to formulate additional guidance or produce reports. Consequently, the true impact of this executive order won't be immediate; rather, it hinges on the vigor with which it is enforced.

Some observers contend that the President has hit the boundaries of his authority in addressing AI issues, implying that more robust regulations would necessitate Congressional action. Despite generally positive reactions to the executive order, with many celebrating the Administration for its responsive stance towards the intricate and swiftly evolving technology, there are those who perceive it as merely the initial stride down the road to excessive regulation.

🦾 The AI Safety Summit in the U.K.

On November 1st and 2nd, just two days after President Biden inked the Executive Order, officials from 28 countries, along with top-notch AI executives and researchers, gathered at the historic Bletchley Park estate in the U.K. – the very birthplace of computing – for the inaugural AI Safety Summit.

British Prime Minister Rishi Sunak, advocating for the U.K. to take the lead in global initiatives promoting AI safety, spearheaded the summit. He not only convened the event but also set up a task force led by former tech investor Ian Hogarth, which has since evolved into the AI Safety Institute – the first state-backed organization in the U.K. dedicated to advanced AI safety for the public interest.

During the summit, all participating countries endorsed a declaration that highlighted the risks associated with powerful AI, emphasized the duty of those developing AI to ensure the safety of their systems, and committed to international collaboration to mitigate these risks. The event also secured safety policy descriptions from companies working on the most potent AI models. However, researchers discovered that many of these policies fell short of the best practices outlined by the U.K. government before the summit took place. 

🦾 The AI Act in the European Union

In April 2021, the European Commission laid out the initial proposal for the world's inaugural regulatory framework on AI within the EU. Following over two years of development and negotiation, the Parliament struck a tentative agreement with the Council on the AI Act in December 2023. For this agreed-upon text to officially become EU law, both the Parliament and Council need to give it the formal nod.

Yet, the journey to consensus and the parliamentary path proved far from straightforward. Negotiations hit a roadblock when France, Germany, and Italy advocated for less stringent regulations on foundation models – AI systems trained on massive datasets with the capability to perform diverse tasks, think OpenAI's GPT models or Google DeepMind's Gemini family.

Those in Europe advocating for stricter regulations ultimately prevailed. Foundation models trained with substantial computational power will be deemed systemically important and subjected to extra regulations, including transparency and cybersecurity requirements. Additionally, the provisionally agreed-upon act seeks to ban specific uses of AI, such as emotion recognition systems in workplaces or schools, and proposes stringent regulations for AI in "high risk" sectors like law enforcement and education.

Further technical negotiations will extend into 2024 to iron out the final details of the act. If member states persist in attempting to dilute its provisions, changes may still occur. Moreover, the act won't take effect until two years after its passage.

🦾 What’s next for AI regulations in 2024?

“If 2023 was the year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action,” the MIT Technology Review said. Below are the MIT Technology Review's predictions regarding AI regulation:

  • Many items detailed in Biden’s executive order will be enacted. 

  • We’ll be hearing a lot about the new U.S. AI Safety Institute, which will be responsible for executing most of the policies called for in the order. 

  • New laws touching multiple aspects of AI, such as transparency, deepfakes, and platform accountability, may be introduced in addition to the executive order. 

  • An approach that grades types and uses of AI by how much risk they pose will be introduced. 

  • The U.S. presidential election will color much of the discussion on AI regulation.

  • The EU AI Act will begin to take effect fairly quickly, during the first half of 2024.

  • Other AI uses will be entirely banned in the EU, such as creating facial recognition databases like Clearview AI’s or using emotion recognition technology at work or in schools. 

  • Negotiations for the new EU bill called the AI Liability Directive will likely pick up this year. 

  • AI regulations will likely be introduced in other parts of the world.

  • The African Union will probably release an AI strategy for the continent.

  • Global institutions like the UN, OECD, G20, and regional alliances have started to create working groups, advisory boards, principles, standards, and statements about AI.

In conclusion, 2023 was undeniably a groundbreaking year for AI policy, witnessing significant strides in shaping the ethical, legal, and societal frameworks surrounding AI. The executive order by President Biden, the AI Safety Summit in the U.K., and the AI Act in the European Union represent key milestones that illuminate the global efforts to navigate the complex landscape of AI regulations.

As we gaze into the future, one cannot help but wonder: How will these developments shape the trajectory of artificial intelligence on a global scale?