Governments are scrambling to control AI

By: Verbit Editorial


In the 2004 film “I, Robot,” a detective interrogates a robot in the year 2035, asking whether it could write a symphony or turn a canvas into a beautiful masterpiece. The questions were rhetorical; the answer was clearly no. Yet here in 2024, machines have proven themselves more than capable of producing art indistinguishable from that created by humans.

The rapid development of artificial intelligence has regulators concerned about the social and possibly existential risks the technology poses, and governments around the world are racing to keep up. Verbit analyzed news coverage and government reports to see what laws are being passed to regulate AI.

The European Union in March passed its AI Act, the world’s first comprehensive set of laws governing AI. The legislation creates four tiers of risk with which to classify different AI systems, ranging from “minimal risk” to “unacceptable.” Except for carve-outs for law enforcement, AI systems that pose “unacceptable” risks, including those that manipulate people’s thoughts, identify people based on their demographic traits or personal beliefs, or identify emotions in workplaces and schools, have been banned. Meanwhile, “high-risk” systems, such as those that could harm people’s safety or essential infrastructure, will require disclosures of risk and human oversight.

World leaders posed in front of an AI Safety Summit backdrop

More laws are coming

Two major camps call for AI legislation: researchers and tech companies.

Researchers have become increasingly convinced that AI poses a serious existential risk to human civilization, while technology companies actively lobby Washington in hopes of shaping the inevitable wave of legislation. More than 350 organizations spent a combined $569 million to influence AI policy in the first three quarters of 2023, according to OpenSecrets, a nonprofit that tracks lobbying money.

At least two dozen states across the U.S. are working on legislation to regulate AI. At the federal level, the Biden administration last October issued an executive order to establish certain standards for the development and use of AI. The White House now requires companies building powerful AI systems to share safety data with the government before the tech is made public. Tests must also be run to ensure the technology does not pose major risks to national security, economic security, or public health and safety.

The executive order also addressed more widespread (if less existential) concerns, such as the use of AI to exacerbate discrimination. An Artificial Intelligence Safety Institute was established in November 2023 to guide the government in its regulatory efforts. Per the executive order, a host of government agencies will begin forming plans and regulating AI this year.

President Joe Biden and Vice President Kamala Harris stand before the press during the signing of the executive order on AI

Courts will have to intervene, too

Researchers still do not fully understand how the latest wave of AI models work. As a result, governments are contemplating a wide range of policy ideas.

Both the Biden executive order and the EU legislation add new rules governing the development of the most powerful models. Measuring how capable AI systems are can be difficult, so regulators have taken to judging models by how much computing power it took to train them. The EU AI Act, for instance, classifies models as posing systemic risk if they took more than 10^25 floating-point operations to train. This category includes at least OpenAI’s GPT-4, which powers ChatGPT, and possibly Google’s Gemini. Companies building these models will have to provide officials with risk assessments and information on energy consumption, and notify them of serious incidents, if they want to operate in the EU.
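For a sense of what that threshold means in practice, here is a minimal sketch, not taken from the article or from the Act itself, that applies the widely used rule of thumb that training a dense transformer takes roughly 6 × parameters × training tokens floating-point operations. The model sizes and token counts below are hypothetical and purely illustrative.

```python
# Rough sketch (an assumption, not the Act's own methodology): estimating
# whether a training run would cross the EU AI Act's 10^25 FLOP
# "systemic risk" threshold, using the common rule of thumb that training
# a dense transformer costs roughly 6 * parameters * training tokens FLOPs.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute under the 6 * N * D rule of thumb."""
    return 6 * num_parameters * num_tokens


# Hypothetical model sizes, purely for illustration.
for params, tokens in [(7e9, 2e12), (1e12, 10e12)]:
    flops = estimated_training_flops(params, tokens)
    crosses = flops > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{params:.0e} params, {tokens:.0e} tokens -> "
          f"{flops:.1e} FLOPs, crosses threshold: {crosses}")
```

On that rough accounting, only the very largest training runs approach the threshold, which is consistent with the article’s point that the rule targets frontier systems like GPT-4 rather than smaller models.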

Courts will have a big role in shaping how AI evolves, too. One of the most immediate issues to resolve is what constitutes a copyright violation. The EU AI Act requires companies to respect copyright laws when training their models, but it’s not entirely clear at this point what compliance looks like. It’s fairly easy today to prompt AI systems to create images of copyrighted characters, or to draw in the style of a famous artist. That’s because companies trained many of the leading AI models by scraping the open internet and downloading text, audio, images and video.

Many artists and writers argue that this amounts to theft of their work, but AI’s defenders say that this should fall under fair use; after all, it is not illegal for an artist to hone their craft by studying others. If courts decide that the major AI systems violate copyright, then the companies that built them will have a lot of work ahead of them.

By Wade Zhou
