The Biden Administration recently announced an executive order on AI with two main goals. First, it serves to boost government understanding and use of this evolving technology. Second, it calls for more formal regulation of private tech companies that are the major players in generative AI development.
The Administration recognizes that AI can be both a threat and an effective tool for the government. It’s also becoming clear that AI presents some tricky ethical challenges for agencies that plan to use it for anything from hiring to investigating crime.
As a company dedicated to developing responsible AI that promotes accessibility and productivity, Verbit is following efforts to create policies surrounding this technology. Here are some of our key takeaways from the executive order, and insights on how it may influence government agencies in the near future.
How AI is both a threat and a defense
The potential threat of AI is one reason the Biden Administration is working to respond quickly to, and build a better understanding of, this revolutionary technology. The order considers the possibility that bad actors could tap into AI’s capabilities to create chemical, biological, radiological or nuclear weapons. In response to that danger, it relies on the Defense Production Act, which will require companies developing relevant AI to report when their work presents any such risks.
The Department of Homeland Security and the Department of Energy will also play roles in determining potential threats, as will the Department of Commerce and the Department of Defense. The Administration will require DHS to create an AI Safety and Security Board, which will call on both private and public sector AI experts to advise the government on matters related to critical infrastructure.
Additionally, the order considers some less catastrophic, although still extremely harmful, dangers of AI, such as the use of deepfakes to scam people into believing their loved ones are in dire need of money. The White House even considered responses to robocalls and texts: automated calls and messages that scammers often use.
Although the Administration clearly outlines many potential risks that AI poses to national and personal security, it acknowledges that the same technology may be the best way to prevent or thwart attacks. In fact, the White House launched an ongoing cyber challenge in which leading tech companies are developing AI that can protect America’s most important software.
However, to truly combat AI-related threats, the government will need to hire more experts in the field. Even that step might involve the use of AI.
An efficiency booster the government can’t overlook, but one with risky biases
From identifying new talent to recognizing faces for law enforcement, AI can help government agencies work much faster and more efficiently. However, AI has proven susceptible to bias, creating risks of unfairly weeding out job candidates based on improper and potentially illegal discriminatory factors, or even falsely identifying a suspect.
Despite these risks, recruiters for government positions will be turning to AI tools to make finding and hiring talent faster and more efficient while, hopefully, creating better experiences for candidates. Police departments are also tapping into AI’s facial recognition capabilities to find suspects, even though such efforts sometimes lead them to the wrong people. Problematically, the technology appears to be less accurate when identifying Black individuals, creating the potential for discrimination and racial profiling.
Although Biden’s order addresses these risks, some analysts argue that it doesn’t do enough to prevent them.
The government needs to understand big tech more than ever
Several major players in the generative AI game have already voluntarily agreed to certain conditions. Still, given the immense power of AI, the government needs to take a more aggressive approach than voluntary self-regulation.
The order requires AI developers to share the results of safety tests with the government, along with assessments of AI’s potential impacts on the labor market. Developers must also study options for supporting employees facing disruptions in their industries.
However, an executive order is just one step. The only way to provide more comprehensive government regulation of AI is to enact new legislation. Here, the US lags behind the European Union, which is reportedly close to passing comprehensive AI legislation. The EU AI Act will address risks ranging from chemical weapons to copyright violations and includes provisions on AI-related energy consumption.
The complex future of AI in the government
The Biden Administration’s focus on AI underscores the dual nature of this transformative technology. It addresses the need for government agencies to adopt this new technology to meet the efficiency standards of the modern world. At the same time, it recognizes the possible threats and ethical challenges AI poses. It’s clear that the journey toward responsible AI adoption has only just begun.
Verbit’s AI-powered transcription and captioning solutions and new generative AI offering, Gen.V, are supporting government agencies, top universities, leading companies and more. Contact us to learn how our solutions can help your organization promote accessibility and create more inclusive environments.