Biased Bots: Are You Using AI That Violates the UK Equality Act?

By: Verbit Editorial

Incorporating artificial intelligence into the workplace can come with certain risks. AI has been making headlines lately for producing discriminatory results, and that reality puts companies in an awkward spot. Professionals want to use more technology to create efficiencies, but they also don’t want to risk violating important anti-discrimination legislation like the UK Equality Act.
 
So, what happens if your business uses AI that ends up violating the law? How can you tell? The UK government recently published a white paper addressing the potential setbacks of biased AI. The paper, and the responses from watchdog groups, showcase the challenges involved in regulating AI.
 
To put your company in the best position, learn more about the UK Equality Act below and build your awareness of how the AI you’re using, or considering, might put your company at risk of violating it.

What is the UK Equality Act?  

The UK Equality Act makes it unlawful to discriminate against people in protected classes, including individuals with disabilities. Legislators created the Act to consolidate numerous earlier anti-discrimination laws; before it passed, there were over 100 separate laws covering different types of discrimination. The Act now includes provisions to prevent discrimination based on sex, race, religion, sexual orientation, age and disability status. The Equality and Human Rights Commission (EHRC) is responsible for enforcing this law.

The responsibilities the UK Equality Act creates include providing accommodations like captions for students, preventing age-based discrimination and reporting on internal diversity, among others. However, the Act doesn’t consider the possibility of biased AI. Rather, it covers discrimination at the hands of human employers, educators or officials.

Some have suggested that this new technology would require legislators to create a new regulatory body to monitor AI and enforce violations of the law. However, the UK opted against creating any new governing body. Instead, existing regulators, including the EHRC, will need to examine the ways AI’s biases and discrimination could be violating the law.

Potential biases of AI to be aware of 

An Amazon AI recruiting algorithm was found to discriminate against women. Because men held more of the technical roles in its training data, the AI concluded that the employer preferred male candidates over female ones, even when the women had the appropriate skill set. Amazon eventually abandoned the program.

In another example, researchers at Johns Hopkins had robots sort blocks printed with people’s faces into different groups. The robots classified Black men as criminals 10% more often than white men and labeled Hispanic men as janitors 10% more often than white men.

Part of the problem is that AI makes decisions based on the data sets used to train it. If there’s bias in the data, the AI will produce biased results, and since biases exist in the world, they also exist in the data. That’s what happened with Amazon: men were overrepresented in the existing data on people in technical roles, so the data taught the AI that when the company is hiring for technical positions, men are the better candidates. Real-world data is full of similar biases, which in turn teach AI systems to keep making these discriminatory decisions.
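
To make this concrete, here is a minimal sketch, using entirely hypothetical data, of how historical bias propagates: a naive model that learns hire rates per group from past decisions will simply reproduce whatever imbalance the data contains. This is an illustration of the general mechanism, not Amazon’s actual system.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (gender, hired)
history = (
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 30 + [("female", False)] * 70
)

counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for gender, hired in history:
    counts[gender][0] += hired  # True counts as 1
    counts[gender][1] += 1

def score(gender):
    """The 'model': a candidate's score is their group's historical hire rate."""
    hired, total = counts[gender]
    return hired / total

print(score("male"))    # 0.8 -- the learned preference for men
print(score("female"))  # 0.3 -- the learned penalty against women
```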

AI can also cause “screen-outs,” in which an algorithm uses certain criteria to automatically exclude candidates from consideration for a job. These criteria can discriminate against people with disabilities. For example, an algorithm might exclude candidates who can’t complete a test that requires sight, hearing or the physical ability to use a mouse. Likewise, if a candidate’s disability caused a gap in their resume, the AI might screen them out. By penalizing a person for a disability-related employment gap, the company might be violating the law.
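
As an illustration, here is a sketch of the kind of screening rule described above: a hypothetical filter that rejects any candidate with an employment gap over 12 months. All names and numbers are invented; the point is that the rule cannot tell a disability-related gap from any other.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    max_gap_months: int  # longest employment gap on the CV

def naive_screen(candidates, max_allowed_gap=12):
    # Discriminatory in effect: a gap caused by disability-related leave
    # is treated exactly like any other gap.
    return [c for c in candidates if c.max_gap_months <= max_allowed_gap]

applicants = [
    Candidate("Applicant A", max_gap_months=3),
    Candidate("Applicant B", max_gap_months=18),  # e.g., disability-related leave
]
print([c.name for c in naive_screen(applicants)])  # ['Applicant A'] -- B screened out
```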

Keeping AI violations of the UK Equality Act in mind 

Using AI to make employment-related decisions does not remove responsibility from employers. Employers using AI for hiring and other purposes must make an effort to remove the disadvantages that people with disabilities, among others, face when applying to their companies. Employers should be wary of relying on AI alone to recruit new employees; taking this shortcut could put their business at risk of violating the UK Equality Act.

How can your company avoid AI-related discrimination? 

Here are a few tips for avoiding potential AI-related discrimination in the workplace. 

Test the AI you’re using 

You’ll need to be able to show that you’ve tested any algorithm you use for bias, or that you selected one that has already undergone such testing.
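
One common way to test is to compare selection rates across groups. The sketch below, using hypothetical data, computes a disparate-impact ratio; the 0.8 threshold follows the widely cited “four-fifths” rule of thumb and is illustrative, not a legal test under the Equality Act.

```python
def selection_rate(outcomes):
    """Share of candidates the algorithm advanced (1 = advanced, 0 = screened out)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    # Ratio of the lower selection rate to the higher one
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1]

# Hypothetical screening outcomes by group
men = [1, 1, 1, 0, 1, 1, 0, 1]    # 75% selected
women = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% selected

ratio = disparate_impact_ratio(men, women)
print(f"impact ratio: {ratio:.2f}")  # 0.50 -- well below 0.8
if ratio < 0.8:
    print("Potential adverse impact: review the algorithm before relying on it.")
```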

Be transparent and accountable 

Disclose your use of AI both internally and to applicants. In fact, a new law in New York City requires companies that use AI in hiring decisions to disclose this information to candidates. This regulation may signal more oversight of AI in hiring and other business use cases. You can preempt this trend by taking accountability and ownership of how you’re using these tools. For instance, is the AI surfacing “top candidates” from a larger pool? Is it grouping people into buckets based on their education, professional certificates or work experience? Explain what you’re using AI to accomplish and how you’re preventing it from discriminating improperly.

Establish touchpoints for human intervention 

Your team must still be involved in the recruiting, hiring and management process. AI should be a tool that helps them, not a replacement for important processes where human eyes and judgment are needed. To give candidates an equal shot, establish a system of human checks and touchpoints to avoid situations that could be deemed unfair, unethical or illegal. As you decide where to implement AI, also map out the areas where your team should remain responsible and take charge.
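
Here is a minimal sketch, with hypothetical names and logic, of one such checkpoint: the AI may shortlist candidates, but it is never allowed to reject anyone outright; every screened-out candidate is queued for a recruiter to review.

```python
def route(ai_decision, candidate):
    """Human-in-the-loop routing: AI can advance, only a person can reject."""
    if ai_decision == "advance":
        return ("shortlist", candidate)
    # Screened-out candidates go to a human review queue, not the bin
    return ("human_review_queue", candidate)

for candidate, decision in [("Candidate A", "advance"), ("Candidate B", "reject")]:
    destination, name = route(decision, candidate)
    print(f"{name} -> {destination}")
```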

Provide a clear point of contact to troubleshoot questions and issues 

If an applicant has questions, or finds that the process isn’t accessible, who can they contact? Make it easy for applicants to reach you. If you automate everything with too much AI, you’ll harm candidates’ experiences with your company.

Continue to monitor your AI’s results 

AI learns over time. Unless the algorithm undergoes regular assessment, it could develop biases it didn’t have initially. Be proactive about the performance of the AI you’re using and continue to monitor it.
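
In practice, monitoring can be as simple as recomputing a fairness metric, such as the selection-rate ratio from the earlier sketch, on each new batch of decisions and flagging drift. A minimal sketch with invented monthly figures:

```python
# Hypothetical monthly impact ratios for an AI screening tool; in a real
# pipeline these would be recomputed from each month's actual decisions.
monthly_ratios = {"Jan": 0.91, "Feb": 0.88, "Mar": 0.74}

THRESHOLD = 0.8  # illustrative rule of thumb, not a legal standard

for month, ratio in monthly_ratios.items():
    if ratio < THRESHOLD:
        print(f"{month}: ratio {ratio:.2f} -- FLAG for human review")
    else:
        print(f"{month}: ratio {ratio:.2f} -- OK")
```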

Use AI tools for workplace efficiency, not as replacements

When it comes to AI in the workplace, business leaders in the UK need to remember that they can’t shift the responsibility of avoiding and preventing discrimination to technology. AI is ultimately a tool. If that tool is violating the UK Equality Act, so is the business using it. 

Verbit partners with businesses across the globe to help them offer more equitable workplaces. To learn more about our technologies, or for insights on other accessibility-related laws, visit our knowledge hub. We can also provide helpful tools for the recruitment process, such as transcripts of interviews for your team to reference and much more.