Tuesday, 9 May 2023

The ethics of AI and the problem of bias

Artificial intelligence and machine learning technologies are entering the mainstream – but are we ready for them?


While the technology offers tantalising possibilities for organisations of all kinds, it also raises challenges for governments, regulators and organisations – one of which is the problem of bias. 

A machine doesn’t act subjectively; it is programmed to behave in a certain way to fulfil a specific task. Given this, it’s tempting to believe that AI has the potential to identify or reduce human bias. 

However, as AI solutions have been deployed, it has become apparent that AI also has the potential to make the problem of bias worse – by baking in biases and deploying them at scale.


The ways in which AI bias can be introduced

There are different ways in which bias can be introduced to an AI system:

•    Algorithm bias: a problem within the algorithm that performs the calculations.

•    Sample bias: a problem with the data used to train the machine learning model, for example a dataset in which some groups are under-represented (a simple representation check is sketched at the end of this section).

•    Prejudice bias: where the data used to train the system reflects existing prejudices, stereotypes and/or faulty societal assumptions, thereby introducing those same real-world biases into the machine learning itself. 

•    Measurement bias: arises due to underlying problems with the accuracy of the data and how it was measured or assessed.

•    Exclusion bias: occurs when an important data point is left out of the data being used, for example, when those training the AI don't recognise the data point as consequential.

Even when we are cognisant of these risks, bias can still be introduced to AI systems. The UK’s National Cyber Security Centre (NCSC) warns, “Even when we trust the data provenance, it can be difficult to predict whether the features, intricacies and biases in a dataset could affect your model’s behaviour in a way you hadn’t considered.”
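To make the sample and exclusion bias points more concrete, here is a minimal, illustrative check in Python that compares how well each group is represented in a training dataset against the population the model will serve. The column names, figures and tolerance are hypothetical, and a real project would use a dedicated fairness toolkit and a far richer picture of its customer base – but even a crude check like this can surface an obvious gap before a model is trained.

import pandas as pd

# Hypothetical training data for a lending model (far too small for real use)
train = pd.DataFrame({
    "applicant_id": range(8),
    "group": ["A", "A", "A", "A", "A", "A", "B", "B"],
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
})

# Hypothetical share of each group in the population the model will actually serve
reference_population = {"A": 0.60, "B": 0.40}

observed = train["group"].value_counts(normalize=True)

for group, expected_share in reference_population.items():
    observed_share = float(observed.get(group, 0.0))
    print(f"Group {group}: {observed_share:.0%} of training data "
          f"vs {expected_share:.0%} of population")
    # Flag groups that are clearly under- or over-represented (tolerance is arbitrary)
    if abs(observed_share - expected_share) > 0.10:
        print(f"  Possible sample/exclusion bias for group {group}")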


The risks of biased AI models

Leaving aside the undesirability of baking biases into opaque systems, and what that might mean for any later systems built on top of them, these biases present an immediate risk to anyone deploying the technology.

These biases could result in some of your customers and stakeholders being treated unfairly, and could seriously damage trust in both the technology and your brand.

For example, in 2021 Forbes reported that AI bias had led to 80% of Black mortgage applicants being denied. Furthermore, minority borrowers who are approved online typically pay more under algorithmic lending. That’s significant because 45% of the largest mortgage lenders in the USA now offer online or app-based loan origination.
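One practical way a lender (or any organisation making automated decisions) could monitor for this kind of disparity is to compare outcome rates across groups. The sketch below is purely illustrative: the data, group labels and threshold are made up, and the “four-fifths” rule of thumb from US adverse-impact guidance is used here only as a rough trigger for further investigation.

import pandas as pd

# Hypothetical decisions produced by an automated lending model
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate for each group
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio: lowest approval rate divided by the highest
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")

# The 'four-fifths' rule of thumb, used here purely as an illustrative threshold
if ratio < 0.8:
    print("Warning: one group is approved far less often; investigate before relying on the model")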

“It’s a very general technology that’s going to be used for so many things,” Katja Grace, an AI safety researcher, told the New York Times. “So it’s much harder to anticipate all the ways that you might be training it to do something that could be harmful.”


How to address the risks of bias in AI models

Organisations must put in place a clear strategy that aligns their use of AI with corporate values, warns the Boston Consulting Group. This will mean articulating how the company will operationalise responsible AI across all aspects of the organisation, including governance, processes, tools and culture.

The Harvard Business Review calls on business leaders to commit to six essential steps to tackle AI bias:

•    Stay up to date on this fast-moving field of research.

•    When you deploy AI, establish responsible processes that can mitigate bias.

•    Engage in fact-based conversations around human biases. Use “explainability techniques” to understand what led a model to reach a decision (one such technique is sketched below).

•    Consider how humans and machines can work together to mitigate bias and how to keep “a human in the loop”.

•    Take a multi-disciplinary approach to continue advancing the field of bias research; invest more and provide more data (while respecting privacy).

•    Invest more in diversity and inclusion in AI; a more diverse AI community would be better equipped to anticipate and review bias and engage with the communities affected.

This last point is particularly important. The US National Institute of Standards and Technology (NIST) argues: “Organisations often default to overly technical solutions for AI bias issues, but these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates.”
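The “explainability techniques” mentioned in the list above can be as simple as asking which inputs actually drive a model’s decisions. Below is a minimal sketch using scikit-learn’s permutation importance on an entirely synthetic dataset; the feature names, data and model are illustrative assumptions, not a recipe.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicants: income genuinely predicts approval, postcode_group should not
income = rng.normal(35_000, 10_000, n)
postcode_group = rng.integers(0, 2, n)  # stand-in for a proxy attribute
approved = (income + rng.normal(0, 5_000, n) > 35_000).astype(int)

X = np.column_stack([income, postcode_group])
feature_names = ["income", "postcode_group"]

X_train, X_test, y_train, y_test = train_test_split(X, approved, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt accuracy on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")

If a proxy attribute such as a postcode grouping turns out to carry a large importance, that is a warning sign the model may have learned a biased shortcut and needs closer scrutiny.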


The regulatory response

Regulators are finally catching up with the risks presented by AI. The EU’s pending Artificial Intelligence Act proposes fines of up to six percent of global turnover for organisations which violate its guidelines around responsible AI.

Of course, different AI and ML models and technologies present different risks. The newest, shiniest AI on the block is currently the large language model – the technology behind OpenAI’s ChatGPT and Google’s Bard. We’ll take a look at the specific risks of these new technologies in a forthcoming blog.


What next?

If you’d like to discuss any of the technologies or cyber risks discussed in this article, please reach out to the Grant McGregor team. 

Call us: 0808 164 4142

Message us: https://www.grantmcgregor.co.uk/contact-us 

Further reading

Find more insights about the latest technologies and cyber security issues on our blog:

•    AI’s new role in cyber security

•    Criminals are exploiting AI to create more convincing scams  

•    What is Cloud Native? Is it something my business should be thinking about?

•    Is your organisation doing enough on supply chain security?