Gender and racial bias in AI
It can be hard to comprehend the idea that something as emotionless as a computer, made up of mechanical parts and coded programmes, could be the main source of gender and racial bias across some of our most frequent day-to-day activities and processes.
The idea sounds like something out of a dystopian novel, but it isn’t fictional in the slightest: the frequency with which these biases affect our society is alarming. From our criminal justice system to healthcare, from the jobs market to the way we communicate online, persistent and damaging bias is everywhere.
The bias, however, comes primarily from the humans behind the screen. Computers can only function when told what to do and how to do it, and for the most part, this information is currently coming from a particularly non-diverse set of people.
Recent studies have found that AI education is an overwhelmingly male field, with 80 per cent of professors being men. The disparity continues across the world’s tech giants too: only 15 per cent of Facebook’s workforce are women, and the figure is even lower at Google (10 per cent).
The outlook is even bleaker for ethnic minority groups: black people make up only 2.5 per cent of Google’s workforce and 4 per cent of Microsoft’s.
Looking at these statistics, it should come as no surprise that bias gets programmed in. The datasets used to build our AI systems are assembled through a very narrow prism of views, experiences and ideologies, and they then form the basis of the algorithms that permeate every aspect of our lives.
Furthermore, algorithms can amplify whatever bias is present in the data they learn from, so between the humans working behind the screen and the systems they build, a vicious cycle forms: biased people produce biased data, which trains biased algorithms, whose outputs shape the next round of data.
Two key examples, one concerning gender and one concerning race, highlight the one-sided nature of today’s AI; both kick-started global discussions on how we can combat such seemingly built-in discrimination:
Amazon
What was meant to be the world’s best-kept secret ended up being one of the retail giant's biggest regrets. Thinking outside the box in an attempt to make the hiring process fairer and more efficient, the team at Amazon turned to AI to review applications. The main goal was for the system to pick out the top talent based on algorithms.
One year into the project, senior teams began to notice that the process was favouring male applicants, creating an extremely gender-biased selection process. This happened because the algorithm had been trained to vet applicants using the CVs the company had received over the previous ten years, which, reflecting the industry at the time, came overwhelmingly from men.
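The mechanism is simple enough to demonstrate. The sketch below is a hypothetical toy CV scorer, not Amazon’s actual system: it learns word weights from a deliberately skewed hiring history, and a term like “women’s” ends up penalised purely because of that skew, with no one ever programming the bias in explicitly.

```python
from collections import Counter

# Hypothetical historical data: CV snippets labelled 1 if the candidate
# was hired. Because past hires skewed male, terms that appear mostly on
# women's CVs end up correlated with rejection.
history = [
    ("software engineer chess club", 1),
    ("software engineer rugby team", 1),
    ("software engineer hackathon winner", 1),
    ("software engineer women's chess club", 0),
    ("software engineer women's coding society", 0),
]

def train_word_weights(examples):
    """Weight each word by how often it co-occurs with a hire vs a rejection."""
    hired, rejected = Counter(), Counter()
    for text, label in examples:
        (hired if label else rejected).update(text.split())
    return {w: hired[w] - rejected[w] for w in hired | rejected}

def score_cv(text, weights):
    """Score a new CV as the sum of its learned word weights."""
    return sum(weights.get(w, 0) for w in text.split())

weights = train_word_weights(history)
print(weights["women's"])  # -2: penalised purely because of the skewed history
# An otherwise identical CV scores lower just for containing "women's":
print(score_cv("software engineer women's rugby team", weights))
print(score_cv("software engineer rugby team", weights))
```

Nothing in the code mentions gender; the discrimination is inherited entirely from the training data, which is exactly the pattern reported in the Amazon case.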
Zoom
Zoom’s issue surfaced at the beginning of the pandemic, when employees were asked to use virtual backgrounds during client calls to help project a more professional image. White employees had no issues, whereas black employees’ faces and limbs were suddenly erased: Zoom was unable to register them because the background-detection algorithm hadn’t been trained to recognise people of colour.
Most worryingly, these types of algorithms are not just erasing black people from Zoom calls or filtering women out of interview shortlists; they are leading to wrongful, biased arrests, mislabelling black people on Google and putting people’s lives at risk.
Sadly, this issue has been present for many years, yet it appears to be getting worse rather than better, largely because of a lack of education around the solutions. We all know it’s happening, but what is being done to resolve the problem?
Here are a few starters for ten:
More frequent discussions about human bias, grounded in the fact that humans, not machines, are responsible for discrimination in AI.
Investing more in research into how AI systems behave, to advance our knowledge of the field.
Not relying on tech to resolve ingrained societal issues.
Making a better effort at diversifying the field overall.
All these steps are ways in which we can better educate ourselves and our data science students to make our technological advancement fairer, more sophisticated and far more efficient.
As Olga Russakovsky, Computer Science Researcher and Assistant Professor of Computer Science at Princeton University, said, “I don’t think it’s possible to have an unbiased human, so I don’t see how we can build an unbiased A.I. system. But we can certainly do a lot better than we’re doing.”
Join our community on LinkedIn for more content: #fraxcorp.