AI (Artificial Intelligence) has attracted growing attention over the past 30 or so years, inspiring popular films such as I, Robot and philosophical debates about the limits of consciousness and personhood. However, many people forget that AI is already all around us: it is built into the algorithms that show us adverts for things we end up buying, that decide what content appears on Instagram and TikTok front pages, and that shape the online world we are shown. Timnit Gebru’s firing offers a significant insight, and a wake-up call, into how women of colour in STEM and AI are treated, and how this translates into AI products. Gebru co-authored a paper arguing that AI language models have a structural bias against women and ethnic minorities; Google took issue with this and has since said the company would investigate the matter, but did not apologise.
Gebru, a prominent AI researcher, recently left Google, stating that she was fired for ‘criticising the company’s lack of commitment to diversity’. Since the global Black Lives Matter protests in the summer of 2020, there has been a sharp increase in businesses’ attention to issues of Equality, Diversity and Inclusion. Critics have compared this to ‘pride capitalism’, in which companies use support of the LGBT+ community to appear more ethical, and therefore more attractive to consumers, without actually being a safe space for those communities. Gebru’s experience as a Black woman in STEM offers an insight into widespread experiences in this area, whether as a researcher, student or member of staff, or as a consumer using products that implicitly discriminate against people of colour and other communities.
For example, a Google service that performs automated image labelling produced racist results when comparing images of a Black hand and a white hand, each holding a handheld thermometer. The labels generated for the Black hand were ‘hand’ and ‘gun’, whereas those for the white hand were ‘hand’ and ‘monocular’. Google has since apologised.
In 2017, a video depicting a Black man unable to get soap from an automatic dispenser went viral. The video was tweeted by Chukwuemeka Afigbo, who wrote: ‘if you have ever had a problem grasping the importance of diversity in tech and its impact on society, watch this video’. Many people responded with similar experiences of machines failing to recognise darker skin tones, attributing this to a lack of diversity in staffing, as well as in the creation and testing stages.
Recently, Chidera Eggerue, AKA The Slumflower and author of ‘How To Be Alone’, highlighted a marketing inequality in publishing after alleging that a fellow author had plagiarised large sections of her book, whitewashed it and resold it without recognition. When searching for Eggerue’s book title, many people found that the top result was Florence Given’s ‘Women Don’t Owe You Pretty’, the book accused of plagiarism and whitewashing.
This is not dissimilar to Instagram’s policies and algorithms, which have been heavily criticised for marginalising and discriminating against women of colour, fat and plus-size women, sex workers and educators, and disabled people. One campaign, #IwanttoseeNyome, trended on Instagram after a photo of model Nyome Nicholas-Williams covering her breasts in a photoshoot was taken down for going against community guidelines. This comes at the same time as people, particularly women, report the many occasions on which they receive unsolicited pictures from men, or threatening and hateful messages, only to be told these ‘do not go against community guidelines’. This is something many women experience purely by existing on a social media platform. Instagram head Adam Mosseri recognised the need to ‘look at algorithmic bias’ and committed to a review of policies. However, recent announcements have focused on a ‘crackdown’ on sexual content, which has had a major impact on sex educators, for example, rather than on those misusing the platform for sexual misconduct.
Finally, the emergence of AI assistants such as Alexa and Siri has prompted feminist debates about why their largely male creators opted for a female voice as the default, reinforcing stereotypes about the ‘serving’ woman; a trope deliberately highlighted in films such as Ex Machina and in the extremely popular manufacture of female sex dolls and robots. Business Insider suggested that Amazon gave Alexa a female voice to appear more ‘caring’, as well as ‘sympathetic and agreeable’, and reported that when AI voices are associated with ‘care’, customer satisfaction increases. People on social media have highlighted this as another opportunity to reinforce existing attitudes about the appropriate way to speak to a woman who cannot say no, whether you can see her or not. It is not surprising, then, that at launch only 23 percent of Amazon Echo revenue came from women. Though this share is gradually increasing, it is clear whom Amazon is catering for. On top of this, a YouGov survey reported that two thirds of female owners said their device failed to respond to them. When testing and speech development are dominated by male voices, women users are the ones affected.
A large body of reports, data and lived experience points to a lack of diversity in AI throughout its entire pipeline. The effects extend beyond women and race: they also reach people who speak different languages, have different facial structures or vocal inflections, or have speech disabilities; in short, anyone who deviates from the typical white European/British male who tends to dominate the creation, testing and user spaces of technology. In an interview with the BBC, Gebru said she thinks ‘most, if not all tech companies are institutionally racist’, meaning set up at their foundations to marginalise people of colour, and that it is ‘not surprising that Black women have one of the lowest retention rates in the technology industry’.
Finishing with a call to action, she notes: ‘Unless there is some sort of shift of power, where people who are most affected by these technologies are allowed to shape them as well as be able to imagine what these technologies should look like from the ground up, and build them according to that, unless we move towards that kind of future, I am really worried that these tools are going to be used for more harm than good.’