AI is Not as Harmless as You Think
The opinions expressed in this article are the writer’s own and do not reflect the views of Her Campus.
This article is written by a student writer from the Her Campus at Nottingham chapter.
AI usage has skyrocketed in the past couple of years. I remember watching Spielberg’s A.I.
Artificial Intelligence as a child and being horrified by the concept of a sentient robot. But
what once seemed like a thing of the future, something I was sure I’d never see in my
lifetime, has become a normal feature of everyday experience for me. Scroll TikTok for a
couple of minutes and you’ll see a creator using an AI ‘filter’ to see what she would look like
as a cartoon character, scroll further and you’ll come across a Subway Surfers video with an
AI voice overlay spouting some nonsense ‘AITA’ Reddit story. Long story short, AI is
everywhere and it’s unavoidable. OpenAI reported that their chatbot, ChatGPT, reached 1 million users within a week of its launch in late 2022, and by early 2023 it had accumulated over 100 million monthly users. However, beneath the fun and exciting possibilities AI has to offer lies a multitude of costs, and we should be scared of the consequences.
In August, The Guardian published an article called ‘How TikTok bots and AI have powered
a resurgence in UK far-right violence’. It recounts the events of the Southport stabbings,
starting with an AI-generated image shared on X of ‘bearded men in traditional Muslim dress
outside the Houses of Parliament, one waving a knife, behind a crying child in a union jack
T-shirt’. The tweet, captioned “We must protect our children!”, was viewed over 900k times.
AI is being exploited to share extremist propaganda, leading to violence, rioting and hate.
Funnelling harmful content through the means of generative AI, namely chatbots and
imaging, allows this bigoted ideology to reach wider audiences on social media platforms
under the guise of being a ‘meme’ without individuals understanding what they’re engaging
with. Slowly, AI has come to play a significant role in content creation and, consequently,
social, political, and cultural conversations. However, research has shown that AI has begun revealing racial biases and a lack of diversity and cultural sensitivity. AI relies on data to function, and when that data is not comprehensive or diverse enough, bias can creep in. For example, some AI systems have recently been singled out for an inability to generate images of interracial couples. When a system is fundamentally biased, it is symptomatic of a much bigger problem with AI. In a world where biased AI is everywhere,
on our screens and on our children’s screens, it is no wonder that there has been a rise in
extreme right-wing ideology amongst children. In fact, MI5 boss Ken McCallum has said that online propaganda is to blame for the rise in children being investigated for possible involvement in terror activities. You would think such a hazardous tool would be
safeguarded, and yet AI is still readily available.
Dubbed “the 21st century’s answer to Photoshopping” by The Guardian, deepfakes are
videos, pictures or audio clips of fake events generated by AI to look real. Earlier this year,
explicit AI-generated images of singer Taylor Swift went viral on X (formerly Twitter),
receiving over 47 million views in under 24 hours. Although the account was soon suspended, the damage had been done, and the images still circulate on social media today.
Deepfake pornography has affected many, from Hollywood celebrities, to journalists, to
politicians, many of whom are women. A study conducted by Deeptrace Labs found that
96% of pornographic deepfakes are of women. And it’s not just celebrities who are affected. In
Almendralejo, Spain, more than twenty girls, aged eleven to seventeen, came forward as
victims of AI-generated pornographic images. It’s not just cyberbullying; AI is being utilised
as just another vessel for misogyny. Being the subject of a deepfake is humiliating,
belittling, and fundamentally, a form of gendered punishment. As Aja Romano succinctly
puts it: “the patriarchy has another weapon to wield”.
Aside from its cultural impact, AI is an ecological time bomb. Unknown to many, AI systems are huge consumers of water. Housed in data centres, they generate massive amounts of heat, and water-based cooling systems are often used to keep them at a reasonable temperature.
Microsoft alone used more than 2,500 Olympic-sized swimming pools of water in its data
centres last year (The Independent). Waste Free Planet reported that, by 2027, AI is predicted to consume as much water as all of New Zealand. AI is environmentally devastating.
When one single AI prompt is equivalent to pouring out an entire bottle of water, it begs the
question: is this really necessary? Does my need to see myself in a 90s yearbook really
outweigh the aquatic cost? I don’t feel the need to explain what the loss of so much water
will do to our planet: it’s the same talking points used for climate change, and we’ve been
having that conversation over and over for many years now. But since the environmental
consequences of AI are not spoken about – I wasn’t even aware of them until actor Ayo
Edebiri reposted an infographic on her Instagram about it – how long will it take for
preventative measures to be put in place, and, as alarming as it sounds, will they ever be?
Although companies like Microsoft, Google, and Meta have pledged to counteract their
environmental impact by replenishing more water than they consume by 2030, it is unclear
how they will do so. Only time will tell.
Ultimately, it’s difficult to come to the conclusion that the benefits of AI usage outweigh the
cons, at least as a student. Do I need help with my essay introduction badly enough that I am willing to fuel a natural-resource-hungry, culturally biased, misogynistic machine with my own data? I find that hard to believe.