December 21, 2024

An Artificial Intelligence (AI) algorithm is wading through the oceans of content social media users are posting about the Olympics with a singular mission: neutralising online abuse.


The 2024 Summer Olympics will generate more than half a billion social media posts, according to an estimate by the International Olympic Committee (IOC).

That doesn’t even include the comments. Assuming a conservative average of 10 words per post, that’s a body of text around 6,400 times longer than the King James Bible.

It would take close to 16 years to read if you gave each post one second of your time.


A lot of those posts will include people’s names, and upwards of 15,000 athletes and 2,000 officials will be subject to a mountain of attention right when they’re navigating some of the most high-pressure chapters of their careers.

Cheering and expressions of national pride will come side by side with hate, coordinated abuse, harassment and even threats of violence.


This poses a serious risk for Olympians’ mental health, not to mention their performance at the games. But the IOC is exploring a new solution.

The next time you post about the Olympics in the coming weeks, an AI-powered system will review your words to keep athletes safe from cyberbullying and abuse.


Online abuse has become a growing issue within elite sport, with many high-profile sportsmen and women calling for more to be done to protect them.

American tennis player Sloane Stephens, for example, revealed she received more than 2,000 abusive messages after one match.

England footballer Jude Bellingham has also spoken out over the racist abuse that he and other players receive on a regular basis.

The English Football Association recently announced it is funding a police unit to prosecute people who abuse players.


In recent years the Olympics has placed a growing focus on mental health in its efforts to protect athlete wellbeing, and the world of sport is reckoning with the role social media plays in that equation, says Kirsty Burrows, head of the Safe Sport Unit at the IOC.

“Interpersonal violence is something that can be perpetuated in physical form, but it can also be perpetuated online, and what we’re seeing across society is online abuse and vitriol is getting higher,” Burrows says. AI isn’t a cure-all, but it’s a crucial part of the effort to fight back.

“It would be impossible to deal with the volume of data without the AI system.”
Reading comprehension

It’s easy to set up a search filter that picks out certain noxious words or phrases, such as curses or racial slurs.

The Threat Matrix algorithm will churn through hundreds of millions of posts to weed out attacks during the games (Credit: Getty Images)

But language is often more subtle than that. When you’re dealing with an ocean of content, you need a tool that can sort through meaning. That’s where the latest generation of AI comes in.

Large language models – which learn to process and generate language by analysing patterns in huge swaths of text (think of tools like ChatGPT) – can help to identify the feelings and intentions behind a piece of text even if tell-tale words aren’t present.
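To make the contrast concrete, here is a minimal Python sketch under stated assumptions: the blocklist is invented, and the model (unitary/toxic-bert, a publicly available toxicity classifier) and the threshold are stand-ins, since the IOC has not published Threat Matrix's internals.

```python
# Minimal sketch contrasting a word filter with a meaning-based classifier.
# The blocklist, model choice (unitary/toxic-bert, a public toxicity model)
# and threshold are illustrative assumptions, not Threat Matrix's internals.
from transformers import pipeline

BLOCKLIST = {"idiot", "disgrace"}  # hypothetical noxious-word list

def keyword_filter(post: str) -> bool:
    """Naive approach: flag a post only if it contains a listed word."""
    return bool(set(post.lower().split()) & BLOCKLIST)

# Meaning-based approach: a transformer model scores the whole sentence.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def model_filter(post: str, threshold: float = 0.8) -> bool:
    result = classifier(post)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
    return result["label"] == "toxic" and result["score"] >= threshold

post = "Everyone saw you choke out there. Do your country a favour and quit."
print(keyword_filter(post))  # False: no blocklisted word appears
print(model_filter(post))    # likely True: the hostility is in the meaning
```

The second post contains no slur or curse a filter could catch; the abuse lives entirely in the sentence's intent, which is exactly the gap a language model is meant to close.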

The system the IOC is harnessing to weed out online abuse, known as Threat Matrix, is trained for exactly that purpose.

Tools like Threat Matrix take a multifaceted approach, says Qublai Ali Mirza, a senior lecturer at the University of Gloucestershire who has studied the intersection of AI and cyberbullying.


One aspect of this is sentiment analysis, which extracts the attitudes woven into a piece of text. Take sarcasm, for example – something a human might have no trouble spotting but computers are only just starting to comprehend. More advanced systems such as Threat Matrix can also process the ways that images and emojis can shift the meaning of a piece of text – and do it all while understanding the varying meanings and nuances in different languages and regions.
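One common way to make emojis legible to a text model, offered here as a generic illustration rather than Threat Matrix's actual method, is to rewrite them as words before classification, using something like the open-source Python emoji library.

```python
# Generic emoji-preprocessing sketch using the open-source `emoji` package
# (pip install emoji). An illustration, not Threat Matrix's actual method.
import emoji

def normalise(post: str) -> str:
    """Rewrite emojis as words so a text classifier can reason about them."""
    return emoji.demojize(post, delimiters=(" ", " "))

print(normalise("what a great performance 🤡🤡"))
# -> "what a great performance  clown_face  clown_face "
# The clown emojis flip an apparent compliment into mockery: a signal a
# plain word filter would miss, but one a language model can pick up once
# the emojis are spelled out as text.
```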

It would take an army of human beings to do the same kind of work, one prohibitively large even for an organisation with extensive resources like the IOC.

“The important thing here is it allows for an automated response,” Ali Mirza says. “It’s critical to nip this in the bud right at the beginning, and address it before it comes through to the recipient.”

The most effective response starts before the abuse has reached an athlete and before they’re put in the position of needing to ask for help, he says.


During the games, Threat Matrix will scan social media posts in over 35 languages in partnership with Facebook, Instagram, TikTok and X.

The system will identify abusive comments directed at athletes, their entourages and officials at the Olympic and Paralympic Games – though people can opt out if they prefer. It will then categorise different types of abuse and flag posts to a team of human reviewers.

“The AI does most of the heavy lifting, but human triage is critical,” Burrows says.

When the system spots a problem, a rapid response team will look over the posts for context that the AI might have missed, and then take steps to mitigate substantial problems.


That could mean reaching out to the victim to offer support, issuing takedown requests for posts that violate social media policies or even contacting law enforcement about more serious threats.

Often, this will happen before an athlete even has a chance to see the offending content, according to the IOC.
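Based only on the workflow described above, the flag-then-triage loop might be organised roughly as in the following sketch. The categories, threshold and stub classifier are all hypothetical; the point is the division of labour, with the model flagging at scale and a human making the final call.

```python
# Hypothetical sketch of the flag-and-triage workflow described above.
# Categories, thresholds and the stub classifier are invented for
# illustration; Threat Matrix's real internals are not public.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Flag:
    post_id: str
    text: str
    category: str  # e.g. "harassment", "hate_speech", "violent_threat"
    score: float   # model confidence, 0.0 to 1.0

review_queue: "Queue[Flag]" = Queue()

def classify(text: str) -> tuple[str, float]:
    """Stand-in for the AI model's category and confidence output."""
    return ("violent_threat", 0.95) if "hurt you" in text else ("none", 0.0)

def ai_scan(post_id: str, text: str) -> None:
    """Step 1: the model scores every post and flags it, rather than acting."""
    category, score = classify(text)
    if score >= 0.7:
        review_queue.put(Flag(post_id, text, category, score))

def human_triage() -> None:
    """Step 2: a human checks context the model may have missed, then escalates."""
    while not review_queue.empty():
        flag = review_queue.get()
        if flag.category == "violent_threat":
            print(f"{flag.post_id}: refer to law enforcement")
        else:
            print(f"{flag.post_id}: request takedown, offer the athlete support")

ai_scan("post-1", "I know where you train and I will hurt you")
human_triage()  # -> post-1: refer to law enforcement
```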

Be nice

Professional athletes are what Burrows calls “highly exposed” to cyberbullying, meaning they’re far more likely to face attacks than the average person.

This can have serious ramifications, especially when the hate is coordinated among factions of multiple abusers.

Recent history is littered with examples. During the 2022 Winter Olympics, Chinese figure skater Zhu Yi faced a torrent of abuse after falling during her events.

An abusive hashtag reportedly generated over 200 million views in just hours on the Chinese social media platform Weibo. “What everyone said on the internet really affected me,” she told the Xinhua News Agency.

“The problem now is psychological.”

Canadian tennis star Rebecca Marino told the New York Times that online bullying played a major role in her decision to take a years-long hiatus from professional sport in 2013.

The abuse was so severe for Shireen Limaye, captain of India’s women’s basketball team, that it drove her into a depression she kept secret from her family.

“I started having self-doubt and started hating my body,” she said in an interview with Germany’s DW News.

Rugby player Owen Farrell stepped aside from this year’s Six Nations Championship to focus on his mental health.

Though he didn’t pin the decision entirely on internet comments, Farrell admitted he had to delete social media to protect himself during a recent tournament.

“The abuse of athletes has never been greater,” says Emily Hayday, a senior lecturer in sport business at Loughborough University in the UK.

Hayday led a recent study of the toxic social media practices endured by athletes, which collected more than 240,000 tweets posted within 72 hours of events that triggered online harassment during previous Olympic and Paralympic games.

Threat Matrix can process text in over 35 languages, read into sarcasm and even unpack the ways that images and emojis change the context of a post

These trigger events, which included everything from missed penalties during a football match to one Olympian kissing his partner, sparked a wide variety of abuse.

The study then followed athletes’ reactions to online vitriol through interviews with players and people who know them.

Many suffered both psychological and physical harm, including an athlete who took their own life in response to the abuse.

“One athlete who experienced abuse online 10 years ago was still dealing with the fallout today,” Hayday says.

New solutions

Different kinds of abuse pose different challenges. Some cyber harassment clearly violates social media policies, such as hate speech around race, sexuality or nationality.

That’s easier to deal with in one sense, because the material is so clearly objectionable that platforms can just take it down, Hayday says.

But it can be more complicated for tech companies to moderate emotional abuse that isn’t tied to a person’s background.

In cases where the content is negative but less extreme, social media platforms’ desire to protect free speech means they can’t always delete the material, she says.

According to the study, that’s when professional sport organisations, coaches and others need to ensure athletes have the support they need.

One of the biggest problems Hayday saw in her research was the fact that some athletic organisations didn’t have any systems in place to deal with online abuse whatsoever.

That’s still a problem, though the situation is improving, she says.

“Even now there’s a lot of ambiguity. Athletes who faced abuse didn’t know where to go. Often there’s no one clearly responsible for it,” Hayday says.

“In some organisations it would be the communications department. But we’re seeing the likes of the IOC dipping their toes in the water to provide guidance and support and stronger systems for athletes, especially during game time.”

Attitudes around social media are shifting in the world of sport, and Threat Matrix is a solution many organisations are trying.

The various governing bodies of tennis adopted the AI tool to monitor threats and abuse during events at the beginning of 2024.

In the US, the National Collegiate Athletic Association (NCAA) launched an initiative to examine how Threat Matrix can be used to protect student athletes, coaches and officials at college sporting events.

The system has even been used to study the ways that AI can combat negativity in video games. Threat Matrix was piloted at the recent Olympic Esports tournament to great success, according to the IOC.

But even with the power that AI affords, online vitriol is a societal problem, and technology can’t solve societal problems alone, Burrows says.

Threat Matrix is just one part of a more comprehensive ongoing effort to protect athletes.

In general, social media companies have been experimenting with ways to tone down the negativity on their platforms.

Both Instagram and TikTok have introduced notifications that nudge users to choose kinder words when the platforms detect that people are about to post harsh or overly critical comments.
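The mechanics of such a nudge are simple to sketch. In this hypothetical example, a stand-in toxicity scorer checks a draft before it is posted and, past a threshold, interrupts with a prompt rather than blocking anything outright; neither platform has published its actual logic.

```python
# Hypothetical sketch of a pre-posting nudge in the spirit of Instagram's
# and TikTok's prompts; the scorer and threshold are invented stand-ins.
def toxicity_score(draft: str) -> float:
    """Stand-in for a trained classifier like the one sketched earlier."""
    return 0.9 if "pathetic" in draft.lower() else 0.1

def pre_post_check(draft: str, threshold: float = 0.6) -> str | None:
    """Return a nudge message if the draft looks harsh, else let it through."""
    if toxicity_score(draft) >= threshold:
        return "Are you sure you want to post this? Consider rephrasing."
    return None  # no nudge; the post goes through as normal

print(pre_post_check("That was a pathetic performance."))
# -> Are you sure you want to post this? Consider rephrasing.
```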

But in the world of sport where “highly exposed” individuals are more likely to face online attacks, part of that solution involves changing norms, attitudes and expectations among the athletes themselves.

That includes the IOC’s mindful social media course, which is meant to help athletes understand how both positive and negative content online can affect their mental health, teach a range of strategies and coping techniques and identify resources for additional support.

“It’s about taking a holistic approach to athlete wellbeing, and we’re committed to doing everything we can within sport to foster psychologically safe environments and destigmatise conversations around mental health,” Burrows says.
