The chair of the anti-discrimination body Kick It Out has voiced fears that Twitter will be unable to cope with online abuse during the football World Cup, following a wave of job losses at the social media platform.
Sanjay Bhandari said he was “deeply concerned” by reports of a reduction in the trust and safety team at Twitter, as well as the departure of the executive in charge of the department.
“I am deeply concerned that the reduction in the trust and safety team and the departure of the leader of that team will be taken as a bright green light for hate,” said Bhandari. “I fear that industrial-scale levels of hate during the World Cup will go unchecked by Twitter.”
Elon Musk, the new owner of Twitter, axed approximately 50% of Twitter’s 7,500-strong workforce this month. In the wake of the firings, Twitter’s head of trust and safety, Yoel Roth, said 15% of his team had been let go.
Roth left the company soon after. Last weekend, more than 4,000 Twitter contractors, including people who worked on content moderation, reportedly had their roles terminated.
Overnight, there were reports of widespread resignations among the remaining 3,700 staff at Twitter after Musk set a 10pm GMT deadline for workers to commit to being “extremely hardcore” or else leave with three months’ severance pay.
Bhandari added that moderation on Twitter had been “opaque, inconsistent and understaffed at the best of times”, and he was concerned that the platform would struggle to cope with a rise in user engagement among football fans after the World Cup begins on Sunday.
Before Roth departed, he said Twitter had been subjected to a coordinated trolling campaign that bombarded the platform with abusive content in an apparent attempt to convince users that Twitter had relaxed content guidelines.
A recent study revealed that more than 300 abusive tweets are sent to Premier League footballers every day and nearly seven in 10 players receive abuse on Twitter. The research by the Alan Turing Institute, the national institute for data science and artificial intelligence, found that 60,000 abusive tweets were directed towards Premier League players in the first half of last season.
One of the authors of the report said Twitter’s ability to deal with abuse of footballers could be affected by the job cuts.
“We are aware that Twitter are working with a smaller workforce,” said Pica Johansson, a researcher in the online safety team at the institute. “And there might be, for that reason, less ability for them to respond quickly to some of this type of abuse that we do see.”
The institute’s research found that less than 10% of the abusive tweets were identity attacks that referred to a protected characteristic such as race, gender or sexuality. However, Hannah Kirk, an online safety researcher at the institute, added that racist or nationality-based abuse might be more prevalent at the World Cup.
“I envisage the big difference between the Premier League and the World Cup is global attention and also heightened awareness of nationalism, which potentially intensifies the stereotypical links between race and nation. We might then see a little bit more racism or nationality-directed abuse than we would in the Premier League,” said Kirk.
Nonetheless, the Football Association is confident it will be able to act if Twitter becomes a focus for abuse of its players, as it did during last year’s European Championship.
Football bodies within England established a fast-track reporting system last year, and the FA has confirmed with Twitter that the same support will be available in the coming month and that resources will be made available for moderation by the social media platform.
The FA also uses third-party agencies to monitor for abuse and report it on players’ behalf. This week, Fifa and the international players’ union FIFPRO announced a similar scheme, a “social media protection service” (SMPS) that would be available to players in all 32 nations competing at the World Cup.
As well as allowing for the scanning and reporting of offensive content, the SMPS will let players with social media accounts automatically hide comments that are judged offensive. The service will apply only to posts on Facebook, Instagram and YouTube, with Twitter understood to have been excluded from the process due to technical issues.
Twitter has been approached for comment.