The internet has never been a particularly safe space for women and girls, but in the past decade or so, it’s taken a turn for the worse.
We’ve all felt it. Last year, Cambridge academic Dr Ally Louks was bombarded with online abuse, including rape and death threats. Her crime? Posting a selfie with her dissertation on X. “You are the dumbest f*cking bitch I have ever seen on the internet,” one user commented. “Imagine thinking you deserve taxpayer money for writing up that useless piece of sh*t thesis that nobody will ever read.”
Once the preserve of fringe online forums, incel rhetoric has been steadily bleeding into the mainstream, emboldened by the rise of misogyny influencers like Andrew Tate (not to mention the return of President Donald Trump) and the deregulation of tech giants like Meta and X.
Online abuse against women is not inevitable. Since the Online Safety Act became law in 2023, tech companies (from search services to social media apps) have a responsibility to protect people in the UK from online harms, including those that disproportionately impact women and girls, such as domestic abuse, deepfake abuse, and online misogyny. And it’s up to Ofcom, the UK’s communications regulator, to publish guidance and hold tech companies accountable.
Earlier this week, Ofcom released guidance on how tech companies can protect women and girls online. Right on cue, the internet was awash with angry and abusive men describing the guidance as sexist.
One wrote, “Only focusing on misogyny is disgraceful sexism,” while another went as far as saying, “Ban women and children from accessing digital networks, allowing access only through a fully certified, accredited male gatekeeper. Simple.” As broadcaster and campaigner Jess Davies points out, “Women’s ‘mean’ comments about men do not even begin to skim the surface of the vitriol and deep-rooted misogyny which exists online that is putting women in real harm’s way.”
The guidance identifies nine areas where tech firms must do more to improve women and girls’ online safety, urging a “safety-by-design approach” that incorporates safety into the “operation and design of their services, as well as their features and functionalities.”
Ofcom has also set out a range of practical measures that tech companies can take, including embracing technology to prevent intimate image abuse, training moderation teams to deal with domestic abuse, and adding user prompts to encourage people to reconsider before posting harmful content.
For image-based abuse specifically, Ofcom’s Jessica Smith says, “Tech firms should sign up to a technology called hash-matching, which is basically a database of images which enables any image to be identified at scale wherever it is shared on a platform.
“It is really innovative technology. What that means is it does not have to be reported every time it is uploaded. It means it is reported once and wherever it exists it is identified.”
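For readers curious how hash-matching works under the hood, the idea Smith describes can be sketched roughly as follows. This is a deliberately simplified toy illustration: it uses an exact cryptographic hash, whereas production systems (such as those used by StopNCII) rely on perceptual hashing that tolerates resizing and re-encoding. All function names and sample data here are illustrative, not drawn from any real platform’s code.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a fixed-length fingerprint for an image's raw bytes.

    Real hash-matching systems use perceptual hashes that survive
    compression and cropping; an exact hash keeps the sketch simple.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# The shared database: fingerprints of images reported once.
# The image itself never needs to be stored or re-shared.
reported_fingerprints = {fingerprint(b"<bytes of a reported image>")}

def is_known_reported_image(upload: bytes) -> bool:
    """Check a new upload against the database at upload time."""
    return fingerprint(upload) in reported_fingerprints

# A re-upload of the reported image is caught automatically,
# without the survivor having to report it again.
print(is_known_reported_image(b"<bytes of a reported image>"))  # True
print(is_known_reported_image(b"<bytes of some other image>"))  # False
```

This is what Smith means by “reported once”: the database holds only fingerprints, so every subsequent upload anywhere on the platform can be matched against it at scale.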
Cally Jane Beech, a survivor of deepfake abuse and GLAMOUR’s Activist of the Year, supported the guidance, noting, “I want things to be better, for my daughter, and for women and girls all over the UK. We should all be in control of our own online experience so we can enjoy the good things about it. Tech companies need to be made more accountable for things being hosted on their sites.”