[imagesource: AFP/Saul Loeb]
Another day, another damning indictment of Facebook and its internal decision-making processes, which consistently prioritise growth over the safety of its users.
Thousands of pages' worth of internal documents known as “The Facebook Papers” have been obtained by 17 news organisations, with journalists poring over them in great detail.
What we have already learnt points to a cruel, profit-focused machine that is currently facing its gravest crisis since the Cambridge Analytica scandal.
One of those news organisations, The Financial Times, has outlined a number of revelations that have come to light.
Firstly, Facebook has a serious language problem:
Facebook is often accused of failing to moderate hate speech on its English-language sites, but the problem is much worse in countries that speak other languages, even after it promised to invest more after being blamed for its role in facilitating genocide in Myanmar in 2017…
According to one document, the company allocated 87 per cent of its budget for developing its misinformation detection algorithms to the US in 2020, versus 13 per cent to the rest of the world.
Even when Facebook’s internal research marked certain countries as “high risk” with regards to hate speech and political instability, scant resources were allocated to combat the spread of dangerous content.
Then there’s the matter of its algorithms, which even Facebook itself doesn’t understand at times:
One September 2019 memo found that men were being served up 64 per cent more political posts than women in “nearly every country”, with the disparity particularly pronounced in African and Asian countries…
A memo from June 2020 found it was “virtually guaranteed” that Facebook’s “major systems do show systemic biases based on the race of the affected user”.
With billions of users around the world, Facebook’s moderators cannot keep up.
This has led to an increased reliance on artificial intelligence programmes set up to spot and take down hate speech and abuse.
Again, that system is flawed:
According to a March 2021 note by a group of researchers, the company takes action on as little as 3 to 5 per cent of hate speech and 0.6 per cent of violent content. Another memo suggests that it may never manage to get beyond 10 to 20 per cent, because it is “extraordinarily challenging” for AI to understand the context in which language is used.
Nevertheless, Facebook had already decided in 2019 to rely more on AI for hate speech and to cut the money it was spending on human moderation.
Facebook said in May of this year that the company’s revenue rose 48 per cent to $26.2 billion in the first three months of the year, while profits nearly doubled to $9.5 billion.
Yet it is cutting down on the money it spends on employing moderators?
One Facebook researcher created an account to experience the social media site as a person living in Kerala, India. The account operated on a simple premise – follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos, and explore new pages.
The researcher wrote, “I have seen more images of dead people in the past three weeks than I’ve seen in my entire life total.”
The Daily Beast has also listed a few revelations that stand out, based on reporting from numerous media outlets:
Zuckerberg opted to allow Vietnam’s ruling Communist Party to censor “anti-state” posts, effectively handing over control of the platform to the government…
That decision was reportedly made after the Vietnamese government threatened to block the platform in the country.
Facebook said it made the move “to ensure our services remain available for millions of people who rely on them every day.”
The social media giant has also been criticised for aiding and abetting human trafficking:
…internal Facebook communications described how women were trafficked on its platform, some of them enduring sexual abuse and kept from escaping while going without food or pay.
In 2018, Facebook employees flagged Instagram profiles that appeared to sell domestic labourers, but internal documents from September 2019 reviewed by reporters showed little effort by the company to address the problem.
Only a threat from Apple to pull the Facebook and Instagram apps from the App Store spurred the company to take any sort of meaningful action.
These revelations are far from the final word, as reporters continue to work their way through the massive trove of information.
The worst may still be to come.