Hate Speech Findings Led to Deeper Analysis: Facebook Tells Indian Media on 2019 Election Manipulation Allegations
By: Deepshikha Soni
Facebook, facing allegations of 2019 general election manipulation raised in a New York Times report, responded to queries from The Indian Express, saying the hate speech findings led to a “deeper, more rigorous analysis” of its algorithmic recommendation system in India.
Two years ago, on February 4, 2019, a Facebook researcher created a new user account in Kerala in an experiment to understand how Facebook’s recommendation system shapes the content people consume. The experiment followed a simple rule: accept every recommendation Facebook generated to join groups, watch videos and explore new pages.
According to The Indian Express, the result left the researcher stunned: within three weeks, the test account’s feed was filled with fake news, hate speech, misinformation, and incendiary images.
“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote. The feed was flooded with graphic content, including a post showing the beheading of Pakistani nationals and dead bodies in white sheets.
“The report was one of the dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India. They provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential impact on local culture and politics and fails to deploy the resources to act on issues once they occur”, the New York Times had reported.
The NYT report also raised concerns over Facebook's lack of resources and expertise in India's 22 officially recognised languages.
A Facebook spokesperson told The Indian Express, “This exploratory effort of one hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems, and contributed to product changes to improve them. Product changes from subsequent, more rigorous research included things like the removal of borderline content and civic and political groups from our recommendation systems.”
Addressing the language barrier, the spokesperson said, “Separately, our work on curbing hate speech continues, and we have further strengthened our hate classifiers, to include four Indian languages.”
“The internal documents, obtained by a consortium of news organisations that included The New York Times, are part of a larger cache of material called The Facebook Papers. They were collected by Frances Haugen, a former Facebook product manager who became a whistle-blower and recently testified before a Senate subcommittee about the company and its social media platforms. References to India were scattered among documents filed by Ms Haugen to the Securities and Exchange Commission in a complaint earlier this month,” the report said.
Frances Haugen worked at the company for two years before leaving in May this year. In an episode of the show ‘60 Minutes’ on the CBS television network, she said, “There were conflicts of interest between what was good for the public and what was good for Facebook.” She added, “And Facebook over and over again chose to optimise for its own interests like making more money.”
Meanwhile, in response to Haugen's comment, Facebook spokesperson Lena Pietsch denied the allegations. She said that the company was focused on making significant improvements “to tackle the spread of misinformation and harmful content”.
“Every day our teams have to balance protecting the ability of billions of people to express themselves openly with the need to keep our platform a safe and positive place,” Pietsch told CNN. “To suggest we encourage bad content and do nothing is just not true,” The Indian Express reported.
Another employee turned whistle-blower, Sophie Zhang, a data scientist who worked at Facebook for three years, alleged that Facebook selectively took action against fake accounts during the 2019 general elections. Among the fake accounts created by various political parties to influence polls, those directly linked to a BJP lawmaker were not removed, and despite repeated reminders, Facebook did not address the problem.
Zhang told NDTV, “We took down three of the networks, two INC, one BJP network. We were about to take down the last network, but suddenly, they stopped because they realised that the fourth network was directly, personally run by the BJP politician, and that was the network that I was unable to do anything about. The only case in which we knew who was responsible was a BJP politician that I was not able to take down.”
However, a Facebook spokesperson disputed Zhang’s claims, saying, “We've already taken down more than 150 networks of coordinated inauthentic behaviour. Around half of them were domestic networks that operated in countries around the world, including those in India. Combatting coordinated inauthentic behaviour is our priority. We're also addressing the problems of spam and fake engagement. We investigate each issue before taking action or making public claims about them.”
Meanwhile, in what is widely seen as damage control, Facebook is undergoing a major rebrand: its parent company has changed its name to Meta, though the individual apps Facebook, Instagram and WhatsApp will retain their names. Meta is building an integrated virtual-reality open world that it says will take social media to the next level by introducing face-to-face holographic interaction.