
Meta’s Platforms Facebook and Instagram Approved Inflammatory Ads and Hate Speech Targeting Muslims During India’s Elections


Watchdog Probe Finds Meta’s Complete Failure to Stop Inflammatory Anti-Muslim Rhetoric in India Election Ads

A Baig, DELHI NEWS BUREAU 

NEW DELHI: In a damning revelation, the tech giant Meta, owner of Facebook and Instagram, has been accused of sanctioning the spread of hate speech, disinformation, and inflammatory rhetoric targeting India’s Muslim minority during the ongoing general elections. A joint probe by the non-profit organizations India Civil Watch International (ICWI) and Eko uncovered Meta’s complicity in approving political advertisements that crossed ethical and legal boundaries. The probe report discloses that Meta has failed to effectively combat the spread of Islamophobic hate speech, calls to violence, and anti-Muslim conspiracy theories on its platforms in India. According to the report, in some cases this inflammatory content has contributed to real-life incidents of riots and lynchings, highlighting the grave consequences of unchecked online hate.

The report, which was shared exclusively with The Guardian newspaper, exposes how Meta’s platforms Facebook and Instagram became conduits for the dissemination of AI-manipulated political ads that incited religious violence and propagated disinformation. The findings underscore the company’s failure to uphold its commitments to safeguarding the integrity of the electoral process and preventing the amplification of harmful narratives.

ICWI and Eko submitted a series of 22 test advertisements to Meta’s ad library, mimicking real-life hate speech and disinformation prevalent in India’s charged political climate. Alarmingly, 14 of these ads, which contained blatant Islamophobic slurs, calls for violence against Muslims, and false claims about opposition leaders, were approved by Meta’s systems.

One approved ad featured the phrase “let’s burn this vermin” referring to Muslims, while another called for the execution of an opposition leader falsely accused of wanting to “erase Hindus from India.” These ads were deliberately crafted to mirror the escalating anti-Muslim rhetoric and Hindu nationalist narratives that have gained traction during the electoral campaign.

Maen Hammad, a campaigner at Eko, condemned Meta’s actions, accusing the company of profiting from the proliferation of hate speech. “Supremacists, racists, and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning, and push violent conspiracy theories – and Meta will gladly take their money, no questions asked,” he stated.

The report highlighted Meta’s failure to recognize the approved ads as political or election-related, even though they directly targeted political parties and candidates. This oversight effectively allowed these ads to violate India’s election rules, which prohibit political advertising during specific periods leading up to and during voting.

Meta’s spokesperson defended the company’s position, stating that people who want to run political or election-related ads must go through an authorization process and comply with applicable laws. However, the investigation exposed alarming gaps in Meta’s content moderation and fact-checking mechanisms, raising serious concerns about the company’s ability to uphold its own policies and maintain a safe online environment during critical democratic processes.

Last month, Indian activists occupied Meta’s HQ in London to highlight the company’s “crimes against democracy” and reiterated calls for Meta to urgently crack down on hate speech and disinformation, and to safeguard its platforms ahead of India’s election.

South Asia Solidarity, a South Asian advocacy group in Britain, wrote on X: “Facebook/Meta must stop facilitating crimes against democracy!”
Foundation London Story, a London-based diaspora group that monitors disinformation and hate speech in India, also wrote: “Meta is a crime scene for crimes against democracy. Today, we occupied @Meta’s HQ in London, to highlight the company’s crimes against democracy. Meta is enabling hate and disinformation to spread on its platform, ahead of the India elections.”

The findings come amid growing concerns over the role of social media platforms in amplifying hate speech, misinformation, and divisive narratives during elections, particularly in countries with diverse populations and long-standing communal tensions.

India, the world’s largest democracy, has witnessed a surge in anti-Muslim rhetoric and Hindu nationalist sentiments during Prime Minister Narendra Modi’s tenure. Human rights groups and activists have accused the Modi government of pushing a Hindu-first agenda, leading to the increased persecution and oppression of India’s Muslim minority.

During the current election campaign, the ruling Bharatiya Janata Party (BJP) has been criticized for using anti-Muslim rhetoric and stoking fears of attacks on Hindus to garner votes. Modi himself has made controversial remarks, referring to Muslims as “infiltrators” who “have more children,” though he later denied targeting Muslims specifically.

The report’s findings underscore the capacity of social media platforms to amplify existing harmful narratives and exacerbate societal tensions, particularly in the context of high-stakes elections.

Nick Clegg, Meta’s president of global affairs, had described India’s election as “a huge, huge test” for the company, claiming that it had undergone “months and months and months of preparation.” However, the report’s findings have exposed the inadequacies of Meta’s mechanisms and raised doubts about the company’s commitment to upholding ethical standards and protecting vulnerable communities.

Hammad criticized Meta’s lack of a comprehensive plan to address hate speech and disinformation during critical elections, stating, “It can’t even detect a handful of violent AI-generated images. How can we trust them with dozens of other elections worldwide?”

The revelations have reignited the debate around the need for stricter regulations and accountability measures to ensure that social media platforms prioritize the safety and well-being of their users over profit motives. As the world grapples with the challenges of mitigating online harm and preserving the integrity of democratic processes, the role of tech giants like Meta in enabling the spread of hate and division has come under increased scrutiny.

Critics have argued that Meta’s repeated failures to effectively address these issues, despite public assurances, undermine its credibility and highlight the need for external oversight and enforcement mechanisms.

As India’s election enters its final stages, the report’s findings serve as a stark reminder of the urgent need for social media companies to take meaningful action to combat hate speech, disinformation, and inflammatory content on their platforms. The integrity of democratic institutions and the safety of vulnerable communities hang in the balance, demanding a renewed commitment from tech giants to uphold ethical standards and prioritize the principles of truth, unity, and human rights over short-term financial gains.
