15 March 2019 dawned in Christchurch as a day of grief and carnage, and as one of the darkest hate crimes in the island nation's history. Worshippers who gathered for Friday prayers at the Al Noor mosque never returned home, victims of white supremacists acting under the banner of far-right politics. Yet among the factors that enabled such an attack were the platforms that have fallen short of regulating extremist content: tech giants such as Facebook and YouTube. “They should offer countering or alternative viewpoints,” advises Jack McDevitt, director of Northeastern University’s Institute on Race and Justice.
McDevitt, who has studied hate crimes since the 1990s, argues that social media has become a breeding ground for people like the suspected perpetrators of the shooting during a prayer service Friday afternoon at two mosques in Christchurch, New Zealand, in which 50 people were killed and more than 30 were injured. The attack was reportedly forewarned on Twitter and 8chan, live-streamed on Facebook, and shared widely on YouTube and Reddit. As videos, posts, and snapshots of the massacre proliferated, the social media giants hosting the content came under scrutiny for not acting more quickly to stop it from spreading.
In an increasingly globalised world, the internet makes it child’s play for people to find camaraderie in others who share their views. In the case of perpetrators such as those reportedly behind the Christchurch attacks, social media enables them to meet others who reinforce their biases, and it allows them to spread their ideas, no matter how radical, with only the click of a mouse.
Harminder Singh