Published: 21st April 2020
This AI can help reduce caste-based abuse on social media. Here's how
This is a Behaviour Identification Model that can detect caste-based abuse within online communities which contain a spectrum of information
The framers of the Indian Constitution wrote clearly that there shall be no discrimination on the basis of race, religion, colour, caste, creed or language. Seventy years on, caste discrimination still pervades every walk of life, and social media is no exception. Even though platforms like Facebook, Twitter, Instagram and TikTok have algorithms that detect offensive posts, they have no means of ascertaining whether a user is being discriminated against for their caste. Social Media Matters, a social change advocacy group, has partnered with Spectrum Labs to launch an Artificial Intelligence tool that can detect caste discrimination on these platforms.
The Behaviour Identification Model detects caste-based abuse within online communities that host a spectrum of content, from the personal and the mundane to political opinion and community-building. Wherever casteist discrimination appears, the model comes into play. "Caste discrimination is ingrained in people. It's one of the oldest forms of evil still existing in Indian society. Sadly, it is also reflected in cyberspaces," said Amitabh Kumar, founder of Social Media Matters. "We sifted through millions of datasets to form the guideline for the AI. With the COVID-19 crisis looming large, the social media moderators who check which posts violate community guidelines are indisposed and the pressure is completely on the AI. We need an AI development that can detect casteist abuse on social media," he added.
Amitabh's organisation, together with Spectrum Labs, has created an Artificial Intelligence tool that will help social media platforms such as Facebook, TikTok, Twitter and Instagram detect and remove caste-based abuse. "It will decrease the time taken for detection and reduce the stress that human moderators go through in constantly dealing with abuse. Initially, the model is trained to work with several languages - English, Hindi, and a Hindi-English mix - and we'll continue to upgrade it further. The next step is to introduce Tamil, Marathi and Bengali," he said.
Amitabh Kumar, Founder of Social Media Matters
But that's not all it does. The AI can also pick up positive posts. "There are discussions on caste online, and seminars on caste that are posted on the internet. It is very important to pick those up as well. Spreading awareness is necessary," he added. "We need to proactively tell people why it is evil. There is a need for a lot of work, both offline and online, and I think our software can do both - it will serve the purpose of removing abuse and raising awareness," Amitabh said.
The hardest part of building an AI model that can effectively detect caste discrimination online is defining and understanding not only what caste discrimination is, but also what it is not, said Justin Davis, CEO of Spectrum Labs. "Amitabh and his team at Social Media Matters have dedicated themselves to raising awareness about injustice and discrimination in many forms, so we could not have asked for better partners. Their insights and expertise have helped us navigate the nuances, history and politics of caste discrimination to build a tool that can combat it effectively and inclusively. We were humbled and honoured to work with them," he added.