How this Kerala youth with Microsoft is leading a crusade against harmful content marring online education
For Krishna Selvaraj, a 34-year-old IT professional with Microsoft in the United States, the trigger was a news article a few years ago about an online interaction among students that descended into cyberbullying and led to the suicide of a 12-year-old.
Having detested corporal punishment and intimidation since his school days in Kochi, Kerala, he resolved at that moment to build a technological solution that would cleanse online education of harmful content of all kinds, whether explicit images or incendiary language. The goal was a secure, positive, and conducive learning environment in which children can thrive academically and socially without worrying about their emotional or physical safety and well-being.
Thus, as leader of the Trust and Safety initiative at Microsoft Flip, a tool that lets students record videos or images with narration and export them to social media platforms, Mr. Selvaraj built an artificial intelligence (AI) and machine learning (ML)-enabled content moderation model. Though currently restricted to Microsoft Flip, the open-source technology he has developed could in future be applied to clean up online educational platforms across the spectrum.
“It’s a one-of-its-kind tool, which in the last three years alone has taken down 1.30 lakh pieces of harmful content from across the world, while also warning offenders that their inappropriate actions have been flagged. Nearly 25,000 of those posts were hate speech alone. This was followed by content spreading false information, invading privacy, and promoting bullying, self-harm, violence, and so on,” he says.
Thanks to his innovation, Mr. Selvaraj, originally from Gandhi Nagar in Kochi, has since become one of the top global contributors to ML and AI in education.
ML plays a pivotal role in ensuring online safety on educational platforms by leveraging its ability to analyse vast amounts of data and detect and mitigate potential threats in real time. One significant application is the automatic identification of cyberbullying and harassment: advanced algorithms analyse the language and context of messages to detect harmful behaviour such as bullying or hate speech, and immediately flag or block such content. Similarly, machine learning models are trained to recognise inappropriate images, videos, or links, preventing students from being exposed to explicit or harmful material.
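The score-then-act pipeline described above can be sketched in a few lines. This is a toy illustration, not the actual Microsoft Flip system: the scoring function here is a hand-written stand-in for a trained ML classifier, and the thresholds and lexicon are invented for the example.

```python
# Toy sketch of an automated moderation pipeline: score a message,
# then allow it, flag it for human review, or block it outright.
# In a real system, toxicity_score would be a trained ML model's
# prediction; here it is a simple weighted-lexicon stand-in.

FLAG_THRESHOLD = 0.5    # assumed value, for illustration only
BLOCK_THRESHOLD = 0.8   # assumed value, for illustration only

# Invented lexicon with weights; real systems learn from labelled data.
TOXIC_TERMS = {"idiot": 0.6, "hate": 0.5, "stupid": 0.4}

def toxicity_score(text: str) -> float:
    """Return a score in [0, 1]; stand-in for an ML model's output."""
    words = (w.strip(".,!?") for w in text.lower().split())
    score = sum(TOXIC_TERMS.get(w, 0.0) for w in words)
    return min(score, 1.0)

def moderate(text: str) -> str:
    """Decide the action for a message: 'allow', 'flag', or 'block'."""
    score = toxicity_score(text)
    if score >= BLOCK_THRESHOLD:
        return "block"   # removed immediately; the offender is warned
    if score >= FLAG_THRESHOLD:
        return "flag"    # queued for human review
    return "allow"

print(moderate("great presentation today"))    # allow
print(moderate("you idiot"))                   # flag
print(moderate("stupid idiot, i hate you"))    # block
```

The two thresholds reflect a common design choice in moderation systems: only high-confidence detections trigger automatic removal, while borderline cases go to human reviewers.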
Screening process
The screening is not based on mere keywords or a checklist; it takes the entire context into account when deciding whether content is age-appropriate. Nor is it restricted to content posted by students; it applies to educators as well, says Mr. Selvaraj, who counts the former Chief Justice of India K.G. Balakrishnan among his mentors in developing the technology.
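The point about keywords versus context can be made concrete with a small illustration. The naive filter below (entirely invented for this example, not part of any real system) both wrongly flags a benign biology sentence and misses bullying phrased without any listed word, which is why context-aware ML models are used instead.

```python
# A naive keyword filter, shown only to illustrate its failure modes.
# It produces a false positive on benign text and a false negative on
# bullying that uses no blocklisted word; context-aware screening
# avoids both by modelling the whole sentence.

BLOCKLIST = {"kill", "attack"}  # invented minimal blocklist

def keyword_flag(text: str) -> bool:
    """Flag a message if any blocklisted word appears in it."""
    words = (w.strip(".,!?") for w in text.lower().split())
    return any(w in BLOCKLIST for w in words)

# False positive: a harmless science sentence gets flagged.
print(keyword_flag("our immune cells kill bacteria"))   # True

# False negative: bullying with no blocklisted word slips through.
print(keyword_flag("nobody here wants you around"))     # False
```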
Having done his Master's in Computer Science, he was fascinated by the potential of machines to learn from data and make intelligent decisions, which led him to dive deeper into ML, AI, and data science. He has been in the U.S. for a decade now and has to his credit a book titled The Future of Learning: AI's Impact in Education.