Last year, in a memo which was leaked to the press, Twitter’s former CEO said, “We suck at dealing with trolls and abuse on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day.”
When the head of one of the world’s largest social media platforms admits that his organisation is failing to make the internet safe – we know the situation is dire.
Many studies show that a person’s first few posts can predict whether they will exhibit troll behaviour later on. Apps like “Rethink” are integral to countering such trolls. Google’s predictive technology, the “Perspective API”, is another example of a tool that may help people think before they post.
I first came across Rethink on “Shark Tank”, a reality television show that connects entrepreneurs with investors. What “Rethink” does is all in its name – it gives you an opportunity to ‘rethink’ your comment, to ensure that you stop before you post. Similarly, Google’s Perspective API uses a scale that indicates whether your comment could be hurtful.
Both these technologies are available as additions to your browser. They won’t force you not to say something mean online, but they will make you think before you speak – or rather, post. Surprisingly, when put into use, these technologies have been found to be effective about 93% of the time – a significant figure.
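To make this concrete, here is a minimal sketch of how a Rethink-style prompt could sit on top of Perspective-style toxicity scoring. The endpoint URL follows Perspective’s documented `comments:analyze` method, but the 0.8 threshold and all helper names here are my own illustrative choices, not part of either product.

```python
# Sketch only: ANALYZE_URL follows Perspective's documented endpoint,
# but the key placeholder and helper functions are illustrative.
ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the 0-to-1 summary score out of a Perspective response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def should_rethink(score, threshold=0.8):
    """A Rethink-style prompt fires when the score crosses a threshold.

    The 0.8 cut-off is an assumed value, not one either product documents.
    """
    return score >= threshold
```

In practice you would POST `build_request(...)` to `ANALYZE_URL` (for example with the `requests` library) and feed the JSON reply to `toxicity_score`; if `should_rethink` returns true, the extension would show its “are you sure?” prompt instead of posting.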
It is extremely important that we as a community highlight the dangers of cyberbullying through government-sponsored advertisements (a step that, incidentally, our government has yet to take) and help people self-censor. But sometimes, you just need to fight trolls with troll behaviour.
One example was the Zero Trollerance campaign by the activist group Peng! Collective. Here, Twitter bots were used to automatically target people whose tweets appeared to be abusive. These users were given a somewhat tongue-in-cheek offer to take part in a ‘self-help programme’ to end their trolling.
Bots have entered public discourse thanks to social media. So the next question is: if AI can do virtually everything, why can it not fight trolls who indulge in cyberbullying online?
The answer is surprisingly simple – bots struggle to pick out trolls because humans often don’t agree on what constitutes harassment, which makes it very hard to train computers to detect it.
However, community mechanisms on social media may be an effective way to turn the very technology being abused into a defence. For example, a program or bot could maintain a list of users who have been flagged as trolls – something like a no-fly list. Yes, this option could be misused, but its pros greatly outweigh the cons.
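The flag-list idea above can be sketched in a few lines. Everything here is hypothetical – the class name, the threshold of three distinct reporters, and the decision to ignore repeat reports from the same person are assumptions a real platform would have to tune and audit.

```python
# Hypothetical community "no-fly list" for trolls: a user is blocked
# once enough *distinct* community members have flagged them.
FLAG_THRESHOLD = 3  # assumed value; a real system would tune this

class TrollList:
    def __init__(self, threshold=FLAG_THRESHOLD):
        self.threshold = threshold
        # user -> set of reporters, so repeat reports don't stack
        self.flags = {}

    def flag(self, user, reporter):
        """Record that `reporter` flagged `user` as a troll."""
        self.flags.setdefault(user, set()).add(reporter)

    def is_blocked(self, user):
        """True once distinct reporters reach the threshold."""
        return len(self.flags.get(user, set())) >= self.threshold
```

Counting distinct reporters rather than raw reports is one simple guard against the misuse the paragraph above concedes: a single angry user cannot put someone on the list by reporting them repeatedly.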
In 2015, researchers built Botivist, a system of Twitter bots that does activist work: recruiting humans to do social good for their community.
Botivist was an experiment to find out whether bots could recruit people to contribute ideas about tackling corruption, rather than just complaining about it.
When it noticed relevant tweets, Botivist would reply with questions like “How do we fight corruption in our cities?” and “What should we change personally to fight corruption?” It then waited to see whether people replied, and what they said. It asked those who engaged follow-up questions and invited them to volunteer to help fight the problem they were complaining about. Such mechanisms can also be experimented with and put into action against cyberbullying.
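A Botivist-style trigger can be sketched as keyword matching plus a pool of action-oriented prompts. The keyword set and matching logic below are my own illustration of the mechanism, not the actual implementation the Botivist study used; only the two example questions come from the text above.

```python
import random

# Illustrative trigger keywords and reply prompts for a Botivist-style
# bot. The prompts are the examples quoted in the article; the keyword
# matching is an assumed, simplified stand-in for Botivist's real logic.
KEYWORDS = {"corruption", "corrupt"}
PROMPTS = [
    "How do we fight corruption in our cities?",
    "What should we change personally to fight corruption?",
]

def matches(tweet):
    """True if the tweet mentions any trigger keyword."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & KEYWORDS)

def pick_reply(tweet, rng=random):
    """Return an action-oriented prompt for a matching tweet, else None."""
    if matches(tweet):
        return rng.choice(PROMPTS)
    return None
```

An anti-cyberbullying variant would swap in a different keyword set and prompts that nudge bystanders toward reporting or supporting the target, following the same notice-then-ask pattern.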
Another important aspect to keep in mind is that, in most cases, people are more likely to use a reporting feature when it lets them make an explicit plea for help. Currently, Facebook and Instagram offer pre-set options for complaints. However, these may not satisfy someone who feels the options do not describe the incident clearly.
I’ll give you an example: I wished to report a page on Instagram that was posting lewd comments about actresses and encouraging other users to do the same. Sadly, I could not tell Instagram the exact reason why I wished to report the account – and so the page still runs. The next time I’d like to report an account, chances are that since I cannot tell Facebook, Instagram or Twitter the exact reason I want it blocked, I’ll simply let the account be. Nothing happened the last time, so why waste the effort?
It’s time we as a community came together to brainstorm a way to make the internet a safer space. Crusaders have huge amounts of data, but very little of it makes sense. Activists have a vision, but not enough solutions. People have the concern, but not enough motivation.
Only when ordinary people come to a conclusion together can they unite and make the internet no space for hate.