On 11 February 2021, my Facebook account of 13 years was disabled without reason. In an age when life seems to be lived more in the virtual realm than in the real one, I too had been chipping in, albeit far less of late than in the past. However, being disabled without reason or recourse to appeal brought forth a beast that I had not expected to face myself, however much I had read about it in both electronic and print media.
For me, Facebook has been a means to share my experiences and thoughts. It has been a means to stay in touch with people geographically placed in distant lands but share certain threads of memory or association with me.
Since I have never shared anything even remotely problematic or untrue, the whole incident was highly surprising. Initially, I thought that certain ultra-left critics and opponents, mostly from my days in the United Kingdom, were back to call me out over my recent invitation to appear on the BBC's Asian Network programme The Big Debate. For them, an Indic and Dharmic voice on an increasingly leftist channel was not digestible.
In the past, prominent Hindu speaker Shri Jay Lakhani had been hosted on this programme but faced a slightly distasteful experience. However, on closer investigation and consulting with personal sources, it came to light that all the admins of the Facebook page of a society that I had founded in Cambridge called CAMbFIRE had faced the same fate.
The absurd part was that the only even slightly problematic event that may have triggered this was a certain Black Lives Matter event of which I was not even an active organiser. The fact that I had been the President of the society and remained an admin (on Facebook) from my term as President was enough for my account to be disabled.
Had it been something glorious like standing up against an infamous but powerful public figure like Kim Jong Un or inglorious like the rants of Donald Trump on Twitter, it would still have made sense.
This entire incident brought to the fore a bigger problem that we face today — that of big tech, censorship and freedom of speech. Before going forward on this larger theme, I would like to highlight the central event and subject that seems to have been the basis of this move by Facebook.
CAMbFIRE began as a forum for free and open discussion, sans political rigidity, of the socially relevant and politically important subjects of the day. In doing so, we did not flinch from asking tough questions or exploring nuances that others on both sides of the political spectrum might overlook on a given subject of discussion and debate. In January 2020, I left the society and returned only occasionally as an adviser and participant.
Back in the middle of 2020, CAMbFIRE had decided to organise an event on Black Lives Matter. The motivation was the movement for the cause then underway in the United States, which had reached a crescendo after the deplorable death of George Floyd at the hands of the police.
The CAMbFIRE event was oriented to explore the facets of the BLM cause and movement, along with — and this was what created the furore — the question of whether it is Black Lives Matter versus All Lives Matter. In the description of the programme on Facebook, the latter point was posed, becoming the bone of contention. Wave after wave of “activists” unleashed the worst kind of opposition, not with civilised criticism or debate, but plain hate and even abuse (on Twitter).
In all this, I had no hand whatsoever in either the conceptualisation, promotion or actual organisation of this programme and yet was a subject of the hate. Now, as a common citizen, without much contact with the inner workings of Big Tech and without having been given notice or reason for this fairly random act, I can only guess that it was around the reporting of this particular event and the Facebook (CAMbFIRE) page that hosted the event that the whole series of account-disabling took place.
All this happened without anyone knowing that I had reservations about how the programme was framed and conceived; I was targeted, on the basis of the highly flawed idea of “platforming”, simply because I happened to still be an admin on the Facebook page of the society responsible.
Censorship and regulation of debate and discussion are important to ensure that such interactions (or rather, certain positions in such interactions) do not lead to conflagrations or to acts of mental, emotional or physical violence against members of society. However, when this censorship targets those who are not even remotely associated with any such positioning or instigation, that is problematic.
In my case, I have never been one for the All Lives Matter platform. Just as Ad Hominem, Petitio Principii and Ignoratio Elenchi are logical fallacies in a debate, tautology happens to be one too.
Of course all lives matter and the dignity of the individual is important, but does that mean that specific cross-sections of identities and their associated historical baggage or crises must not be discussed? If that were the case, then the truth of death should dissuade us from doing anything at all while living, for all human experiences end in the singularity of death.
Even in the discussion of caste-based reservations, I have stood for the truth of social repression of certain communities, not only historically but even today and very much so, as much as I have stood against the misuse of positive affirmation by those who may not need it.
In today’s world, everything is so binary that people often forget that some do not tread either of the poles so presented. I have previously written about how I find the whole conception of the political binary between the Left and the Right not only absurd but primitive. We live in such a polarised world that people cannot bear to see anyone succeed who does not conform to certain set ideas and notions.
I am yet to receive the notice that we were all supposed to turn into sheep, blindly following one of two (evidently limited) poles. Anybody who stands for India or Hindu values, who does not find communism to be the dream some believe it is, or who speaks against the hypocrisy of those who seek space for dissent while crushing any opposition in the crudest manner possible is not an “unthinking and uninformed Bhakt”, just as not everyone who likes Lenin and Marx is a Maoist or an Urban Naxal — these being some of the terms in wide use currently. I am an Indian proud of my Dharmic roots as well as of a modern outlook on various subjects, and yes, these things can co-exist.
If U.S. President Joe Biden wants to end what he calls an “Uncivil War”, he must begin by cleaning his own backyard. Over the last decade, the position of Big Tech companies has been highly problematic, with various policy reversals, inconsistent enforcement, and shifting, fairly unclear rationales for how they approach content moderation.
There have been both human as well as machine-based errors by Facebook and Twitter in the past. In June 2019, Twitter apologised for suspending accounts critical of the Chinese government before the anniversary of the Tiananmen Square massacre. This turned out to be a case of a mistake made by a system designed to catch fake accounts and spammers.
In June 2019, videos on Hitler uploaded by British history teachers were accidentally flagged by YouTube for hate speech, as per The Guardian. Both the machine-based and the human moderation systems work on term-based flagging, user behaviour and user reporting. Whether certain content stays up or is taken down can depend on a moderator’s interpretation of a single word or phrase.
When one factors in subtleties like cultural context and understanding, this is quicksand that social media sites and Big Tech can often fall into. In 2017, Facebook was criticised by members of the LGBTQ community after their accounts were wrongly suspended for using the word ‘dyke’, as per Wired.
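The Wired incident illustrates the mechanics of the problem. As a purely illustrative sketch (the term list, function name and logic here are my own assumptions, not Facebook's actual system), term-based flagging matches words with no notion of context or intent, so reclaimed self-description, quotation and abuse all trip the same wire:

```python
# Illustrative sketch of naive term-based flagging (hypothetical, not any
# platform's real system): a flag list matched word-by-word cannot tell
# a slur used abusively from the same word used in self-description or
# in quoting an abusive message one has received.
FLAGGED_TERMS = {"dyke"}  # hypothetical flag list

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed term, ignoring all context."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# Both posts are flagged identically, though only one is abusive in intent:
print(naive_flag("Proud dyke and loving it"))  # True — reclaimed usage, wrongly flagged
print(naive_flag("You dyke, get lost"))        # True — abusive usage
```

Only context-aware review can separate the two cases, which is precisely what a ten-second-per-item moderation queue cannot supply.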
This has happened in the past as well, when a New York Post story about an alleged meeting between Joe Biden and Ukrainian energy executive Vadym Pozharskyi, purportedly brokered by Biden’s son Hunter, met immense opposition. While the story’s veracity is debatable, the systematic and brutal lockdown of its distribution is a story in itself.
Former Democratic Party staff member Andy Stone, who later became Facebook’s policy communications manager, said that Facebook would be reducing the distribution of the story. At the same time, Twitter barred users from sharing the story with followers or through direct messages and locked the accounts of people who retweeted it, including that of the then White House Press Secretary Kayleigh McEnany.
In June 2020, Google excluded the website ZeroHedge from its advertisement platform based on “violations” in the comments sections of stories around Black Lives Matter. A month previously, Facebook took down 88 Trump campaign ads that attacked Antifa, the far-left movement.
On the other hand, some users have pointed out that Facebook does not censor only conservatives. Speaking of Black Lives Matter, way back in September 2016, Shaun King, a well-known BLM activist and writer, was temporarily banned for posting a screenshot of an email he had received that used the N-word, in what Facebook regarded as a violation of “community standards”.
As per a report published by National Public Radio (NPR), the world of censorship by Facebook is more convoluted (ironically, in its simplicity) than it may look. Employees of the company revealed that when a user flags content on Facebook, it is sent to a division known as the “Community Operations Team”.
Facebook apparently first tried crowdsourcing platforms such as CrowdFlower and then consulting firms such as Accenture, which helped it assemble a dedicated team of subcontractors. Currently, this team has several thousand employees, with prominent offices in Poland and the Philippines. The catch is that speed is of the essence for these employees, with a worker deciding on a piece of flagged content once every 10 seconds, on average.
This can create a problem when the context or other nuances of a piece of content must be understood, since these employees do not get the time to make real judgements. In many cases, the resolution is fairly random, resting on whatever decision is made in that particular moment. Moreover, due to privacy laws and technical glitches, subcontractors often cannot even see the full context of a piece of content.
In this essay, I have highlighted two distinct issues that afflict our world today. One is that of the arbitrariness and unfairness of big-tech censorship, having faced the same for a rather incorrect association, as highlighted. This has been seen by users across the political spectrum but has lately been used by the Left in its crackdown on opponents.
The other issue is that of the witch-hunt and hounding undertaken by the radicals on either side of the political spectrum (myself having faced that of the ultra-Left in recent years) of anyone who does not conform to their definition of what is “politically correct”.
While the latter can only be countered by the two-pronged approach of proactive politics that transcends dogma and political rigidity, and of calling out and standing firmly against any instance of parochial politicking, the former needs a nuanced and comprehensive plan to ensure that the world of social media and Big Tech is truly fair.
There is a need for a clear and consistent framework for virtual platforms, applied fairly and equally to all users. At the moment, there is a debate around Section 230 of the U.S. Communications Decency Act, which shields social media companies from liability for what their users post. This is seen to give platforms immunity when “moderating objectionable content”. Joe Biden has called for Section 230 to be revoked.
Besides this, several bills that would hold these social media behemoths legally accountable for how they moderate content are circulating in the U.S. Congress, including the PACT Act and the EARN IT Act. In India, the government has focused more closely on Big Tech and social media companies, recently calling on Twitter to block certain accounts after the unfortunate incident of protesters storming the Red Fort in Delhi.
I think the recent rise of Koo, an app made by Indians and supported even by Union Ministers, has been commendable and goes some way towards breaking the monopoly of the West in social media and Big Tech. Although Koo itself has various elements similar to Twitter’s (including the concept of a ReKoo instead of a retweet), it is the first of hopefully many steps involving fundamentally novel tech initiatives that can make a mark internationally and potentially set a new benchmark of fairness.
The contemporary world is increasingly elevating itself to a more information-driven existence, which relies heavily on the virtual domain that is dominated by social media and big-tech. Even as we discover modes of functionality, we must also ensure the operation and preeminence of universal values and ideals in this realm.
And prime among those are liberty and freedom of speech, but even more so the principles of equality and fairness. Censorship and regulation are important in this world, but the misuse of the instruments of moderation is counterproductive to the aims of that very moderation. I am not sure whether Facebook will allow me to present these aspects of my individual case (given that it apparently does not even want the CAMbFIRE admins to appeal) and its misjudgement of the CAMbFIRE association, but I feel it is only right to speak up, so that people can be made aware of this all-too-evident problem of the modern world.
I want to end by quoting Soviet and Russian littérateur and poet Yevgeny Yevtushenko: “When truth is replaced by silence, the silence is a lie.”