The Beginning of an Orwellian Nightmare

In a technology-driven world, governments are struggling to find a balance between national security and individual rights.

Desperate times call for desperate measures

The United Kingdom, a long-term victim of domestic terrorism, has taken an aggressive approach to online surveillance that many, including the United Nations Special Rapporteur, have likened to punishing “thought crime.” Law enforcement agencies now have unfettered access to the online activities of British people, and three new categories of punishable offences carry sentences of up to 15 years in jail. These include “viewing” terrorist propaganda online, “physical entry to designated” areas abroad, and any “reckless expressions” of support for banned groups. With no clear definition of what exactly constitutes the “viewing” of terrorist media online, legal experts question whether the state’s vague conditions can establish criminal intent. Such ambiguity also has the potential to infringe upon a person’s right to information.

The shifting Overton Window

The Overton window is a political theory that refers to the range of policies and actions the general public will accept at a given time. For example, gay marriage moved into the Overton window from an impossible policy position in the 1960s to a socially accepted norm in the 2010s. In an evolving society, value judgements on social and other norms change. The question is whether the meaning of individual rights and personal privacy will also see a paradigm shift.

In this era of never-forgotten data and fast-paced social change, social media has created a global platform for public discourse. Today, the expression of ideas outside the Overton window can easily affect not only you but also your friends, family and co-workers. The fear of real-world repercussions for online political opinions may deter people from freely expressing their views. They may instead choose anonymous forms of expression, such as voting for Trump in 2016 or for Brexit. At present, however, these unexpected outcomes are being brushed aside by the intellectual elite as uneducated or uninformed.

Drawing lines in the sand between free speech and hate speech

In an effort to combat hate speech against religious minorities and the LGBTQ+ community, U.K. police have started identifying and recording “non-crime hate incidents” both online and offline. The guidelines allow these incidents to be recorded without evidence of hateful intent, and the data may be included in the Disclosure and Barring Service’s background checks on employees. In January 2020, police officers entered the workplace of Harry Miller, an ex-police officer and activist, to discuss certain “non-crime hate incidents” that had occurred on his Twitter account. Another woman was arrested at her home for referring to a trans woman as a man in a heated Twitter exchange.

In response to these incidents, legal experts caution that expanding hate speech laws and pre-crime reporting could lead to an Orwellian future for the U.K. Citizens must remain vigilant to ensure that the right to free expression stays protected as the mediums of discourse change.


Hate speech is perceived differently depending on whether it is framed as a criminal or an ethical question

No man is an island in the era of Big Tech

The evolving nature of digital public discourse requires clarity on whether private tech firms act as platforms or publishers. It is their responsibility to uphold the principles of free speech and to take accountability for content that violates copyright or libel laws. Big tech products are becoming increasingly pervasive and inescapable; their reach goes beyond borders, and they have the power to censor content or users they internally deem “hateful”. Between Facebook’s umbrella, Google’s productivity kit, and Amazon’s intuitive personal shopper, their combined impact on human life could easily surpass that of any government.

Political polarisation is also encroaching on the workplace, as seen in the political expression of tech employees through mass action, algorithmic bias and, occasionally, the firing of an employee who does not fall in line with the groupthink. The risk of employees’ political views influencing a company’s operations, products or stakeholders makes big tech even more dangerous, as regulators already struggle to define, verify and enforce transparency in tech products.

Abating controversial discourse with digital exile

The real power of big tech was demonstrated during the coordinated silencing of Alex Jones by Facebook, PayPal, YouTube, Spotify and Apple. Private companies have established a robust digital infrastructure that can have real-world impacts on employment security, access to financial services, personal relationships and physical safety. The virtual self is increasingly dependent on cross-functional private applications and even online public service access points (e.g. ration cards, the NHS, Canadian student loans).


  • The importance of free speech and liberalism must be taught to young people so that they can maintain vigilance against the infringement of these rights online. They must learn to accept diverse points of view, reject the tunnel vision of “safe spaces”, and critically assess information to refute fake news and discriminatory content.

  • The physical and societal risks to the individual for online expression must be mitigated through legislation. The distinctions between thought, speech, intention and action are significant, and the law must reflect those differences. Government surveillance and the commercial data-driven digital infrastructure must have legal oversight.

  • Companies that claim protection under platform status and hold a significant market share must be held accountable, and their status questioned, when they interfere in public discourse through censorship or shadow banning. Human confirmation bias must also be taken into account when setting artificial intelligence controls, to limit overreach in government surveillance programmes or big tech algorithms.


Image Courtesy: Synergia Foundation