The Back Rooms
Caffeine, blue-light fixation, sleeplessness, and the rolling-on of a grueling “prove yourself to the company” schedule of off-the-books work hours had a humble cubicle-dweller wondering whether he was awake or dreaming, or a little of both.
Harsh lighting flickered down a corridor that seemed to intersect with infinite office space smelling of chemical cleaning products: room after room of electric buzzing, low ceilings, and yellow fluorescence like a bright, over-exposed decomposing body. It looked like the office, but he couldn’t get his bearings. It was familiar in some general sense, but alien in every particular.
Just before finding himself here, half his field of vision had jittered between this place and his workstation, the calendar with forest-trail photos, and his scribbled-on agenda cover. The only time he had experienced anything like that was in video games, when the camera slipped through geometry the developers hadn’t properly planned for at a certain angle, and two spaces overlapped on screen. No-clipping, it was called. But was it possible to no-clip out of reality?
As it turns out, it was.
The man now found himself in what Internet forums ominously refer to as ‘The Back Rooms,’ a liminal reality said to be 600 million square miles, largely harvested from the unconscious of late capitalism’s white-collar drones.
And woe to those who, finding themselves here, begin to hear the approach of footsteps.
Fear of Simulacra
The Back Rooms, a popular bit of Internet folklore about which many stories and video projects have been produced in recent years, reminds us of a recurring horror motif: a reality adjacent to ours, familiar but uncanny, and haunted by something malevolent, a dark, somehow mutilating simulacrum. Think of the “Upside Down” in Stranger Things, the underground tunnels in Peele’s Us, or the dreamy facsimile of the waking world in A Nightmare on Elm Street.
In part, the idea is that the surface is conditioned by some dark, hidden version of itself, and that therefore, appearances to the contrary aside, we are not free to play by the rules of the world as it seems: unbeknownst to most people, other, more stringent rules apply. The surface, a benign appearance, hides a predatory intent.
This intuition takes a contemporary form in the widespread fear that well-intentioned, high-tech responses to crises (from global warming to COVID-19) are leading to a techno-dystopia. In this regard, we may consider the potential dangers of AI and ‘digital twinning.’
Digital Twins and AI
A Digital Twin is a virtual representation of a physical system that allows a real-world place or process to be modeled in real time, letting private entities or policy-makers visualize and monitor it, as well as run simulations of possible changes to the system. As with Artificial Intelligence (AI), municipalities are encouraged to incorporate this technology by the EU’s Digital Transformation policy area and the Digital Europe Programme (DEP), especially initiatives concerned with constituting ‘Smart Cities’ (like the Smart Cities Marketplace). There is a great deal of positive potential associated with such measures, as well as a potentially steep price to pay if they are turned towards ideologically nefarious ends.
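For readers who want a concrete picture, here is a minimal sketch of the pattern in Python. Everything in it, the class name, the sensor feed, the crude wait-time formula, is a hypothetical illustration, not a reference to any actual Smart Cities system.

```python
from dataclasses import dataclass, field

@dataclass
class CrossingTwin:
    """A toy 'digital twin' of a pedestrian crossing (hypothetical example).

    State is mirrored from real-world sensor readings; the same state can
    then be run forward under hypothetical changes before anything is
    altered in the physical system.
    """
    signal_seconds: float = 30.0          # current green-light duration
    pedestrians_waiting: int = 0
    history: list = field(default_factory=list)

    def ingest(self, sensor_count: int) -> None:
        """Mirror a moment-by-moment sensor reading into the virtual state."""
        self.pedestrians_waiting = sensor_count
        self.history.append(sensor_count)

    def simulate(self, new_signal_seconds: float) -> float:
        """Run a what-if: estimated average wait under a changed signal.

        A deliberately crude model: average wait is roughly half the
        red phase, scaled up by how crowded the crossing has been.
        """
        avg_demand = sum(self.history) / max(len(self.history), 1)
        red_phase = max(60.0 - new_signal_seconds, 0.0)
        return (red_phase / 2.0) * (1.0 + avg_demand / 50.0)

# Feed the twin live readings, then test a policy change virtually.
twin = CrossingTwin()
for reading in [12, 18, 25, 31]:          # stand-ins for sensor data
    twin.ingest(reading)
print(twin.simulate(new_signal_seconds=40.0))
```

The telling detail is the last line: changes are rehearsed against the model before being applied to the world, which is precisely why who controls the model becomes a political question.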
To the degree that we limit technology and hold tight its reins, such that it is used for specific ends and does not itself define the ends, we may succeed in putting moral a prioris and a vision of human flourishing ahead of the seductive draw of novelty and transformation for their own sake (or, indeed, for the sake of the revolutionary zeal to deconstruct past structures as “oppressive”). Alas, these policies are being pursued by politicians and technical experts with no grounding in virtue ethics, Plato, or St. Thomas.
The optimism with which it is proposed that our real-world environment be fitted with sensors, feeding a virtual model with moment-by-moment information about, for example, pedestrian behavior, is worrying when we consider the parallel push to crunch that data (‘big data’) with AI. The dangers associated with such technologies have been highlighted by Elon Musk and others involved in the sector in an open letter:
Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.
The letter recommends independent review and specifically mentions OpenAI, the U.S. company responsible for ChatGPT, an AI whose ideological and political biases have been widely noted: to cite but one example, it has provided information concerning historical atrocities related to Christianity while informing users that doing the same with respect to other religions would violate its policy.
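To see why the data side of this worries critics, consider a minimal sketch of the ‘moment-by-moment’ pedestrian feed mentioned above (all names and data are hypothetical). The same event stream serves a benign planning view and a surveillance view; all that changes is the key used to group it:

```python
from collections import defaultdict

def aggregate(events):
    """Planning view: foot-traffic counts per zone, no identities involved."""
    counts = defaultdict(int)
    for zone, _device, _hour in events:
        counts[zone] += 1
    return dict(counts)

def profile(events):
    """Surveillance view: the identical data, keyed by device instead."""
    trails = defaultdict(list)
    for zone, device_id, hour in events:
        trails[device_id].append((hour, zone))
    return dict(trails)

# (zone, device id, hour) tuples standing in for sensor events
events = [
    ("plaza", "d41", 8), ("plaza", "d7x", 8),
    ("station", "d41", 9), ("plaza", "d41", 17),
]
print(aggregate(events))   # {'plaza': 3, 'station': 1}
print(profile(events))     # d41's daily route, reconstructed
```

Nothing in the second function requires new sensors or new data; the dark twin of the benign system is one regrouping away.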
Hidden motives behind the automated management of real-world processes, and behind AI and Digital Twins specifically, generate the same unease with which we react to highly realistic deep fakes.
There is a need to stand for the sovereignty of the real over the virtual; to ensure that the instrument remains auxiliary and does not become the master; to harness policy to known ends rather than to a ‘black box’ of financial interests and technical processes; and to place clear-sighted moral intention above supposedly neutral automation.