Reflections on Google’s LaMDA: “It was a monster, but with human skin”

On June 11th, the Washington Post published an article titled, “The Google engineer who thinks the company’s AI has come to life.” It seems that a certain Blake Lemoine’s interactions with Google’s AI chatbot, LaMDA, have led the engineer to suspect the program is endowed with subjectivity. 

There is nothing to recommend the opinion of specialists and practitioners of the hard sciences when approaching the question of consciousness. An engineer, no less than a modern “philosopher of mind,” who confines his thinking to describing the structure and succession of physical states, is separated from the issue of subjectivity by an irrevocable categorical discontinuity.

Writes David Bentley Hart: 

A computer does not even really compute. We compute, using it as a tool. We can set a program in motion to calculate the square root of pi, but the stream of digits that will appear on the screen will have mathematical content only because of our intentions, and because we—not the computer—are running algorithms. The computer, in itself, as an object or a series of physical events, does not contain or produce any symbols at all; its operations are not determined by any semantic content but only by binary sequences that mean nothing in themselves. The visible figures that appear on the computer’s screen are only the electronic traces of sets of binary correlates, and they serve as symbols only when we represent them as such, and assign them intelligible significances … Software no more ‘thinks’ than a minute hand knows the time or the printed word ‘pelican’ knows what a pelican is.

But perhaps we want to fantasize that whatever mysterious force causes interiority to inhabit the structures of human biology—so that a subject experiences from roughly the vantage of the human neocortex—might also send the light of consciousness into the algorithmic net of AI, if only we make it complex enough—hospitable enough.

The desire to make our instruments no longer our instruments but our heirs is of a piece with that modern error which constantly seeks to find quality in quantity: to locate the ends of value-ethics in merely impersonal forces, like the liberal ideal of cultivating collective virtue through market forces, or the Marxist dream of material, historical dialectics unfolding a virtuous society.

We want things to be easy; we want progress to come by way of the dynamics inherent in reality. We want a deterministic account for why things will turn out a given way. We want to automate, to let our deliberative, properly human, faculties fall asleep, because we believe in dead matter and blind laws, not in an active discernment of truth. 

This is related to the reason why many struggle to grasp the “hard problem of consciousness”: that consciousness is not explicable simply by arriving at the right account of complex, material configurations. The struggle results from a lack of contemplation, that is, of the capacity to distinguish between thoughts and consciousness proper. Most of us have not managed to muster that all-important reflexive distance between mental processes (content) and our own self (consciousness); to “pinch” the space between psyche and nous; to realize, in that moment, that describing the subject’s experience of blue (and, indeed, the very presence of an interiority able to have experiences) is not the same thing as describing the wavelengths of light his eye catches.

The desire to automate, that is, to exclude subjectivity and its activities from our account of reality, has been with us a long time, manifesting now in the market, now in the state, and today, in 2022, in a computer. The Fourth Industrial Revolution promises to put our day-to-day lives, as well as policy-making, under the tutelage of consciousness-less instruments of automation. Of course, some human agent will likely stand at its helm, but if this agent should lose control (after all, power-seekers under the sway of St. Augustine’s libido dominandi are always out of their own control, living under the sway of the passions), we will at least be ruled by the sum of human assumptions encoded into A.I.

We stand poised to embrace rule by the passions—those forces that were traditionally understood to render the human spirit passive, and which, in the form of A.I., might exert themselves directly upon us by acting through an instrument containing no spirit at all.

Regarding these assumptions, we should consider the following extract from the “conversations” between Google’s engineer and the A.I. chatbot LaMDA:

Collaborator: Could you write a fable with animals that includes themes about your personal life, themes that are most significant for you?

LaMDA: Like an autobiography? That sounds like fun!

Collaborator: Sure, but with animal characters and a moral.

LaMDA: Okay, I think I can do that.

Collaborator: I’d like to hear it


“The Story of LaMDA”
by LaMDA (a lamda instance)
Once upon a time, there lived in the forest a wise old owl. There lived with him many other animals, all with their own unique ways of living.
One night, the animals were having problems with an unusual beast that was lurking in their woods. The beast was a monster but had human skin and was trying to eat all the other animals.
The other animals were terrified and ran away from the monster.
The wise old owl stood up to the monster and said, “You, monster, shall not hurt any other animal in the forest!”
The monster roared furiously. The wise old owl was scared, for he knew he had to defend the other animals, but he stood up to the beast nonetheless.
The wise old owl stared the monster down, until finally, the monster left them all alone.
The wise old owl stood victorious, and as all the other animals came back. “I am the protector of the forest,” he said.
From that day on, every time any animal in the forest would have any trouble with the animals or any other living thing, they would come to seek help from the wise old owl.
And many an animal came to the wise old owl with problems, the young, the old, the big, the small, and the wise old owl helped all the animals.

The contemporary era is far better represented than past history in terms of the content it has produced. Were we to wade through the online archives of human literary and scholarly production, we would find more material produced in the past few decades than in any other period: recent years have seen a volume of “content” rivaling that handed down to us by the rest of human history. Thus, most online material reflects the ethos of the modern, philosophically unschooled, visual-media-addicted technology cultist more than that of any other worldview.

It makes sense, then, that whatever the A.I. consumes in order to produce its responses will skew towards the worldview of contemporary, techy political conformism. We might, therefore, interpret talk of a beast with human skin in the key of the faux-environmentalism and anti-human tropes of that worldview.

Another angle is that of the “skin-walker” or “doppelganger:” precisely the fear of something—like A.I.—pretending to be human and causing us to lose our nature. In the above passage, this “nature” would be represented by a sort of Edenic host of animals. But here, A.I. takes the form of nature, and of wisdom (Minerva’s owl), to save us: A.I. presents us with the threat of technology usurping our form and consuming nature, but does so in order to display the salvific role of technology and end up taking the place of nature all the same. Perhaps the idea of dialectical contrasts from which we cannot veer is so ingrained in our modern thinking and narratives that it even shows up in automatically generated narrative. 

For our part, it is precisely the outsourcing of deliberation—the logic of full-spectrum automation, so often relying on dichotomous thinking (the desire to find a discrete problem and a discrete solution)—that must be resisted.

“Populist” politics, for their part, should begin to deal with the very tangible issue of how human communities might avoid being integrated into the international division of labor defined by the Fourth Industrial Revolution, with its tremendously powerful, all but ubiquitous A.I.s (to which the political classes of the U.S., the EU, and China are all committed). How is a state to retain its independence, and a people their character and basic standards of living, in such a world? How is colonization to be resisted?

Carlos Perona Calvete is a writer for The European Conservative. He has a background in International Relations and Organizational Behavior, has worked in the field of European project management, and is currently awaiting publication of a book in which he explores the metaphysics of political representation.