The Tragic Seduction of Human-Masked Intelligence
Unlike the steam engine, electricity, or the internet, Artificial Intelligence is hard to see for what it truly is, even for the experts. This is the greatest test we face ahead.
Chatbots based on Large Language Models are only one specific use case of AI. We have other systems that play advanced games or aid the invention of life-saving drugs. These are the silent backbenchers. The flashy chatbots are stealing the show simply because they talk. To be fair, these chatbots are jacks of many trades. Their versatility makes them fit for a wide range of tasks, from an individual creating recipes to a company building its software. Consequently, these chatbots have become the face of AI for all of us.
Interfacing with chatbots through natural language triggers the phenomenon of anthropomorphism. Let us call it human-masking. This is our reflexive tendency to ascribe human qualities, like intent or emotion, to mere things.
True, the human-mask is naturally compelling when an AI gives personal advice on negotiating with a boss. It feels like talking to a friendly former colleague. But this mask also leads people to delegate critical life decisions to chatbots. This was tragically seen in the case of Ben Riley’s father, who lost his life after relying on AI counsel for his cancer. Unfortunately, the number of such tragic incidents continues to rise.
While the human-mask is already deceptive enough for society, expert perspectives only add to the grand confusion. Public discourse currently circles around two dominant camps.
There are experts who drape a human-mask over these machines with excitement. We can call them the Enthusiasts.
Then there are those who fix a human-mask onto chatbots with an unhealthy anxiety. These are the Skeptics.
The Enthusiast camp promotes the technology’s powerful benefits and claims revolutionary potential for it. The catch is that these experts are the tech leaders of AI chatbot companies themselves. The insiders. The people with the biggest stakes in the continued success of AI chatbots in their current form.
LLM-based chatbots have reached a state where they could positively impact virtually every industry. That is an amazing achievement which deserves due recognition and investment. Yet these companies are asking the market, and by extension society, for trillions of dollars more. Never before has so much money been sought from investors in a single year, reports The Economist. This is a massive bet on one specific version of the future, and such a bet needs massive promotion. The human-mask comes in handy here.
This camp claims that chatbots will eventually replace all human professionals, especially the white-collar workforce. However hyperbolic that perspective may be, it cannot be dismissed as mere noise. These experts are, after all, the innovators, and they deserve a seat at the table. But in a sense, these corporate giants already own a big part of the table itself.
Then there is the second camp of Skeptics. They recognize the utility of AI in some areas but remain unsure of its ultimate benefit to humanity. Some even venture into the realm of doomsday predictions. This group includes thought leaders like Yuval Noah Harari, who ponder deep societal questions. It even includes Professor Geoffrey Hinton, widely considered one of the godfathers of the field. These experts fix a human-mask onto the technology with a sense of existential dread.
It is valuable to think outside the box. It is even more valuable when done by people who are actually out of the box.
Because these thought leaders exist beyond the tech industry, their perspective is not mere noise. It is a genuine warning. Yet their voice unfortunately strengthens the human-mask even more.
The Enthusiasts and the Skeptics both demand a hearing. That is only fair. But because these are rival perspectives on AI’s potential, their clash gives the false signal of a balanced fight, turning the volume down on a third, vital perspective.
This third camp consists of experts who strip away the human-mask to reveal the probabilistic language generation machine underneath. We can call them the Realists.
They highlight the immense utility of AI without the need for a mask. They ask us to notice the genuine gold inside the box while ignoring the distracting glitter of the conversational facade.
Professor Michael Wooldridge of Oxford is a leading voice in this camp. As the Ashall Professor of Foundations of AI, his speeches and interviews provide a clear articulation of this view. He is not merely an academic leader. A Fellow of the Association for the Advancement of Artificial Intelligence, he has a long history of collaborating with the very firms driving the industry.
This camp also includes Professor Yann LeCun, who stands as a foundational counterpart to the more anxious Professor Hinton. Another godfather of the field, LeCun has led hands-on research teams for decades. He was a key figure behind the neural networks that automated check reading in banking in the 1990s. As the former chief scientist of AI research at Meta, he famously unmasked the technology with a blunt comparison. He noted that a house cat has more common sense and world-understanding than the biggest large language model.
This functional view is also echoed by a stalwart of the industry establishment, Eric Schmidt. When interviewers pressed the former Google CEO on what to do if AI becomes a Terminator coming for human lives, he told them to just switch it off.
All these experts see AI as a suite of super-powered machines. No human-mask is required. They view the technology as a tool to be utilized appropriately rather than a creature to be cheered or feared.
The professors and tech leaders holding this view are giants in their own right. They certainly have a place in the public discourse, but not a large enough one. This is partly due to the market aspirations of industry titans, partly because of the existential concerns of thought leaders, and finally because human-masking is a naturally attractive story for all of us.
The mask can be dangerous for individual users. It can be equally dangerous on a broader scale when it deceives policymakers. Especially at a juncture where jobs and livelihoods may be affected, the human-mask serves as a great distraction.
The polarized camps argue whether AI will eventually replace all jobs, a premise I have argued against previously. But to have a heavy impact on society, AI does not have to replace every human job. Any amount of job loss affects real lives. When that loss is broader and faster than before, the consequences are even worse.
It is vital that policymakers create appropriate safety nets for the affected parts of society. But to make the right policies, we must view the technology for what it actually is.
We must turn up the volume for the third camp of experts who strip away the human-mask from AI chatbots. After all, the future is defined by the prevailing perspectives.
We must remember that the same neural network technology sits at the heart of AI chatbots and the systems that predict protein structures for life-saving medicines. The core engine is the same. Only the outer layer and the interface change.
True, the experts are still figuring out the intricacies of how AI generates a specific output. This mystery exists partly because of the sheer scale these systems reach in the digital world.
But at a high level, it is reasonable to look at it like this.
When we feed astronomical amounts of data about protein structures into the neural network black box and poke it, the golden machine yields possible protein structures. Only scientists can interpret that output.
When we feed planets of human-written text into the same box and poke it, the golden machine yields possible text outputs. Any human can understand that output.
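The shared-engine idea can be shown in miniature. To be clear, the sketch below is not a neural network; it is a deliberately tiny stand-in (a bigram next-symbol counter), and both "corpora" are made up for illustration. But it captures the point: the exact same machinery, fed protein-style letters or human-written text, yields a plausible next symbol in each domain. Only the data, and who can read the output, changes.

```python
from collections import Counter, defaultdict

def train(sequences):
    """Count symbol-to-next-symbol transitions: a toy stand-in
    for the statistical core inside the 'black box'."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict(counts, symbol):
    """'Poke the box': return the likeliest next symbol seen in training."""
    return counts[symbol].most_common(1)[0][0]

# The same box, fed two different kinds of data (both corpora invented):
protein_model = train(["MKTAYIAK", "MKTAYQER"])    # amino-acid letters
text_model    = train(["the cat sat", "the car"])  # human-written text

print(predict(protein_model, "K"))  # "T" — meaningful only to a scientist
print(predict(text_model, "t"))     # "h" — readable by any human
```

The familiarity of the second output, and the opacity of the first, is exactly the asymmetry the essay describes.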
This familiarity is what makes us readily put a human-mask on the AI chatbot but not on the AI that creates life-saving drugs. We end up deceiving ourselves about what AI really is. Isn’t that the real tragedy?
— sAb
(RECORD 005)



