War of Progress: AI vs Our Jobs
On a random day in early 2026, I stand in a queue of software professionals, reading a headline. The Managing Director of the International Monetary Fund recently warned in Davos that artificial intelligence is unleashing a tsunami across global labor markets. She said it would affect forty percent of jobs worldwide. My sister is standing next to me in this same line, surrounded by a dreamy, dark atmosphere. The long queue leads to the altar of automation, where professionals are sacrificed! We software engineers are asked to go to the front, and for some reason, radiologists are walking in the opposite direction with muted smiles. How did my sister and I end up here?
While my elder sister studied computer science, I raided her bookshelf frequently, looking for books about coding or design. Eventually, a heavy book, thick as a pillow, anchored my curiosity and tagged along with me to bed. I slept with my head against its pages for many nights, hoping for an osmosis that never came. It was a dense roadmap for mimicking the human mind, and that was my first encounter with artificial intelligence. Eventually, I followed my sister into the software field, a path that pushed us out of our home city, Chennai, to opposite sides of the globe, into professions helping to maintain the hidden architecture of telecom giants and state infrastructures, with software.
My formal entry into AI began in 2019, when I enrolled in a remote Stanford certification in Data Science. While I was snailing through the basics in the quiet dawns of my software job, a revolution was dismantling the status quo. It feels as though I left my desk to grab a coffee only to return and find Google announcing that AI already writes more than a quarter of its new software code as of 2026. By the time I finish the coffee, Claude Code’s agent teams are writing the code while we take a nap.
This isn’t just a shift in the software industry. Experts project the impact will reach the entire job market, while some tech leaders even predict a total wipeout of white-collar work. Stuart Russell, the man who co-authored that pillow book and educated generations of AI scientists and software engineers, was recently named by TIME as one of the most influential figures in the field of AI. And at a recent event, he warned that we should “consider the possibility” that the bus of humanity is headed toward a cliff, the steering wheel is missing, and the driver is blindfolded.
For decades, we software engineers meticulously built the altar of automation, sacrificing every kind of professional out there. Now, in the very same queue, we are asked to go to the front. I want to look closer. I have more than just skin in this game—my head.
At the Gates: Know thy Challenger
Artificial intelligence is the holy grail of computer science, and the field of machine learning is a path to get us there. But how does a machine learn?
Learning Road
Consider a simple engineering problem: a small town must decide the width of a new access road. Space is tight, and they need a design that accommodates most vehicles. They could survey every vehicle in town, or they could try a more organic experiment: lay down a temporary road of soft, semi-solid clay, draw a single central guide line, and put up a signboard redirecting all cars into the clay for a week. On the seventh day, the clay will have recorded the impressions left by the tires. If most residents drive small sedans, the deepest grooves will appear at that width.
The success of this clay road depends on the crowd. Ten cars leave only a faint trail, but a thousand cars carve a valuable dent. If the town’s fishers, with their wide salt-crusted trailers, aren’t told about the experiment and it’s off-season, the final road will be unfair. If a heavy bus wanders through on a rainy Tuesday, it spoils the clay. If a neighbor prefers to keep their tracks private, they must be allowed to bypass. No questions asked. Consent keeps the productive experiment from morphing into exploitation. Finally, we can say the clay “learned” the common widths of cars.
In essence, with a scientific approach,
we dented the clay with things available in huge amounts and variety, to make it useful for us.
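For the programmers in the queue, the whole clay-road experiment fits in a few lines. A minimal sketch, assuming a made-up week of tire-track widths in metres; the “learning” here is nothing more than counting impressions:

```python
# A toy clay road: the tire widths below are invented for illustration.
from collections import Counter

# Impressions recorded over the week (width of each tire track, in metres)
tire_widths = [1.7, 1.7, 1.8, 1.7, 2.4, 1.8, 1.7, 1.6, 1.7, 2.4]

# The clay "learns" by accumulating dents: the deepest groove is
# simply the most frequent width.
grooves = Counter(tire_widths)
deepest_groove, depth = grooves.most_common(1)[0]

print(deepest_groove, depth)  # → 1.7 5 (the sedans carved the deepest groove)
```

With ten cars the counts barely separate; with a thousand, the groove is unmistakable, which is exactly the crowd-dependence described above.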
That was the real world. Constrained by physics.
The digital world, powered by silicon chips, is not tethered to the weight of tires or the friction of earth. Here, we can recolor the body, swap the engine, or sprout a wing on the car in seconds. We can create millions of copies and send them to Munich, Chennai, and Cape Town simultaneously, all at near-zero cost, and all while holding a coffee in one hand.
But there is a catch. We can’t drive these cars in the real world.
So what?
Super Power Clay Ball
The possibilities of the digital world were too great to ignore, so we decided to translate our world into digits: accounting sheets, movies, and architectural blueprints. Then we could process them with surreal capabilities and translate the results back into our world.
And?
Yes, we set out to create a digital equivalent of our learning-clay.
While we experimented with everything from rigid logical rules to elegant equations, one approach won by a landslide: imitating the brains of living beings, a digital architecture we call Neural Networks.
Whether they truly mimic the brain remains a point of high-stakes debate. Yann LeCun says no. Geoffrey Hinton says yes. Both are Turing Award winners. As far as I can follow, I find myself aligning with LeCun here.
Whether they mimic the biology exactly or not, one point seems reasonably clear. Artificial intelligence need not be shackled to the performance limitations of its biological inspiration. After all, we built flying machines that surpassed the speed of sound, a feat that millions of years of organic evolution never quite managed.
The Neural Network was shaped over decades by pioneers like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, through significant phases. But this steadily moving giant truck was supercharged into a high-octane sprint by the silicon revolution of GPUs (graphics processing units) in the 2000s.
If the clay road in our small town learned one feature, the width, a digital twin powered by GPUs can now learn millions. Layer in weight, speed, tire pressure, and temperature, possibly every touch point of that car’s existence. The result of that decades-long labor by the finest minds, with the necessary sprinkle of serendipity, is a super powerful digital clay ball capable of tracing millions of distinct features from every digital thing we run into it.
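Scaled down from millions of features to two, the digital clay is a single adjustable neuron nudged by every passing car. A minimal sketch in plain Python; the cars, the hidden “clearance” rule (0.5 × width + 0.1 × weight), and the learning rate are all invented for illustration:

```python
# A toy digital clay ball: one neuron, two features (width, weight).
# All data and the hidden rule are made up for illustration.
cars = [(1.6, 1.2), (1.8, 1.5), (2.4, 3.0), (1.7, 1.3)]   # (width m, weight t)
clearance = [0.5 * w + 0.1 * t for (w, t) in cars]         # hidden rule to learn

w1, w2 = 0.0, 0.0   # the neuron's adjustable "dents"
lr = 0.05           # how hard each car presses into the clay

for _ in range(5000):                 # every pass is another car over the road
    for (x1, x2), target in zip(cars, clearance):
        pred = w1 * x1 + w2 * x2      # the clay's current guess
        err = pred - target
        w1 -= lr * err * x1           # nudge each weight against the error
        w2 -= lr * err * x2

print(round(w1, 2), round(w2, 2))     # → 0.5 0.1, the hidden rule recovered
```

GPUs earn their keep by running billions of these tiny nudges in parallel, across millions of dials instead of two.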
We got our cosmic clay ball, but now we need things in astronomic proportions to run into it. Digital things. I hear you: yes, redirecting the million cars we sent to Chennai into the clay would be a scene out of a Spielberg movie. But let me remind you of something. There is nothing interesting about identical things except the first one.
Our learning-clays thrive only on variety, the sedan, the hatchback, and the salt-crusted trailer of the fishers.
Data, Data, Data!
We can’t make bricks without clay. Wait. I wanted to say, we can’t make use of our cosmic learning-clay-ball without data. Vast amounts of digital data. Diversity is crucial. Where can we get it?
Let us scroll up to the early 1990s…
Steve Jobs, who was fired from his own company, Apple, started his next company: NeXT Computer. Yes.
Steve Jobs and team at NeXT created an elegant black cube, a beautiful desktop computer that combined innovative software with incredibly powerful hardware, a true masterpiece of engineering and design. It failed in the market.
But Sir Tim Berners-Lee found that he could do things much faster on the NeXT computer, with its intuitive interface and object-oriented libraries, than on other computers, and he used it to create the World Wide Web. Paving the way for our Internet.
Later… baby internet, Mosaic Browser, Yahoo, Java, Microsoft Windows 95, Google, bigger internet, iMac, Dotcom boom, iPods, Google Maps, Facebook, much bigger internet… 2007.
Steve Jobs… Yes the same guy, not Jr… Yes he is back and heading Apple.
Steve Jobs, Jony Ive, and team at Apple created an elegant pebble, a tiny pocket computer that combined innovative software with incredibly powerful hardware, a true masterpiece of engineering and design. It succeeded in the market.
Like, in a big way.
iPhones, paving the way for its mimics, rained gasoline on the already spreading forest fire: The Internet.
It revolutionized the mobile interface, placing the grand world of the internet in our palms. It’s all from our pockets now. Facebook, Twitter, Reddit, Blogs, comments... billions of us. Data, Data, Data!
Planets of public data are for everyone to see, appreciate, and process with our human brains. But.
Is it okay to process our public data with powerful digital brains?
Well, that is how we built search engines like Yahoo and Google that acted as gateways, connecting the audience with the knowledge creators and artists.
Okay. No, not okay. Wait.
Is it fair to process our public data with super powerful digital AI clay ball brains that actually cut the connection between the knowledge creators, artists, and audience?
Our lawmakers, to whom we temporarily entrust the power to shape our destiny, should figure that out.
But.
To get back to our story, of the cosmic digital clay ball:
The tech companies ran planets of our texts into it. Boom.
Einstein and Mussolini
We got an AI chatbot.
“Hello! I am your friendly assistant.
Your recent shopping patterns suggest you are pregnant. Do you want a name for your baby girl?”…??!!!!!
…!!!!??!!
But…!!!!!!?? how does a digital ball of clay begin to speak intelligently? Most of the time, anyway. We are still figuring that out. But we know this: a piece of written text is a piece of persisted human intelligence. At least, a lot of them are. We shoveled planets of this into the learning-clay. It was an engineering feat of unthinkable proportions, which I am not qualified to explain in depth.
But we can all marvel when the superpower clay ball translates Taylor Swift’s lyrics into Shakespearean English or explains quantum mechanics as a bedtime story. And have you ever met someone who has passionately read each and every word of their “heroes” Mandela, Einstein, and Mussolini? If you ever do, run!
Though the AI chatbots have processed all three in mind-bending digital ways, my engineer friends have sweated to check and patch the clay repeatedly, to ensure that the Mussolini in the clay doesn’t leak into the light. DISCLAIMER: still not fully guaranteed, as clarified in the disclaimer text of all chatbots.
What is the point of being gigantically intelligent without an inherent moral compass?
But wait, is this superpower AI clay ball intelligent in the same way as Einstein?
Expert thinkers of the field, like Oxford’s Professor Michael Wooldridge, disagree. In his recent Royal Society Michael Faraday Prize Lecture in 2026, he dismantled the illusion of machine intelligence in AI chatbots while acknowledging their raw, undeniable power.
Talking about AI chatbots/LLMs, he says, “They are statistical models of language that can approximate patterns of reasoning, planning, and problem solving based on [the astronomical] training data”.
That was a mouthful.
He also said, “We got these things which we simply don’t fully understand. They are on the one hand remarkable, but they are on the other hand extremely weird”.
And that definition also applies to the childhood memory of my uncle.
Still, we have to give credit to my uncle’s brilliance. Unlike the chatbot, which needs planets of text, all my uncle needed was a bunch of books and a bottle of whisky.
But we missed the most vital realization about this superpower: what kind of text has perfect consistency, zero grammatical errors, and exists in planetary quantities?
The text of a computer code. Boom!…
—— ? ?
Wait. It doesn’t always just work. A lot of stunning engineering work is still needed.
Yes, now? Boom!
The Frontline: Software Engineers
Vanakkam!
I was passionately explaining all this with just the right timing, making my returning radiologist friend chuckle loudly while we caught up in that dark altar queue. My sister took a deep breath and looked away. Hold on, there is some kind of weird siren sounding in the background, but at least the line is moving. I found myself recounting the amazing journey it has been, from the tiny ancient perceptron to AlexNet to ChatGPT. This is a crowning achievement of humanity. A generational collaboration of scientists, innovators, and software engineers.
“Software engineers!”
[ weird siren sounds again ]
In February 2026, Anthropic built a fully functional C compiler, a kind of software, from scratch with a swarm of sixteen autonomous AI coding agents. This is amazing. A compiler is a peculiar and well-contained use case, but the feat remains staggering.
As a quick digression, we also have to genuinely appreciate the fact that Anthropic patched their AI clay ball with the crowbar of a moral constitution, exclusively written for it.
We are still in the honeymoon phase with these digital agent swarms. Early users are reporting the same problems we have seen with chatbots from the beginning, like hallucinations, though they occur less often. It is reasonable to assume that with multiple agents, even small individual problems can propagate faster. But this is a beginning.
The imagined future is one where any one of us can just type a prompt to ask for an inventory management application for a car factory. A swarm of coding agents will tirelessly prompt each other, in a closed loop, to create our ultra-modern software. While we walk away to wash the dishes.
What if a hundred coding agents can work together to deliver a software product tomorrow? But first: how do human engineers work together to deliver software today?
I have had the opportunity to look at this closely as a team player, and a team lead, over many, many years. And, as critics of coding agents already point out, a software team does not just create the software. We own it. I want to go deeper.
What does it mean to “own” a piece of software, at an individual level?
I think of professional ownership, of a piece of work, as a series of three Dials and two Vows.
Dial of Understanding: Guarantee you can add something without breaking it, or explain why not, either by yourself or by redirecting to an accountable owner.
Dial of Anchoring: Acknowledge and credit the ownership of others when you consume what they own.
Dial of Containment: Establish checks, guardrails, and a failure radius.
Vow to be transparent: Assure what it can do, what it can’t, and what you don’t know.
Vow to suffer: As a team, accept professional consequences for your assurances within reasonable tolerance, or individually, in an extreme, worst-case scenario.
The dials are complementary, ideally turned to peak or at least high enough to take the vows.
I have seen my team, myself, as well as other teams compromise the dials and mess up the vows, crumbling our stack of cards. But we started again.
Once the above are ensured, you can own that piece of software. But ownership does not stop at the individual or one team. It scales. Each engineer owns their piece and cascades that to the lead. The lead ensures those fit together and, in turn, cascades that collective ownership to further stakeholders, and the Lego blocks build on. Note: ownership here is only about software. Only about the work and the things that go into it. People cannot own people.
So what if a hundred AI coding agents work together tomorrow to deliver a software product? Can they own it? They may turn the three Dials of ownership, they may take the vow of transparency or at least give a productive illusion of doing these four. But can they vow to suffer? Can they suffer?
Battle Ground: The Interior Lines
Zoom out of software, what about our world all around?
The Dials and Vows of ownership above, isn’t that true for all professions of humanity?
Consider a doctor who owns the “good health” of a patient for a window of time.
The doctor knows that adding a specific medication into the patient’s body, the health system, will not “break” the patient’s “good health”. — Dial of Understanding (High)
They trust the lab report and consume it because they know the technician “owns” that specific data point. — The Dial of Anchoring (High)
For a simple cough, where the Dial of Understanding is high, the patient doesn’t need a hospital bed or a restricted diet. The “good health” here doesn’t demand strict containment. — The Dial of Containment (Low)
A general practitioner holds the Dials and Vows of ownership for a cough, but for serious ear issues, they cannot. Instead, they refer you to an ENT (Ear, Nose, and Throat) specialist who can own the health of your ears.
Isn’t this how we built our grand Lego world? By connecting these little brain-heart-blocks called humans with the hinge of ownership? Isn’t this how we touched the moon?
Isn’t that true for press, academia, government, businesses and every other human organization?
A junior lawyer owns a single brief, cascading it to their partner, while a driver owns the safe drive to the destination anchoring it to the Maps App and cascading it to the passenger. Isn’t that true?
Our artists own their creation, cascading it to their muse, or to the world, for eternity.
After all, what is the point of a poem when there is no lover? At least one lover?
Yes, the AI can write a poem imitating Eliot, with a bunch of metaphors about the “concept” of a father wearing a blue suit. Will it know the wrinkles of the veshtis my father wore all his life?
These AI clay balls can be utilized in all our professions, yes, but can they own a small piece of work?
Without having a head in the game, without suffering consequences, the AI clay ball cannot have the hinge of ownership to replace a single piece of brain-heart-block in our human lego world. Right?
But.
Suffering needs consciousness, say the philosophers.
Can AI become conscious?
Let’s go find shades under a tree! Please grab a cup of coffee.
Banyan Tree before the Battle: Matrix Dream
Yes. This coffee. What double? I need Grande! The big cup. Of course I know!
Will this super power AI clay ball ever become conscious with more data, more resources, and better engineering?
First, where does our consciousness come from? Our great-grandparents asked that thousands of years ago, sitting under a banyan tree. Last I checked with our philosopher-friends in jumpsuits? Nope. The question still stands.
But, at least you know you are conscious. Have you heard about the phrase “I think therefore you are”? No, I haven’t. Then how do you know if I am conscious?
So, if I look like a human, walk like a human, and talk like a duck, you can do nothing but trust that I am conscious.
Even if this is all a matrix dream, this is the only one we got. My friend. We got to keep it. Keep it good.
Acknowledging another’s consciousness is the rarest of existential suspended judgments, a recognition we mutually grant each other. It is this that gives us the instinct to hit the brakes when a human figure slips into the road, but not for a small tumbling piece of wood. Should we ever grant that recognition to a superpower clay ball?
Even if we wrap that clay ball in a human skin and skeleton? Wait. Should we even put this AI clay ball into a human frame? Isn’t that equally dangerous, like them “gaining” a consciousness we can’t verify anyway? Food for thought.
One more coffee, please. The same one. What? Why should we bring nationality in everything? Aren’t we all humans first? Who cares if the coffee is Irish. I… Love it! :)
[ Author looks around, takes a deep breath of the pleasant air ]
wow…! :)
Now. Set consciousness aside. Can AI just imitate suffering? The same way bots in video games do, by losing virtual gold coins. Yes, the current AI coding agents reportedly do it already with reward mechanisms, with carrots and sticks. But. A digital carrot and a digital stick. In the surreal world. Where Super Mario has 3 lives and a restart button.
Whatever the AI creates in the digital world of looping lives, if we want to use it in our real world with just one life, we should own it. Right?
So what if we can stack up multiple layers of dancing cards?
Hold on. I was supposed to ask a different question. I think, I think it was the coffee. Got it.
So what if we can stack up multiple layers of AI coding agents to create a software in the digital world?
Before we use that in our real world we gather teams of human software eng… okay, you know where I am going. Humans must own every loop, not just be “in” some.
Too much philosophy. AI in its current form has existed for years. What do we actually see?
And that, that will be our last section before we warp pup… my goodness! The Coffee!
The Battle: What is the Battle?
AI is most successful in chemistry and biotech, solving problems that have baffled us for decades, with the potential to lift up millions of lives through new medicines and cures. Take AlphaFold AI from Google for example. It is undeniably the gold standard of the field, even winning a Nobel Prize for its creators. So, how should we read this? AlphaFold predicts a possible protein structure. But then what?
It is our human scientists, researchers, and technicians who sweat in a wet lab to verify and validate the output thrown out by the AI machine, relying on their hard-won human intellectual skills to decide if the result is a breakthrough or a hallucination. Discovery takes decades. Validation takes months. So isn’t this a perfect fit for AI use, where the dials of ownership can be turned to peak by our human experts?
Wait a minute! We don’t bother asking if AlphaFold AI is conscious. Just because it doesn’t speak?!
What about AI chatbots? Eat one small rock per day to get the needed minerals in your body.
A generative AI model in its foundational stage recommended that. But no professional dietician worth their salt could own it. Yes, those were the early days for AI. Back then, it was like holding a gun that threw diamonds eighty percent of the time. Now it is ninety-two percent. It may get better. Okay, that’s too harsh and too Tarantino. What about a short Wes Anderson clip?
Imagine you asked your sister to solve your math assignment over the weekend. You clean her bookshelf and arrange it with perfect symmetry. “Nemo, don’t move! Don’t move! You’ll never get out of there yourself. I’ll do it.”—The television. Your watery eyes are glued to the TV on one side of the frame, while your sister adjusts her glasses looking at your notebook on the other. Pleasant music from the Modern Mozart of Madras, A. R. Rahman, interjects. Everything is colorful, pastel, and pleasing. She gives you the solution just before you leave for school on Monday morning. But then what?
You can show it off in front of schoolmates, but only if you can follow the logic behind her brilliance, or at least that specific solution, to shield yourself from the cross-questioning of the math teacher, who could never believe that you solved it.
If your sister’s intelligence was behind the math solution, these probabilistic language-generation machines, our AI clay balls, carry the billions of lived human intelligences we dented them with. Yes, they do the grand, beautiful criss-crossing between Leonardo da Vinci, Richard Feynman, and Lady Diana. But we can evolve progressively only as far as we “own” their answers.
AI is spitting out answers from our pockets already. They are starting to flood our offices, labs, and studios, pouring out decisions, solutions, and even finished artifacts awaiting a human expert’s input. Extremely useful. Extremely valuable. They will become as powerful as the energy, resources, and data we are ready to bargain away. But it is totally upon us how we place these superpower clay balls in human hands and supercharge our existing human organizations, with each and every one of us owning our parts, our grand Lego structures intact. This we owe to the future generation.
To get back to that dinosaur in our room:
Will these cosmically massive, super powerful, digital AI clay balls be able to kill humanity’s sense of purpose, our jobs?
Possibly not. But.
Our failure to understand them will.
Our failure to contain them will.
Our failure to own them will.
— sAb
(RECORD 002)
I want to thank my friends and family listed below, for reading my drafts, offering critiques, and keeping me motivated. They helped me truly “own” this piece of work.
Alphabetical list:
Christopher Gebray [ Thanks for the encouraging feedback and for being a constant inspiration through your arts. :) ]
Gibu George [ Ryan’s dad. Thanks for the talks, games, and the Irish coffee ‘accident.’ Let’s meet more! :) ]
Hari Krishnan [ Dad of Manthrini & Mayuresh. Partner of Vidhya. From ‘falling-from-bicycle’ days, our talks continue forever. Your feedback meant a lot!]
Kerstin Haack [ Thanks for the ‘three-color’ highlight on the draft—Good, Okay, Confusing. And your valuable critiques and motivating words!]
Kishorekumar Suryaparakash [ Partner of Nivedha. From after-school snacks to world affairs, our talk never ends. I still owe you 10 Rs! :). Thanks for the valuable feedback!]
Manickathai Muthiah [ Younger sister: thanks for watching Finding Nemo with me and for the argument on why something didn’t work in the draft. I fixed them! ]
Ron Watson [ Thanks for the surgical feedback on those specific sentences and for the huge boost of encouragement! ]
Sivasankari Muthiah [ Elder sister: mentioned in essay. A constant inspiration since childhood. (Though I’m still better at math puzzles! :)) ]
Sreenivaasan Gajapathy [ Straightforward words since school days: ‘This works, this doesn’t.’ Thanks for always keeping it real! :) ]
Sudheer Seran [ Thanks for the long AI and tech calls. Run, fall, fly. This is you from school days. Continue :) ]
Supriya Sawane [ Ryan’s mom. Office desk mates to friends. —Please read comments for Gibu above— List is Alphabetical. :) Let’s meet more! ]
Vinod Balachandran [ First close friend, of life and for life. Roaming the world with partner Mala (Also school friend. :) Wow! ) Your feedback meant a lot! ]
Vivek Kidambi [ School lunch talks continue. Thanks for the feedback on the next essay draft—it fueled this one! :)]
Karthick Saravanan, Bharani Chandran, Shakthi Premkumar: [ You guys didn’t do much for this essay directly—but the list feels broken without your names! :)]
Last but not least: Mom and Dad. Who didn’t help with a single word, but made me write the whole essay.



