On February 9, 2026, Matt Shumer, co-founder and CEO of OthersideAI, published a blog post titled Something Big Is Happening. It spread with the velocity of a rumor that turns out to be true. His argument was stark: coding was the canary in the coal mine. What software workers had watched happen in their own field, AI moving from helpful tool to serious substitute, was about to happen across law, finance, medicine, accounting, consulting, writing, and design.

Fortune’s Jeremy Kahn pushed back, arguing that Shumer was assuming more full-task automation than current evidence supports. Most knowledge work happens inside messy organizations full of politics, relationships, ambiguity, and judgment. That is a different problem from bounded software tasks.
Both views are useful, but neither is new; each has been repeated countless times. I think the replacement frame misses something important. The real question is not simply whether AI will replace people. It is what AI can help people become.
Last year I wrote an essay called Having the Right Brain for the AI Age. It drew on two ideas. The first came from Iain McGilchrist’s The Master and His Emissary, which describes the right hemisphere as the master of context and meaning, while the left hemisphere is the emissary that executes with precision. The second came from Garry Kasparov’s idea of Advanced Chess, where humans and chess engines compete together. A reasonably skilled human who knew how to guide the machine could outperform a powerful standalone algorithm.
That became my frame for AI. I saw it as a vast collective left brain: extraordinary at execution, analysis, iteration, and synthesis, but still lacking intent. It does not form a worldview, establish priorities, or create meaning. It is a supremely capable emissary waiting for a master.
Over the holidays, I tested this idea by using Claude to build what I called an AI Team Member, a service that could participate inside our Trello workflow. I had not done serious software development in decades. Without AI, the alternative was not slower development. It was no development.
The experience was exhilarating, but also revealing. The AI generated useful code, but it also went off the rails on occasion, forgetting corrections from previous sessions and inventing variable names when it lacked access to existing code. I began to think of the film Memento, in which the main character, Leonard Shelby, cannot form new memories and survives only by relying on external artifacts.
The AI was Leonard Shelby. I had to provide the tattoos.
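In software terms, the "tattoos" are persistent context artifacts: durable notes the human curates and feeds back to the AI at the start of every session. A minimal sketch of that pattern, with the file name and structure purely illustrative rather than taken from the original project:

```python
import json
from pathlib import Path

# Hypothetical persistent "tattoo" file that outlives any single AI session.
CONTEXT_FILE = Path("project_context.json")

def load_context() -> dict:
    """Load the durable facts the AI must re-read at the start of every session."""
    if CONTEXT_FILE.exists():
        return json.loads(CONTEXT_FILE.read_text())
    return {"corrections": []}

def record_correction(note: str) -> None:
    """Append a correction so it survives the AI's session-boundary amnesia."""
    ctx = load_context()
    ctx["corrections"].append(note)
    CONTEXT_FILE.write_text(json.dumps(ctx, indent=2))

def build_session_preamble() -> str:
    """Render the accumulated context as text to prepend to each new session."""
    ctx = load_context()
    lines = ["Durable project context (re-read before answering):"]
    lines += [f"- {c}" for c in ctx["corrections"]]
    return "\n".join(lines)
```

The mechanism matters less than the discipline: the human decides which corrections are worth tattooing, and the AI never starts a session without them.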
That realization led me to a framework I now think of as mold and material. In AI-augmented work, the human crafts the mold: the boundaries, constraints, architecture, standards, and intent. The AI fills the mold with generated material. When the mold is precise, the result can be extraordinary. When the mold has gaps, the material leaks.
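One way to make the mold concrete: the human writes constraints as executable checks, and AI-generated material must pass through them before it is accepted. A toy sketch, where the specific rules are illustrative assumptions, not the actual standards from the project:

```python
import re

# The "mold": human-authored constraints that generated material must satisfy.
MOLD_RULES = [
    ("snake_case names only", lambda code: not re.search(r"\bdef [a-z]+[A-Z]", code)),
    ("no bare except", lambda code: "except:" not in code),
    ("has a docstring", lambda code: '"""' in code),
]

def fits_mold(material: str) -> list[str]:
    """Return the names of the mold rules the generated material violates."""
    return [name for name, check in MOLD_RULES if not check(material)]

good = 'def load_data():\n    """Read input."""\n    return []\n'
bad = 'def loadData():\n    try:\n        pass\n    except:\n        pass\n'
```

When the checks are precise, leaks are caught mechanically; when a rule is missing, the material leaks exactly where the mold has a gap.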
More importantly, I wasn’t replaced; I was becoming something different. I became the Vision Holder, defining what we were building and why; the Mold Architect, shaping the structures that guided the AI; the Quality Guardian, reviewing and validating the output; and the Context Curator, ensuring the AI had enough memory to work coherently across sessions.
Those roles are not what people traditionally think of as coding. Yet without them, the project would quietly drift into failure. That is the pattern I suspect we will see across many fields: AI will take over more execution, while human value migrates toward intent, context, judgment, standards, and coaching.
Then I read Peak, by Anders Ericsson and Robert Pool, and the idea sharpened further. Ericsson is best known for the research behind Malcolm Gladwell’s popularized 10,000-hour rule, though Peak makes clear that the number itself is not the point. Mastery does not come from repetition alone. It comes from deliberate practice.
Deliberate practice requires clear goals at the edge of current ability, immediate and precise feedback, focused attention on weaknesses, and guidance from a coach who can see what the learner cannot yet see. In field after field, the bottleneck is not merely effort. It is access to high-quality, personalized feedback delivered at the right moment. Great coaches are rare, expensive, and difficult to scale.
AI, used correctly, can change that.
The human expert still defines excellence, sequences skill development, embeds domain judgment, and makes the calls that require reading another human being. But AI can deliver the teacher’s framework with tireless attention, personalized feedback, and real-time consistency at a scale no individual coach can match.
Consider flight training. Commercial aviation is extraordinarily safe because pilots train in sophisticated simulators, under the guidance of experienced instructors, against failures and weather scenarios too dangerous or impractical to reproduce in the real world. General aviation training, by contrast, is far more variable. Instructors are often young pilots building hours, and the incentive structure discourages simulation because simulator time doesn’t count toward flight hours. Training quality depends heavily on the instructor, the aircraft, the weather, and the day, which is far from anything resembling deliberate practice.
Now imagine an AI system integrated with a high-fidelity simulator and guided by a human instructor. It observes every control input, instrument scan, checklist action, and workload response. It notices whether the student fixates on instruments, demonstrates incorrect, inadequate, or uncoordinated control inputs, lets airspeed wander on final, or loses checklist discipline under pressure. It gives immediate feedback, tracks progress, and recommends the next scenario based on that student’s specific weaknesses. Student and AI work together without consuming the instructor’s time until the student is ready for the next phase. The patient AI gives the student the opportunity to pursue mastery at their own pace.
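The loop described above is deliberate practice made executable: observe, compare against the standard the human instructor defined, give immediate feedback, and choose the next drill from the weakest skill. A toy sketch, with the skill names and target bands entirely made up for illustration:

```python
# Instructor-defined standards: the human sets what "good" means.
TARGETS = {
    "airspeed_on_final": (60, 70),     # knots, acceptable band
    "checklist_completion": (1.0, 1.0),  # fraction of items completed
}

def grade(observations: dict) -> dict:
    """Compare each observed metric to the instructor's band; report deviations."""
    report = {}
    for skill, value in observations.items():
        lo, hi = TARGETS[skill]
        if value < lo:
            report[skill] = f"low by {lo - value:g}"
        elif value > hi:
            report[skill] = f"high by {value - hi:g}"
    return report

def next_scenario(history: list[dict]) -> str:
    """Recommend practice targeting the most frequently missed skill."""
    misses: dict[str, int] = {}
    for session_report in history:
        for skill in session_report:
            misses[skill] = misses.get(skill, 0) + 1
    return max(misses, key=misses.get) if misses else "progress to next phase"
```

The point of the sketch is the division of labor: the human authors TARGETS and decides when to advance; the AI supplies the tireless measuring and re-measuring that no single instructor can sustain.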
The human instructor is now elevated. Freed from some low-level diagnostic work, the instructor can focus on the big picture: judgment, trust, emotional readiness, motivation, and building on those skills in real airplanes, resulting in a much more fulfilling training experience.
A few points before continuing. High-fidelity simulators used in airline and corporate aviation cost multiple millions of dollars. However, there are also full-motion simulators for trainer aircraft that cost as much as, or less than, the training aircraft themselves. Nor is it critical for simulators to replicate precisely the characteristics of real aircraft; what matters is that there is positive transfer of learning. The idea here is not to replace real flying with simulators; it is to make the most of real flying. Finally, what’s new in this model isn’t the simulator; it’s the AI that ensures the human is getting the maximum benefit from the simulation.
The same model applies in medicine, athletics, language learning, music, public speaking, negotiation, and emergency response. Wherever performance can be observed, and wherever mastery depends on feedback, AI can help democratize access to coaching that was once available only to the fortunate few.
The mold still matters. An AI coach trained without true masters will simply scale mediocrity. But an AI system operating inside a well-crafted framework can extend the reach of human expertise in ways we have never seen before.
That is why I think the most important debate is not whether AI will replace us. Some work will be displaced, and the transition will be real. But AI’s deeper promise is not substitution. It is the acceleration of human development. That’s the killer app.
Pavan Muzumdar is chief operating officer of Automation Alley, responsible for facilitating smooth functioning of the organization and enabling execution of strategic goals. Blending his 20-plus years of experience in executive leadership roles with his love of financial analysis and entrepreneurial endeavors, Muzumdar brings a people-focused, fundamentals-based analytical approach to his work.




