The Sorcerer, His Apprentice, and the Exoskeleton

On losing memory, finding method, and why AI only works if you bring the fight

In 2013, something resembling a miracle brought me back to life. The experience was like what Robert Pirsig described in Zen and the Art of Motorcycle Maintenance — you wake up and don’t quite remember who you were. My brain, tortured by gargantuan hypertension, had been losing its grip on memory for years — not in a single dramatic event, but gradually, like a tide retreating so slowly you don’t notice the beach is getting wider until one day you’re standing on dry sand where there used to be water. English vanished. My native Polish developed various disorders. Mathematics retreated into dyscalculia. I became what I privately call a high-functioning vegetable.

Why “high-functioning”? Because I learned to live without memory. And that, against every expectation, changed how I teach.

I could only lecture when I had a story in my mind. Raw theory evaporated the moment I tried to recall it. Equations without narrative were water in a sieve. But if I told a story about the model — who built it, what it hides, what kind of creature it assumes you to be — the theory stayed. It stuck because stories stick. The human brain is wired for survival narratives, for conflict and resolution, not for abstract equations of stability. I didn’t know the neuroscience then. I just knew that without the story, I was lost.

So storytelling became my compensation mechanism. And step by step, the compensation became a method.

Every spell hides its price

Microeconomics, when you step back far enough, is built from a single constrained optimization template, cast recursively — first on the consumer, then on the firm, then lifted to the cosmology of general equilibrium. One spell to model them all. Elegant. Beautiful. Platonic. If you read my One Spell to Model Them All — and I hope you will, when it comes out — you will see how that single spell constructs an entire Platonic universe, and at what price. But the price is what matters here.
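To make the template concrete, here is a minimal numeric sketch of its consumer instance. The parameters are invented for illustration, and this is not an excerpt from the book: a Cobb-Douglas utility function maximized along a budget line by brute force, recovering the textbook Marshallian demand x* = a·m/px.

```python
# A minimal sketch of the consumer version of the spell (invented
# parameters): maximize U(x, y) = x**a * y**(1 - a)
# subject to the budget constraint px*x + py*y = m.

def demand_numeric(a, px, py, m, steps=100_000):
    """Brute-force the budget line, return the utility-maximizing x."""
    best_x, best_u = 0.0, -1.0
    for i in range(1, steps):
        x = (m / px) * i / steps     # a candidate bundle on the budget line
        y = (m - px * x) / py        # the rest of the budget buys y
        u = x ** a * y ** (1 - a)
        if u > best_u:
            best_x, best_u = x, u
    return best_x

a, px, py, m = 0.3, 2.0, 5.0, 100.0
x_star = demand_numeric(a, px, py, m)
print(x_star)   # ≈ 15.0, the closed-form Marshallian demand a*m/px
```

The recursion is the point: swap profit for utility and a technology constraint for the budget, and the same skeleton is the firm’s problem; stack many such problems and you are in general equilibrium.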

And my memory loss gave me a strange advantage: I couldn’t hide anything anymore. If an assumption wasn’t in the story, it didn’t exist for me. So I started asking, compulsively: what is the model actually saying about the person inside it? What kind of creature does this equation assume you to be? And the answers were unsettling.

The expected value formula hides two centuries of theological struggle over whether you’re permitted to calculate the future. The Marshallian cross hides the institutional machinery required to produce the equilibrium it depicts. The representative agent hides the assumption that your individual life can be represented as the average of all possible versions of yourself at a single moment in time. The introductory textbook’s ice cream and chocolate problem hides the fact that Kant’s ethics — treat humanity always as an end — isn’t wrong within the model; it’s unaskable. Every model carries a silent story about human nature, and that story travels from teacher to student without anyone flagging it as a choice.

These hidden stories — these ontologies buried inside the formulas — became the starting point of everything I do. I call the method Know Thyself: before you learn what this model predicts about human behaviour, find out — in your own data, from your own choices — what kind of human being you actually are. Then see whether the model’s story fits.

Forty ad hoc research projects, four laboratories

This was not a single insight followed by a tidy implementation. It was over twenty years of chaos gradually becoming order — of building tools, breaking tools, and discovering that the broken tools sometimes taught more than the working ones.

I now have over forty ad hoc research projects — not four, not ten, forty. I call them ad hoc deliberately: these are not full-scale scientific studies but epistemic provocations — classroom experiments, surveys, simulations, group tasks — designed to create a specific moment of discovery in the student before the theory arrives. They fall into four distinct laboratories, each opening a different window onto economic reality:

Classroom ad hoc research — students become subjects of research before they encounter the theory. They trade in double auctions, play gift-exchange games, fill out willingness-to-pay surveys, face allocation dilemmas between self and others. They don’t know what’s being tested. The gap between their experience and the textbook prediction — that gap is the question that makes the theory necessary. The platform, LabSEE.com, built with Robert Borowski, handles up to five hundred participants simultaneously.

Deterministic experiments in CAS — following Felix Klein’s forgotten postulates of mathematics reform, students experiment with the mathematical skeleton of models in Computer Algebra Systems. The shift is from tedious calculation to model thinking: change a parameter, watch the entire solution surface reshape. CAS exposes what pen-and-paper hides — the ethical assumptions inside the formulas, the zones where the model breaks, the sensitivity of results to assumptions nobody mentioned.

Monte Carlo and bootstrap simulations — randomness is injected into the deterministic models. Student data from classroom experiments become parameters. A thousand replications turn the textbook equilibrium into what it actually is: a cloud, a region, a statistical confession that the neat crossing of supply and demand is not a law but a fragile achievement of specific institutional conditions.

Agent-based computational economics — the heterodox layer. Populations of diverse agents learn, adapt, cooperate, betray, lock in. Here collective intelligence becomes visible: how institutions shape outcomes more than individual rationality, how path dependence locks economies into trajectories no single agent chose, how norms emerge that no utility function predicted.
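The Monte Carlo layer is simple enough to sketch in a few lines. The valuations below are invented, not LabSEE classroom data; the point is only the shape of the exercise: resample the participants, recompute the clearing price, and the equilibrium point becomes a distribution.

```python
import random
import statistics

random.seed(42)

# Invented classroom valuations (not real LabSEE data):
# buyers' willingness to pay and sellers' costs.
wtp = [random.uniform(4, 12) for _ in range(30)]    # demand side
costs = [random.uniform(2, 10) for _ in range(30)]  # supply side

def clearing_price(buyers, sellers):
    """Midpoint price between the marginal buyer and marginal seller."""
    b = sorted(buyers, reverse=True)   # demand curve: highest WTP first
    s = sorted(sellers)                # supply curve: lowest cost first
    q = 0
    while q < min(len(b), len(s)) and b[q] >= s[q]:
        q += 1                         # one more mutually profitable trade
    if q == 0:
        return None                    # no trade is profitable
    return (b[q - 1] + s[q - 1]) / 2

# 1000 bootstrap replications: resample the class, recompute the price.
prices = []
for _ in range(1000):
    resampled_buyers = [random.choice(wtp) for _ in wtp]
    resampled_sellers = [random.choice(costs) for _ in costs]
    p = clearing_price(resampled_buyers, resampled_sellers)
    if p is not None:
        prices.append(p)

# The "equilibrium" is a cloud: a mean with a spread, not a point.
print(statistics.fmean(prices), statistics.stdev(prices))
```

With real student data in place of the invented lists, the spread of that cloud is itself a teaching instrument: it shows how fragile the neat crossing of the curves is.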

These four laboratories are bound together by storytelling — the same storytelling that started as my disability compensation and became the method’s backbone. Every model is a narrative about human nature. Every experiment is a provocation that forces students to confront that narrative with their own behaviour. A nickname system lets each student find themselves in the data — anonymously, privately — and ask the uncomfortable question: is the theory wrong, or are my decisions biased?

Not just teaching — research rediscovered

Something unexpected happened along the way. The method did not only change how I teach. It changed how I do research — and what I consider research worth doing. When you lose the ability to take things for granted — when your memory no longer stores the settled consensus, the things “everybody knows” — you gain a strange freedom. Nothing is received wisdom anymore. Every model must be rediscovered from scratch, and in the rediscovery it is reinterpreted. The supply-and-demand diagram is no longer a fact to be transmitted; it is a story to be excavated. The expected value formula is no longer a tool to be applied; it is an ontological claim to be examined. Ergodicity is no longer a technical assumption buried in the appendix; it is the hidden protagonist of the entire plot.

This perspective — nothing taken as given, everything reopened as a narrative about the world — turned out to be enormously productive. Papers I would never have written began to emerge: on how deontological norms outperform utilitarian calculations in mixed societies, on how the institutional design of a market matters more than the rationality of its participants, on how the entropy of an environment can be purchased, priced, and disrupted. Each of these papers started not from a gap in the literature but from a question a student asked during an experiment — or from a question I asked myself when I could no longer remember the standard answer.

Model thinking is a central part of what I teach, but not in the way most microeconomics courses understand it. It is not about memorising the conditions for equilibrium or mastering the choreography of Lagrangian multipliers. It is about looking at the model as a story — an ontological and epistemological story — and asking: what kind of world does this story assume? What kind of human does it need to work? What does it illuminate, and what does it deliberately leave in the dark? When model thinking becomes this kind of inquiry, the boundary between teaching and research dissolves. The student who asks why does this model need a perfectly selfish agent? is doing methodology. The teacher who designs an experiment to make the answer visible is doing research. They are the same act, seen from two sides.

The sorcerer’s apprentice

For twenty years, the bottleneck was never the idea. It was the labour of translation. Every experiment took months of preparation — writing the protocol, coding the platform, programming the dashboard, debugging, testing. Every paper took years. And for someone relearning English for the second time in his life, every sentence was a small war.

Then artificial intelligence arrived.

I think of the economist as a sorcerer who casts the optimization spell recursively, building a cosmology from a single template. The spell is powerful. It is also laborious. AI is the sorcerer’s apprentice — not the sorcerer. The apprentice.

The sorcerer does what only the sorcerer can do: asks the question that has never been asked, designs the experiment that makes the hidden assumption visible, decides which ontological door to kick open next. The apprentice does what apprentices have always done: the grinding, the cleaning, the carrying, the translating. It writes the R code. It builds the Shiny dashboard. It turns a rough sketch into a working prototype in hours rather than months. It serves as an interlocutor who has read more papers than any human could and who never flatters.

What used to take me months — designing a simulation, debugging a thousand lines of code, drafting an article in a language my brain keeps forgetting — can now be compressed into days. Not because AI does the thinking. The ideas, the hypotheses, the provocations — those remain mine. But the execution, the translation from thought to working instrument — that is where AI removes the friction that used to make every project a multi-year odyssey.

The laboratory of thought

AI is something more than an apprentice, though. It is a laboratory of thought.

I can now run an intellectual experiment in conversation. I describe a theoretical intuition — say, that the gift-exchange game follows the emotional arc of classical tragedy, or that deontological altruists outperform effective altruists in mixed societies — and within minutes I have a structured argument, counterarguments, references I’d forgotten, connections I hadn’t seen. AI doesn’t validate my ideas. It stress-tests them. It is the most tireless sparring partner an academic has ever had.

This is not small. For someone whose memory has been unreliable for over a decade, the ability to externalise the reasoning process — to think with a tool that holds the context steady while my brain drifts — is the difference between productive work and staring at a screen trying to remember what I was doing five minutes ago.

The exoskeleton — and why it only works if you fight

There is a physical metaphor that captures this precisely.

An exoskeleton does not replace your legs. It amplifies whatever movement you can still produce. It does not walk for you; it makes your intention into motion. For someone who lost the ability to walk, the exoskeleton is the difference between immobility and freedom.

AI is my cognitive exoskeleton. It amplifies the thinking I can still produce. And here is the critical point — the one I want to be very clear about:

AI is not a threat to human thinking. But it has one condition: the thinking must be creative.

An exoskeleton attached to a mannequin does not walk. An AI prompted by a student with no question produces answers nobody needed. The human inside the exoskeleton must bring the fight — the irreducibly personal encounter with a problem that has not yet been solved.

Scott Page’s Diversity Prediction Theorem states an algebraic fact: the collective’s squared error equals the average individual squared error minus the diversity of the judgements (the variance of the individual predictions around the collective’s own average). A homogeneous crowd, with zero diversity, is therefore exactly as wrong as its average member, however wrong that is. Diversity is not a social virtue decorating the result. It is a structural component of the intelligence itself.
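The identity is worth checking with your own numbers, because it is exact, not approximate. A toy example with five invented judgements:

```python
import statistics

# Numeric check of Page's Diversity Prediction Theorem (toy numbers):
# crowd squared error = mean individual squared error - diversity.

truth = 10.0
predictions = [6.0, 9.0, 11.0, 15.0, 12.0]   # five judges, invented

crowd = statistics.fmean(predictions)                              # 10.6
crowd_error = (crowd - truth) ** 2                                 # crowd's squared error
mean_error = statistics.fmean([(p - truth) ** 2 for p in predictions])
diversity = statistics.fmean([(p - crowd) ** 2 for p in predictions])

print(crowd_error, mean_error, diversity)
# The identity holds exactly, whatever numbers you substitute:
assert abs(crowd_error - (mean_error - diversity)) < 1e-9
```

Replace the judges with identical copies of one prediction and diversity drops to zero; the crowd is then precisely as wrong as each member.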

AI is trained on the accumulated written output of human thought. It is the crowd. And the crowd is only as wise as the independence and diversity of its inputs. The student who copies from AI back into AI — who consumes the aggregate without adding anything original — closes a loop that amplifies whatever biases the aggregate already contains.

But the student who has been through a Know Thyself experiment — who has been wrong about something specific, in their own data, with their own nickname — brings something the aggregate does not yet contain: a genuinely new epistemic event. That is what keeps the collective intelligence from converging to its own reflection. Human creativity, human error, human surprise — these are not obstacles to be overcome by AI. They are the fuel AI runs on.

The sorcerer needs the apprentice. But the apprentice, without the sorcerer, produces nothing but well-formatted emptiness.

If you are a teacher and you recognise this frustration

I am 57. I have been losing memory gradually since before I understood what was happening, and compensating for it with a method that turned out to be more powerful than the memory it replaced. The clock is ticking. I have over forty folders with ideas — some brilliant, some less so. The method is, for the first time, complete: the five pillars, the four laboratories, the replication protocol, the AI layer that scales everything.

And I want it to be yours.

If you are an economics teacher who has spent semesters drawing supply-and-demand diagrams while quietly suspecting that the students absorb the picture without the conditions — this is for you. If you have tried to fit ethics into the consumer choice chapter and failed because the curriculum has no room for it — this is for you. If you have read about classroom experiments and thought this sounds interesting but I don’t have the time, the platform, the programming skills — this is for you, because AI has removed that barrier.

Everything I have built — the experiments, the protocols, the prompts, the platform, the code, the dashboards — is available. My know-how is yours. The method does not require you to have my particular brain damage. It requires only the willingness to change one thing:

Change the order.

Before you teach the formula — ask the question. Before you draw the diagram — run the experiment. Before you give the answer — let the student feel the gap between what they assumed and what their own data show.

The method doesn’t change what economics teaches. It changes the order in which the student encounters the question and the answer. And that order changes everything.

Ask the question that hurts. Wait for the student to feel it. Then teach.

· · ·

The full methodological manifesto behind the Know Thyself method is available as a working paper: Know Thyself: A Methodological Manifesto for Teaching Economics Through Epistemic Provocation — Tomasz Kopczewski, University of Warsaw. [link coming soon]

Tomasz Kopczewski
Faculty of Economic Sciences, University of Warsaw
tkopczewski@wne.uw.edu.pl