AI Can Only Make You One Level Smarter Than You Currently Are
Fortunately or unfortunately, your work still depends on your unique knowledge

Every few months, a new AI model is announced with promises to outthink us. Forecasts follow: algorithms will predict markets, generate insights, and perform analysis faster than any human ever could. It’s an appealing idea — that intelligence can scale like computing power. Yet after months of working with agentic systems in the field, I’ve come to a quieter conclusion. AI doesn’t replace intelligence; it amplifies it. It stretches what you already know, but it can’t transcend it.
If you’re a sharp analyst, AI sharpens you. If you’re lost in the noise, it only helps you get lost faster, burying you in information. It doesn’t fix misunderstanding; it magnifies whatever is already there.
Infinite Intelligence?
We like to think that progress in AI will eventually produce an all-knowing mind, one that can connect every variable and explain the world’s complexity. But intelligence doesn’t expand indefinitely. AI doesn’t create knowledge out of thin air; it extends the reach of what’s already inside us.
It’s like a telescope: it magnifies your vision, but only if you know where to point it. Aim it carefully, and you’ll see patterns that were previously invisible. Aim it blindly, and you’ll stare deeper into confusion.
What We’re Seeing in the Field
In a pilot currently underway with a Fortune 500 company, the time savings are extraordinary. Tasks that once took days now take hours, sometimes minutes. AI agents automatically clean and merge financial data, test hypotheses, and even draft complex, fully compliant financial reports.
The real story isn’t speed, though; it’s divergence. Analysts who understand the logic behind the data use these tools to uncover relationships that were previously hidden. They use AI as a thinking partner, not a replacement. Others, less grounded in context, produce elegant charts that lead nowhere. The gap between good analysts and great ones is widening. AI doesn’t flatten expertise; it stretches it.
The One-Level-Smarter Principle
This has led me to what I’m calling the one-level-smarter principle. The smartest AI can only make you one cognitive level smarter than you already are. It scales your reasoning, but it doesn’t rebuild it.
If your assumptions are sound, AI becomes a powerful accelerator. If they’re weak, it becomes an amplifier of error. Even systems that reason causally — like our own platform, etio — can’t escape this rule. Etio can map causal relationships between sustainability factors and financial outcomes, quantify uncertainty, and suggest likely drivers. But meaning still depends on the human at the center: the analyst who chooses what question to ask, what variable to test, and what pattern to trust.
AI can hold the map, but you still have to know where you’re going.
Why This Feels Fundamental
I suspect this limit isn’t just practical — it’s cognitive. Both humans and machines learn by updating priors: internal models that interpret new information. Without them, data is noise. AI doesn’t replace those priors; it operates through them. So when AI “augments” us, what it really does is expand the surface area of reasoning available to our existing models of the world.
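The prior-updating idea can be made concrete with a toy Bayesian sketch (illustrative only; the probabilities below are invented numbers, not drawn from any real analysis). The point is that the same evidence moves two observers very differently depending on the prior each brings to it:

```python
# Toy Bayes' rule: posterior odds depend on the prior you start with.

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical evidence that is 3x more likely if the hypothesis is true.
informed = update(prior=0.50, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
lost = update(prior=0.05, p_evidence_if_true=0.6, p_evidence_if_false=0.2)

print(f"sound prior  -> posterior {informed:.2f}")  # 0.75
print(f"weak prior   -> posterior {lost:.2f}")      # 0.14
```

Identical data, very different conclusions: the model operates through the prior rather than around it, which is exactly the sense in which AI expands reasoning without replacing it.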
That’s why every breakthrough in AI seems to create both clarity and confusion. As our reach expands, so does our interpretive burden. We gain the ability to see further, but we also expose how narrow our understanding still is. A telescope can’t make you an astronomer; it just reveals how well you know the sky.
From Prompt Users to Prompt Architects
In financial analysis, this shift is transforming the role of the analyst. Modern workflows are less about executing calculations and more about orchestrating reasoning. AI now handles data cleaning, model fitting, visualization, and even first-pass storytelling. What remains is judgment — the ability to weigh evidence, challenge results, and interpret signals in context.
The most effective analysts I’ve worked with treat AI not as a black box but as a dialogue partner. They refine their prompts like scientists refining experiments — iteratively, with curiosity and precision. They understand that prompting is a form of reasoning. Others, treating AI as a shortcut, simply accelerate the production of mediocre work.
AI exposes the structure of our thinking. It shows where our logic is strong and where it’s brittle. In that sense, it’s a diagnostic tool for the mind.
Beyond Efficiency
Many organizations still approach AI as an efficiency project — a way to do the same work faster. But the deeper transformation lies elsewhere. When the mechanical parts of analysis disappear, what’s left is the creative act: designing better questions, testing wilder hypotheses, and exploring the unknown.
AI’s real gift isn’t automation — it’s imagination. It invites analysts to step back from rote reporting and return to the essence of inquiry: pattern, cause, and meaning. For the first time in years, data analysis can feel like exploration again.
The Real Singularity
We often speak of a coming “singularity,” the moment machines will surpass human cognition. I suspect the real singularity will look different. It won’t be a takeover, but a meeting point — humans and machines learning to think together.
The most valuable skill in that world won’t be technical mastery alone. It will be epistemic awareness: knowing how knowledge is built, what its limits are, and where intuition must fill the gaps. AI can expand your reach, but only you can determine the direction.
The next leap in intelligence will still begin where it always has — inside us.
Reads of the Week
This provocative piece from Pimlico Journal argues that artificial intelligence and genetic engineering are not just complementary technologies, but twin forces driving a radical redefinition of human society and nature itself. It suggests that as AI reshapes work by rewarding creativity over competence, the resulting societal disruptions could push us toward embracing genetic engineering—not just to eliminate disease, but to enhance human potential in ways once thought unethical or impossible. A very thought-provoking read, to say the least.
This thoughtful guide from Scientists & Poets explores how our tone, curiosity, and clarity directly influence the behavior of AI systems like ChatGPT and Claude. It warns that pushing machines too hard for confident answers can unintentionally prompt them to generate falsehoods, while patience and well-phrased prompts foster more transparent and useful outputs. I see in this some practical, ethical advice on how to engage with machines in a way that preserves not just accuracy, but humanity as such.
In this punchy essay, Jurgen Appelo makes a case for trusting AI over humans—not because machines are perfect, but because people are often confidently wrong. Drawing on examples from cosmology to database design, Appelo argues that language models like ChatGPT offer clearer, humbler, and more reasoned responses than most social media debates or casual human advice. It’s a sharp, entertaining challenge to our assumptions about intelligence, bias, and who we turn to for answers in a world overflowing with misinformation.


