There Are Only Two Corporate AI Strategies
Either you grow all in-house, or you partner with the world
If you spend enough time in conversations with people who shape innovation inside large financial institutions, you start to notice a pattern. It doesn’t show up on the org chart or in the strategy decks; it reveals itself in the way someone lowers their voice when they talk about an idea that will never get approved, or how their eyes light up when a small experiment suddenly gets air cover.
After a while, you realise that there are really only two AI strategies in the corporate world. Either you try to grow everything in-house, or you partner with the world. And whichever path a company chooses ends up defining its entire relationship to intelligence, risk, and possibility.
I’ve seen both approaches up close. Some institutions are fortresses: everything must be built internally, everything must be aligned with a central roadmap, everything must pass through a labyrinth of approval committees. Others are more porous, almost curious by design; they invite outsiders in, co-create, test half-formed ideas, and accept that some of them will fail.
Neither strategy is inherently better. But they create different cultures, different speeds, and different kinds of people who thrive. And for someone like me—someone building a frontier technology from the outside—these differences are not abstract. They decide where the door is open, and where it is closed.
The Internal-First Strategy
The internal-first strategy is, in many ways, the most understandable. Large financial organisations have regulations to meet, auditors to satisfy, and governance structures designed to prevent exactly the kind of freewheeling experimentation that startups depend on. In these environments, the instinct is clear: if we can build it ourselves, we should. If we can staff it with our own people, we must. If we can avoid dependency on an external vendor, that is the safest route. Inside these walls, innovation becomes a carefully choreographed dance between committees, budgets, and priorities that all have to align at the same time.
You can feel the weight of this in conversations. Someone tells you about an idea they’ve been nurturing for months, but they say it quietly, like it’s a secret they’re not supposed to want. Someone else tells you that they would love to run an experiment with an external partner, but the approvals would take nine months and the timing would kill it.
Everyone means well. Everyone is smart, capable, and genuinely committed to progress. But the system learns at the speed of its hierarchy, and hierarchy is rarely fast.
There is also a talent dimension that no one admits out loud. The more internal-first an institution becomes, the harder it is to absorb frontier talent. Not because it can’t pay, but because it cannot offer the environment where cross-disciplinary profiles thrive. A physicist-turned-AI-founder with a sustainability focus—someone like me—doesn’t fit into a predefined internal job category. And when you’re running an internal-first model, you need predefined categories. That’s the whole point.
There is nothing wrong with this. It is consistent, predictable, and brings stability where it’s needed. But it is also intrinsically limited by the organisation’s imagination, which tends to reflect the imagination of the people who already work there. When the only intelligence you cultivate is your own, you learn only as fast as your structure allows.
The Ecosystem Strategy
The opposite strategy looks messier from the outside but far more alive from within. These are the institutions that don’t try to do everything internally. They know their edges. They know where their blind spots are. They know that, in a world moving this fast, even the most talented internal teams cannot cover the entire frontier.
So they open the windows.
They run open innovation challenges. They bring in startups for small, time-bound pilots. They experiment with co-creation programs. They allow multiple business units to propose ideas. They understand that innovation doesn’t scale linearly and that a diversity of inputs often produces unexpected breakthroughs. They don’t need every experiment to become a product. They are happy if an experiment simply clarifies a direction or helps them avoid a dead end.
The culture is noticeably different. Conversations have more space. People share half-formed thoughts without fearing that they’ll be punished for being early. Leaders are enthusiastic about bringing in external minds because they see value beyond immediate ROI—they see narrative value, learning value, and the value of having someone challenge their assumptions. These companies tolerate uncertainty in a way that feels almost luxurious compared to the internal-first world.
This doesn’t mean they are chaotic. It means they are committed to learning, even when learning isn’t tidy. They are willing to be surprised. And because of that, they evolve faster.
What This Means for People Like Me
When you are a founder bringing an unconventional technology to the market—agentic AI for financial reporting; causal inference for sustainability-embedded risk; capability that most haven’t even seen deployed at scale—this distinction becomes existential.
You quickly learn that internal-first organisations may admire your work, but structurally they cannot absorb you. You’re too weird, too fast, too ambiguous, too interdisciplinary. They want you, they really do; but they don’t know where you fit, and they don’t have the bandwidth to figure it out.
Sometimes you see the door close very quietly. A promising conversation, drawn out over months of regular calls, suddenly dries up because the project you discussed doesn’t align with a central roadmap. A champion inside the institution then tells you that the idea is good, but there is no formal mechanism to engage. You get the sense that, even if they loved what you built, the system isn’t designed to welcome outsiders unless the product already fits a known category. And by the time your product fits a known category, it’s no longer frontier.
With ecosystem institutions, the experience is almost the inverse. You walk into the room and people immediately start mapping possibilities. They don’t ask “Where do we put you?” They ask “What can we build together?” They even say “Gosh, we have so many projects for you, let’s compete internally for the privilege of working with you.” (For real, I’ve experienced this.)
They already know they don’t have everything in-house. They’re looking for perspective, not replacement. They’re looking for thought partners, not vendors. They respect the ambiguity of early-stage frontier work because they live in a structure that can tolerate that ambiguity.
It took me a while to accept this. Early on, I treated every large institution as if it should be accessible to a small startup with the right offering. But the truth is different. You cannot sell into an institution whose evolutionary strategy rejects your type of intelligence. And you don’t need to “do sales,” even for a second, to institutions that really, really want you from the get-go.
The strategy comes first, and you come afterward.
This Isn’t About AI
The longer I spend in this space, the more I realise that these two AI strategies are not really about AI at all. They reflect something deeper: how an organisation believes knowledge should be created. In one worldview, knowledge must be cultivated internally to be safe, legitimate, and controllable. In the other, knowledge is porous, emergent, and strengthened by external input.
Once you see this, you start to see it everywhere. You see it in how budgets are allocated, how meetings are run, how decisions are escalated, how talent is evaluated, how risk is framed, how narratives are told. An institution optimised for stability will naturally lean internal-first. An institution optimised for discovery will naturally lean toward the ecosystem. Neither is superior. But they are not interchangeable.
This is why the discussion around AI maturity often feels misguided. AI doesn’t transform organisations, because this was never really about AI, or even about technology adoption at large.
It’s about how different corporations work. AI illuminates where they are rigid and where they are curious, where they are protective and where they are open, where they invest in learning and where they invest in control. The technology is neutral; the strategy is a mirror.
The Founder’s Lesson
This realisation changed my entire approach to building Wangari. I used to think that success meant getting into the biggest shops—the ones with the most prestige, the biggest budgets, the most brand recognition. But prestige isn’t the same as compatibility. A company can be enormous and still structurally incapable of learning from you. It can admire your work and still never let you in.
Now I look for different signals. Does the institution respond quickly? Do they ask real questions? Are they comfortable not having all the answers upfront? Do they see the value in experimentation, or do they need certainty before anything begins? Do they treat startups as partners or as vendors? Do they understand the difference between capability and product? Can they imagine a world different from the one they operate in today?
When the answer is yes, the work feels easy. Ideas flow. People lean in. The collaboration produces something neither side could have imagined alone. It feels alive. And when the answer is no, the project collapses under its own weight before it even starts. Recognising this has saved me months of effort and, honestly, a surprising amount of emotional energy.
This is what I wish more founders knew: you don’t need to break into every institution. You need to find the ones whose internal logic welcomes the kind of intelligence you bring. Ironically, along the way you’ll find prestigious shops that have just that; you overlooked them only because you spent your time chasing the other prestigious shops whose strategy is incompatible with yours.
The Bottom Line: What We Choose Reveals What We Believe
In the end, every large organisation is making a choice, whether they state it explicitly or not. Either they believe they can grow all their intelligence internally, or they believe that intelligence grows when it’s shared. One strategy isn’t safer. The other isn’t bolder. They simply lead to different futures.
For startups, recognising these two paths is liberating. It tells you where to focus your time, your imagination, and your energy. It reminds you that innovation is not about forcing your way in but finding the places where your way of thinking is not only accepted but needed.
The real question isn’t which strategy a company chooses. It’s whether that strategy matches the future they want—and whether it matches the future you’re trying to build.
Reads of the Week
Agentic AI—AI that acts for users, not just informs them—is no longer science fiction. This in-depth article explains how tools like Perplexity’s Comet browser are already reshaping how people interact with the internet, threatening businesses’ digital strategies, privacy norms, job security, and even cybersecurity. AI challenges turn out to be not just technical but deeply political, ethical, and economic, demanding urgent attention from executives, regulators, and society alike.
Cobus Greyling’s article dives into a new frontier of AI architecture: agentic workflows. Unlike monolithic AI agents, these workflows use lightweight, modular agents working together in dynamic networks—mirroring how human teams collaborate. For Wangari readers tracking the evolution of workplace automation and AI governance, this piece offers a compelling look at how AI could soon reshape not just tasks, but organisational logic itself, blending human and machine intelligence in deeply intertwined ways.
In the race to build powerful AI, NVIDIA’s Nemotron Nano V2 proves that smaller can be smarter. This deep technical dive by Alex Razvant shows how lightweight models using a new architecture are outperforming much larger rivals in reasoning tasks, making advanced agentic AI more accessible, efficient, and scalable. The future of AI systems doesn’t depend on size anymore, but on smart design.