Scandalous! We Optimize Ads Better Than Social Outcomes
We have the data, the methods, and the technology to measure impact. We're just pointing them at the wrong problems
Every Tuesday morning, two professionals open their dashboards. One knows exactly which consumers will respond to which messages. The other has no idea whether their program changed anyone’s life.
The first works in marketing. She sees real-time data: conversion rates, customer segments, predictive models, causal attribution. She can answer the hardest question: “What would have happened without our campaign?” She knows because she’s measured it.
The second runs a nonprofit. He sees program completion numbers, maybe some survey responses, mostly uncertainty. He can answer easy questions: “How many people did we serve?” But the hard question—“Did we actually change anyone’s financial life?”—remains a mystery.
Both are trying to optimize human behavior. Both have access to data. Only one has access to good data.
This isn’t a limitation of science. It’s a choice about where we deploy our best tools.
The Measurement Chasm
Billions of dollars flow annually into social programs: workforce training, financial coaching, small business support, neighborhood investments. These programs are run by dedicated people trying to solve real problems. Yet we have minimal evidence about what actually works.
The nonprofit director knows their program is helping. But they can’t prove it to funders. So funding dries up. So the program shrinks. So fewer people get helped.
This creates a vicious cycle. Funders lose confidence in nonprofits because they can’t see clear evidence of impact. Effective programs go unfunded because they can’t prove their effectiveness. Ineffective programs continue because no one can prove they’re not working. Resources flow to what’s well-connected, not to what actually works.
We’re flying blind on social policy. We’re making billion-dollar decisions with the data quality of a high school survey.
Meanwhile, advertisers have solved this problem. They know exactly which messages resonate with which audiences. They can measure causality: “This campaign caused this conversion.” They can predict outcomes. They can optimize in real-time. They’ve built a sophisticated measurement infrastructure over decades, driven by one simple incentive: profit.
So here’s the scandal: we have better measurement for selling consumer goods than for lifting people out of poverty.
The Data Infrastructure Question
Why is this? It’s not because the technology doesn’t exist. It’s not because the data isn’t available. It’s because we’ve made a choice about where to deploy our best tools.
The profit motive is a powerful driver of innovation. Advertisers had a clear incentive: measure what works, do more of it, make more money. This drove investment in data collection, analysis, and causal inference. Over decades, this created a sophisticated measurement infrastructure.
Nonprofits faced a different incentive structure. Grants fund programs, not measurement infrastructure. Data access is expensive. Building causal inference tools requires expertise nonprofits don’t have. The structural incentives that drove advertising innovation don’t exist for social impact.
But here’s the thing: the data already exists. Data brokers collect granular financial and behavioral information on millions of people—the same data advertisers use. What if nonprofits could access this infrastructure? What if they could apply the same measurement rigor to social outcomes?
That’s exactly the question Vibhat Nair asked when he founded the nonprofit Magnolia Impact Solutions. I had the pleasure of speaking with him last week and can share firsthand what I learned (this article isn’t sponsored; these are my own opinions).
Magnolia: Bridging the Gap
Vibhat spent years in financial services at JPMorgan Chase and McKinsey. He saw how data was used to optimize consumer behavior. He also saw how financial services failed many households. The question was simple: “Why should advertising have better measurement tools than social programs?”
Magnolia’s approach is elegant. They use data from data brokers—the same sources advertisers use—combined with synthetic controls and matched comparison groups to create counterfactuals. In plain English: they find people very similar to program participants (same neighborhood, similar income, similar credit profile) and track both groups over time. Then they answer the question: “Did our program cause people to move above the poverty line? By how much? For how long?”
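To make “synthetic controls” concrete, here is a toy sketch in Python (my own illustration with invented numbers, not Magnolia’s actual code): the counterfactual for a treated group is a weighted blend of untreated “donor” series, with weights chosen so the blend tracks the treated group in the periods before the program began. Any post-program gap between the real and synthetic series is the estimated effect.

```python
# Toy synthetic-control sketch. All numbers are made up for
# illustration; a real analysis would use many donors and
# proper optimization, not a grid search over two weights.

pre = 4  # number of periods before the program starts

treated = [50, 52, 54, 55, 61, 64, 67]  # treated group's outcome over time
donors = [
    [48, 50, 52, 53, 54, 55, 56],       # untreated donor series A
    [55, 56, 58, 59, 60, 61, 62],       # untreated donor series B
]

def blend(w, t):
    """Weighted combination of donor outcomes at time t."""
    return sum(wi * d[t] for wi, d in zip(w, donors))

def pre_error(w):
    """Squared mismatch with the treated series before the program."""
    return sum((treated[t] - blend(w, t)) ** 2 for t in range(pre))

# Crude grid search over convex weights (w, 1 - w) for two donors.
best_w = min((i / 100 for i in range(101)),
             key=lambda w: pre_error((w, 1 - w)))
weights = (best_w, 1 - best_w)

# Estimated effect: treated outcome minus the synthetic
# counterfactual in each post-program period.
effects = [treated[t] - blend(weights, t)
           for t in range(pre, len(treated))]
print(f"donor weights: {weights}")
print(f"post-program effects: {effects}")
```

The key design idea: because the blend matches the treated group closely before the program, the post-program divergence is plausibly attributable to the program rather than to pre-existing differences.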
This is causal inference made accessible, affordable, and scalable.
Here’s what it looks like in practice. A workforce agency in Philadelphia works with 500 participants. Magnolia identifies 500 comparison individuals with similar characteristics and tracks both groups over time using the same broker data. The question it answers: “Did our program cause people to move above the poverty line?”
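To illustrate the matched-comparison step, here is a minimal sketch in Python (my own illustration with made-up numbers and field names, not Magnolia’s actual pipeline): each participant is paired with the most similar comparison individual on observable characteristics, and the average outcome gap across pairs serves as a rough effect estimate.

```python
import math

def nearest_match(participant, pool, keys):
    """Find the comparison individual closest to a participant
    on the given characteristics (Euclidean distance)."""
    return min(pool, key=lambda c: math.dist(
        [participant[k] for k in keys], [c[k] for k in keys]))

def matched_effect(participants, pool, keys, outcome):
    """Average outcome gap between participants and their
    nearest matched comparisons: a rough causal estimate."""
    gaps = [p[outcome] - nearest_match(p, pool, keys)[outcome]
            for p in participants]
    return sum(gaps) / len(gaps)

# Hypothetical toy data: income and credit score are the matching
# characteristics; above_poverty_after is the tracked outcome.
participants = [
    {"income": 21000, "credit": 580, "above_poverty_after": 1},
    {"income": 24000, "credit": 610, "above_poverty_after": 1},
    {"income": 19000, "credit": 555, "above_poverty_after": 0},
]
comparisons = [
    {"income": 20500, "credit": 585, "above_poverty_after": 0},
    {"income": 23800, "credit": 605, "above_poverty_after": 1},
    {"income": 19200, "credit": 560, "above_poverty_after": 0},
]

effect = matched_effect(participants, comparisons,
                        ["income", "credit"], "above_poverty_after")
print(f"Estimated effect on poverty-exit rate: {effect:+.2f}")
```

In a real analysis the matching would cover many more characteristics (neighborhood, income history, credit profile) and both groups would be tracked over multiple periods, but the logic is the same: compare people who looked alike before the program and see who ended up above the poverty line after.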
The result? Nonprofits and state-administered programs can finally answer “Did we actually change anyone’s life?” Before, they couldn’t: they lacked proper control groups, granular data, and causal inference tooling.
This change is empowering. It enables better decision-making. It increases funder confidence. It allows effective programs to be scaled and ineffective programs to be improved.
For the nonprofit director, this is transformative. They’ve spent years helping people. Now they can prove they’re helping people.
What This Means
Magnolia is just one example of what’s possible. But it points to a much bigger question: What do we choose to measure?
Data infrastructure is a choice. We’ve chosen to deploy it for profit. What if we chose to deploy it for public good?
This pattern isn’t unique to nonprofits. Where else are we solving problems badly because we haven’t deployed our best tools? In education, do we measure learning outcomes as rigorously as we measure test scores? In healthcare, do we measure patient outcomes as rigorously as we measure billing? In government, do we measure policy effectiveness as rigorously as we measure spending?
When we measure social outcomes better, everything changes. Nonprofits can prove their impact, which increases funding and attracts talent. Policy makers make decisions based on evidence, not hunches. Money flows to what works, not to what’s well-connected. Effective programs get scaled. Ineffective programs get improved or shut down.
Better measurement doesn’t just improve programs. It transforms entire sectors.
The Path Forward
The technology exists. The data exists. The methods exist. The only question is: what will we choose to measure?
If you’re a nonprofit leader, explore whether Magnolia’s approach could work for your programs. If you’re a funder, demand evidence. Fund tools that enable rigorous measurement. If you’re a policy maker, invest in data infrastructure for public good. The ROI is enormous.
For everyone: think about your own sector. Where are we solving problems badly because we haven’t deployed our best tools?
You can’t change what you can’t see. For decades, we’ve been trying to change social outcomes while flying blind. Tools like Magnolia are making the invisible visible—not through dashboards or slogans, but through quiet, careful measurement that restores clarity where there was once only clutter.
That’s how we bridge the counterfactual gap.
You can contact Magnolia for more information at info@magnoliaimpact.org or download their whitepaper.
Reads of The Week
Dutch Rojas’ exposé on Blue Cross Blue Shield reveals how these health insurance giants, while technically nonprofits, operate with the scale, profits, and executive pay of for-profit corporations. With $62.8 billion in revenue, multi-million dollar CEO salaries, and $3.6 billion in federal tax refunds since 2018, BCBS plans have built immense financial power—often while hiking premiums on policyholders. Rojas argues that these entities exploit their nonprofit status for financial gain, avoiding taxes and accountability, all while dominating insurance markets in 89% of U.S. metro areas.
William Lutz reflects on why leadership can feel especially challenging in the nonprofit world—because many of us struggle most with leading ourselves. Drawing from a candid group conversation, he argues that real leadership isn’t about charisma or perfection, but about small, consistent acts of responsibility and self-discipline. For those working to serve others, this is a powerful reminder: before we can lead teams or missions, we have to earn trust by leading our own lives with integrity.
Monique Steensma urges nonprofit boards to look beyond agendas and ask two overlooked questions: what are we not doing, and what are we not talking about? These blind spots—missed responsibilities and sidelined ideas—can quietly erode a board’s effectiveness. By tracking deferred tasks and capturing stray ideas through simple tools like “parking lot” systems, boards can strengthen fiduciary oversight, make room for innovation, and build habits that support thoughtful, sustainable governance.