From Numbers to Narratives: A Simple Python Framework for Automated Commentary
How to make a formerly Excel-based process ready for AI

Most automated reports tell you what happened. Few help you understand why.
Analysts, on the other hand, do this instinctively. When they see a table of results, they don’t stop at “Loss ratio = 67%.” They ask: What changed? Why did it change? Is this the start of a trend or just noise?
That reasoning step — forming a plausible story from incomplete evidence — is what separates true analysis from mere data presentation. Yet when we automate reporting, this step vanishes. Dashboards summarize; they don’t speculate.
In this article, we’ll recreate that missing layer of reasoning with code. The goal is to build a lightweight Python routine that takes a reporting table — say, earned premium and incurred losses by line of business — and outputs short narrative comments such as:
“In 2024, the Property line’s loss ratio increased slightly, mainly due to lower earned premium.”
It’s not a chatbot and it’s not a dashboard add-on. Think of it as a first draft generator for analysts — a way to turn structured facts into sentences that capture intent and direction.
And although the example here is small, it sets the stage for something larger: a future in which analytical systems don’t just calculate but communicate.
Turning a Table into Facts
Before we can generate text, we need to formalize the relationships between our numbers. That means computing the indicators analysts normally infer by hand.
Let’s start with a minimal dataset containing:
earned_premium — the denominator of exposure
incurred_loss — the numerator of risk
lob_name and year — context for grouping and comparison
From this, we’ll calculate the loss ratio and its year-over-year change, in the following fashion:
import pandas as pd

# Load the reporting table and sort so diff() compares consecutive years
df = pd.read_csv("reporting_data.csv")
df = df.sort_values(["lob_name", "year"])

# Compute loss ratio and year-on-year delta per line of business
df["loss_ratio"] = df["incurred_loss"] / df["earned_premium"]
df["delta_lr"] = df.groupby("lob_name")["loss_ratio"].diff()

df.head()

This short transformation already turns static numbers into facts with motion. Each record now expresses not only a state (“loss ratio 67%”) but also a direction (“up 4 percentage points from last year”).
That’s the raw material of narrative: change, context, and contrast.
Before we can write sentences, we must teach the code to recognize those signals — thresholds for what counts as a meaningful change, and templates for voicing it.
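As a preview of that step, here is a minimal rule-based sketch. The threshold, the wording templates, and the toy data values are illustrative assumptions — the column names follow the schema above:

```python
import pandas as pd

# Toy data in the shape described above (values are illustrative)
df = pd.DataFrame({
    "lob_name": ["Property", "Property"],
    "year": [2023, 2024],
    "earned_premium": [100.0, 90.0],
    "incurred_loss": [63.0, 60.3],
}).sort_values(["lob_name", "year"])

df["loss_ratio"] = df["incurred_loss"] / df["earned_premium"]
df["delta_lr"] = df.groupby("lob_name")["loss_ratio"].diff()

def make_row_comment(row, threshold=0.01):
    # Skip the first year of each line, where there is no prior-year comparison
    if pd.isna(row.delta_lr):
        return None
    if abs(row.delta_lr) < threshold:
        direction = "remained broadly stable"
    elif row.delta_lr > 0:
        direction = "increased"
    else:
        direction = "decreased"
    return (
        f"In {row.year}, the {row.lob_name} line's loss ratio {direction} "
        f"to {row.loss_ratio:.0%} ({row.delta_lr:+.1%} vs prior year)."
    )

df["comment"] = df.apply(make_row_comment, axis=1)
print(df.loc[df["comment"].notna(), "comment"].iloc[0])
```

With the toy numbers above, the 2024 Property loss ratio rises from 63% to 67%, and the generated sentence reads much like the example comment quoted earlier.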
Scaling Up: From Local Stories to Global Narratives
In a small organization, a single report often tells the whole story. But in a large enterprise — with dozens of business units and thousands of data points — no story stands alone.
Each business unit has its own local truth: a pattern that seems self-contained until it’s aggregated upward. A spike in losses in one unit may disappear when offset by another. A regional improvement may fade once normalized against group averages.
This is where reporting stops being arithmetic and starts being epistemology — a study of what remains true when perspective changes.
In code, this means moving from row-level commentary to group-level synthesis. We don’t just want to describe each lob_name in isolation; we want to know how all those narratives behave together.
Here’s a simple example:
# Summarize commentary by region or reporting level
# (assumes the data also carries a "region" column for the upward roll-up)
summary = (
    df.groupby("region")
    .agg({"loss_ratio": "mean", "delta_lr": "mean"})
    .reset_index()
)

def make_group_comment(row):
    direction = "improved" if row.delta_lr < 0 else "deteriorated"
    return (
        f"Across {row.region}, the average loss ratio {direction} "
        f"to {row.loss_ratio:.1%} ({row.delta_lr:+.1%} vs prior year)."
    )

summary["comment"] = summary.apply(make_group_comment, axis=1)

At this level, the comments no longer describe events; they describe consistency. If local stories conflict, the group story highlights that tension. If multiple lines of business show similar movement, the system can recognize alignment and summarize it as a trend.
In human terms, that’s what analysts do during consolidation: they test whether narratives hold when scaled up. Automating that step doesn’t replace judgment — it strengthens it.
It helps you ask better questions:
Which business units drive the overall trend?
Which ones contradict it?
Where does a local anomaly become a systemic signal?
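The first of those questions can be answered with a simple contribution breakdown: weight each line's loss-ratio change by its share of premium. This is a rough decomposition — it assumes premium weights are stable year over year — and the data values below are illustrative:

```python
import pandas as pd

# Illustrative current-year data, using the columns from the earlier examples
df = pd.DataFrame({
    "lob_name": ["Property", "Casualty", "Marine"],
    "earned_premium": [90.0, 50.0, 10.0],
    "delta_lr": [0.04, -0.01, 0.08],
})

# Premium-weighted contribution of each line to the group-level shift
df["weight"] = df["earned_premium"] / df["earned_premium"].sum()
df["contribution"] = df["weight"] * df["delta_lr"]

group_shift = df["contribution"].sum()
drivers = df.sort_values("contribution", ascending=False)

for row in drivers.itertuples():
    print(f"{row.lob_name}: {row.contribution:+.2%} of the {group_shift:+.2%} group shift")
```

Here Property dominates: despite Marine's larger percentage-point jump, Property's premium weight makes it the main driver — exactly the kind of local-versus-global distinction the questions above are probing.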
And that’s the beauty of hierarchical storytelling: it mirrors how organizations themselves think. Local evidence is provisional; global evidence is explanatory.
By encoding that logic, we move closer to a system that doesn’t just describe financial data, but understands how truth evolves across scales.
Toward Narrative Intelligence
At some point, a comment generator stops being a toy and starts hinting at something bigger — a framework for how machines might reason about data.
Today’s code only describes what changed. But a true analytical assistant would ask why. It would connect trends across dimensions, test alternative explanations, and refine its own hypotheses with each new dataset.
This is where narrative intelligence begins: the ability to form, test, and revise stories based on evidence.
In practical terms, extending our simple reporting script could look like this:
Causal context. Introduce external drivers — like exposure, pricing, or economic indicators — and use them to infer likely causes for a change.
Counterfactual reasoning. Simulate what if scenarios: what would the loss ratio have been if exposure had remained constant?
Agentic exploration. Let the system decide which data slices to explore next, using feedback from the analyst as reinforcement.
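The counterfactual idea can already be sketched with the columns from the earlier examples: hold earned premium at its prior-year level and recompute the ratio. The data values here are illustrative, and this simple version attributes the whole change to the denominator:

```python
import pandas as pd

# Illustrative two-year slice for one line of business
df = pd.DataFrame({
    "lob_name": ["Property", "Property"],
    "year": [2023, 2024],
    "earned_premium": [100.0, 90.0],
    "incurred_loss": [63.0, 60.3],
}).sort_values(["lob_name", "year"])

df["loss_ratio"] = df["incurred_loss"] / df["earned_premium"]

# Counterfactual: what would the loss ratio have been with last year's premium?
df["prior_premium"] = df.groupby("lob_name")["earned_premium"].shift()
df["cf_loss_ratio"] = df["incurred_loss"] / df["prior_premium"]

latest = df.iloc[-1]
print(
    f"Actual 2024 loss ratio: {latest.loss_ratio:.1%}; "
    f"with constant exposure it would have been {latest.cf_loss_ratio:.1%}."
)
```

With these numbers the actual ratio is 67.0% but the counterfactual is only 60.3% — evidence for a comment like "the increase was mainly due to lower earned premium," the very sentence this article set out to generate.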
Each step adds a layer of reasoning that brings the system closer to how analysts already think. The point isn’t to generate perfectly accurate narratives — it’s to accelerate the analytical dialogue between human and machine.
A well-designed system doesn’t replace curiosity; it scales it.
That’s the spirit behind what I’ll soon gather under a single banner: Wangari Labs — an open library of small, functional experiments in narrative analytics. Each notebook will explore a piece of the larger puzzle: how to make analytical systems more interpretive, conversational, and self-correcting.
Because in the end, what every analyst wants — and what every model should aim for — is the same: not more data, but better stories about what the data means.
The Bottom Line: Beyond Numbers
Analytics has always been about more than numbers. Every dataset hides a conversation — between what we think we know and what the evidence allows us to believe.
For decades, analysts have handled that conversation manually: scanning tables, spotting patterns, and drafting explanations. What we’re seeing now is the beginning of that process becoming codified — the translation of analytical reasoning into reproducible, inspectable steps.
Automated commentary isn’t the end goal. It’s a stepping stone toward systems that can reason, not just calculate. Systems that don’t drown us in dashboards, but help us listen to our data — and respond intelligently.
The true promise of narrative analytics is not to generate text, but to deepen understanding. To close the gap between measurement and meaning.
And perhaps, one day, the line between reporting and reasoning will blur entirely — when our analytical tools stop waiting for instructions and start asking their own questions.
That’s the future I’m building toward with the upcoming Wangari Labs repository — one small script at a time.


