Large Tabular Models Are Here. Are They Ready for Insurance?
What Nexus and similar models mean for financial reporting

The most consequential data in any insurance company does not live in a PDF, a chat log, or a photograph. It lives in tables. Policy registers, claims histories, mortality experience studies, premium ledgers, capital model outputs — all of it is structured, row-and-column data that actuaries have been wrestling with for decades using tools that, in many respects, predate the deep learning revolution.
When Fundamental AI emerged from stealth on 5 February 2026 with $255 million in funding and a model called Nexus, it was making a direct claim on that world: that a new class of foundation model, purpose-built for tabular data, can do in a single API call what previously required months of bespoke engineering. Nexus claims to operate at the predictive layer, not the spreadsheet layer — a distinction that immediately caught my attention, because this is exactly the modality we live in.
This is not a product review or a funding announcement. (I don't know Nexus's creators personally, nor do I have any connection to anyone who might have an interest in promoting the model.) This is a practitioner's analysis from the perspective of financial reporting and actuarial modeling.
The questions I want to answer are the following:
Does a model like Nexus actually help in insurance-grade tabular pipelines?
If yes, under what architecture? If no, why not?
What Nexus Actually Is
At its core, Nexus is a Large Tabular Model (LTM), a new class of AI foundation model trained natively on billions of real-world tables. Unlike Large Language Models (LLMs), which process text, Nexus is designed to ingest raw data tables directly and output predictions, such as regression estimates or class labels.
It is built to handle the specific mathematical properties of tabular data: the complex distributions of numerical data, the semantic meaning of categorical variables, and the fact that the order of columns doesn’t matter (permutation invariance).
One of the most important technical facts about Nexus is what it is not: it is not based on a standard transformer architecture. This matters because transformers, designed for sequential data like text, struggle with the non-sequential nature of tables and can have issues tokenizing numerical data. Fundamental’s proprietary architecture is designed to avoid these pitfalls. It is also deterministic: the same input will always produce the same output, a non-negotiable requirement for use in regulated financial reporting.
To understand the intellectual underpinning, it helps to look at the framework laid out in Fundamental’s whitepaper: the Fundamental Tabular Process (FTP). The FTP decomposes prediction into three components:
knowledge of the Real World (for example, mortality rates lie between 0 and 1),
the Local Reality of a specific organization (for example, a column named “Madrid” might refer to a product, not a city),
and the labeled context of the task at hand.
By pre-training a model to internalize these components, the whitepaper argues, you can dramatically reduce the number of examples required to learn a new task. This is the core promise of Nexus: a model with a built-in prior for how enterprise tables behave.
The Emerging LTM Landscape
Nexus is not alone in pursuing this idea. A small but rapidly growing ecosystem of tabular foundation models is taking shape:
TabPFN shows that a model trained on synthetic tabular tasks can outperform tuned XGBoost on small-to-medium datasets with zero task-specific training (a short usage sketch follows after this list).
TabICL extends the transformer paradigm to tables using in-context learning over synthetically generated causal data.
iLTM combines neural networks with tree-based methods, aiming to merge the strengths of deep learning and gradient boosting in a single architecture.
Each takes a different technical path, but they share a common goal: replacing bespoke feature engineering and task-specific model building with general-purpose predictors for structured data.
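To ground the TabPFN claim above, here is a minimal usage sketch of that open-source model (assuming the tabpfn and scikit-learn packages are installed; the dataset and split are arbitrary). The fit step performs no gradient training: the model conditions on the labeled examples in-context, which is what zero task-specific training means in practice.
# Minimal sketch: zero-training prediction with the open-source TabPFN model.
# Assumes `pip install tabpfn scikit-learn`; the dataset choice is arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = TabPFNClassifier()   # pre-trained prior over tabular tasks; no architecture search or tuning
clf.fit(X_train, y_train)  # conditions on the labeled examples; no gradient training happens here
print(f"Accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")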
Seen in this context, Nexus stands out primarily for its scale and its focus on real-world enterprise tables. Whether that advantage translates into consistently superior performance across domains remains to be independently validated — but conceptually, all of these efforts point toward the same shift: prediction on tabular data is becoming a foundation-model problem.
With that context in place, the natural next question is not whether LTMs work in principle — but whether they work for actuarial and financial reporting in practice.
Why This Matters for Insurance
Actuarial and financial tables are a unique beast. They are often wide and sparse, with a mix of numerical and categorical data, strong time dependencies, and small numbers of labeled examples for many prediction targets. Crucially, they operate under heavy regulatory constraints (Solvency II, IFRS 17) and have an extremely low tolerance for silent failure. Typical tasks include predicting claims development, modeling policy lapses, detecting anomalies in financial reports, and setting reserves.
In theory, Nexus targets exactly this class of problems. The promise is that it can reduce the need for manual feature engineering, automatically learn meaningful representations for categorical variables (like policy type or region), and transfer knowledge from its vast pre-training to work effectively even with the limited data available for a specific task.
For an actuarial team that spends a significant portion of its time cleaning data and building bespoke models, the appeal is obvious. Consider these use cases:
Claims frequency and severity modeling: A canonical prediction task where Nexus could potentially replace dozens of separate, hand-tuned GLMs with a single, more powerful model (a sketch of such a baseline GLM follows after this list).
Mortality experience studies: A model pre-trained on broad population mortality data could adapt to a specific insurance portfolio with far fewer observations than a model built from scratch.
Lapse and persistency modeling: A task where the predictive signals are often subtle and non-linear, making it a good fit for a deep learning model that can detect complex patterns.
In all these cases, Nexus offers the potential for faster, more accurate predictions with less manual effort. So far, so promising.
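To anchor the first of those use cases, the sketch below shows the kind of hand-tuned baseline an LTM would be benchmarked against: a Poisson GLM for claim frequency built with scikit-learn on synthetic data (the column names, coefficients, and data are all invented for illustration). Every step in it, from encoding to model choice to regularization, is exactly the manual work a model like Nexus promises to absorb.
# Minimal sketch of a traditional claims-frequency GLM baseline (synthetic data, invented columns).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import PoissonRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "region": rng.choice(["north", "south", "east", "west"], n),
    "vehicle_type": rng.choice(["hatchback", "saloon", "suv"], n),
})
# Synthetic Poisson claim counts with a made-up age effect
df["claim_count"] = rng.poisson(np.exp(-2.0 + 0.01 * (60 - df["driver_age"])))

# The manual steps an LTM promises to absorb: encoding, model choice, regularization
features = ["driver_age", "region", "vehicle_type"]
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["region", "vehicle_type"])],
        remainder="passthrough",
    )),
    ("glm", PoissonRegressor(alpha=1e-4)),
])
model.fit(df[features], df["claim_count"])
print(model.predict(df[features].head()))  # expected claim frequencies for the first few policies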
Why Nexus Alone Is Not Enough
This is where the hype meets the reality of an audited, regulated production environment. A model like Nexus, used in isolation, is insufficient for high-stakes actuarial and financial reporting pipelines for three critical reasons.
Reason 1: It Models Correlation, Not Causation
Nexus is a powerful prediction engine. It finds complex correlations in historical data to answer the question, “Given what I’ve seen before, what is most likely to happen next?” It gives you P(y|X). It does not, however, build a causal model of the world. It cannot, by itself, answer the questions that are often most important to a business:
“What will happen to lapse rates if we change our pricing structure?”
“How would our reserves be impacted if there were a new mortality shock?”
“What is the expected effect of this underwriting intervention on claims costs?”
These are questions about interventions and counterfactuals. They require a causal model, not just a predictive one. Using a purely predictive model to make decisions is a well-known path to failure, as it can easily learn spurious correlations that break down when the system changes.
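A toy simulation makes the point concrete. In the synthetic example below (every variable and coefficient is invented), a predictive model learns that a discount flag is associated with low lapse rates, but only because discounts were historically offered to loyal customers. When the insurer intervenes and gives the discount to everyone, the learned association no longer describes what actually happens.
# Toy illustration (all variables invented): a correlation that breaks down under intervention.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000

# Hidden driver: customer loyalty. Historically, discounts were targeted at loyal customers.
loyalty = rng.binomial(1, 0.5, n)
discount = rng.binomial(1, 0.2 + 0.6 * loyalty)  # discount assignment depends on loyalty
lapse_prob = 0.30 - 0.20 * loyalty               # loyalty, not the discount, drives lapses
lapse = rng.binomial(1, lapse_prob)

# A purely predictive model sees only the discount flag and learns a spurious protective effect.
model = LinearRegression().fit(discount.reshape(-1, 1), lapse)
print("Apparent effect of discount on lapse rate:", round(model.coef_[0], 3))  # spuriously negative

# Intervention: give the discount to everyone. Loyalty, and hence lapse behaviour, is unchanged.
lapse_after_intervention = rng.binomial(1, lapse_prob)
print("Model's predicted lapse rate under the intervention:", round(float(model.predict([[1]])[0]), 3))
print("Actual lapse rate after the intervention:", round(lapse_after_intervention.mean(), 3))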
Reason 2: The Governance and Reliability Gap
Actuarial reporting requires deterministic reproducibility, clear audit trails, rigorous model versioning, and explainable drivers for all key assumptions. Foundation models, as a class, present challenges here. They can drift silently as they are retrained, their predictions are based on opaque priors learned from billions of data points, and their internal logic is not transparent.
While Nexus’s deterministic nature is a huge plus, the “black box” problem remains. Without a heavy layer of governance and explainability tooling (like SHAP or LIME) wrapped around it, deploying Nexus directly into a regulated reporting process looks like a compliance nightmare.
Reason 3: The “Local Reality” Problem
Fundamental’s own whitepaper acknowledges this challenge. The model’s pre-trained “Real World” knowledge might conflict with the “Local Reality” of a specific company. A column name, a product code, or a regional identifier can have a unique meaning within a single enterprise.
This implies that even with a powerful foundation model, you still need an enterprise-specific adaptation and validation layer to ensure the model is interpreting your data correctly. The dream of a completely zero-shot, plug-and-play solution for complex enterprise data remains just that—a dream.
An Architecture for Using LTMs in Practice
So, if Nexus isn’t a standalone solution, how should we use it? The most robust approach is to treat it not as an autonomous brain, but as a powerful component in a larger, multi-layered system. I would architect it like this:
Layer 1: Nexus as a Tabular Representation Engine.
At the base of the stack, Nexus acts as a world-class feature extractor and baseline prediction generator. Its job is to ingest raw tables and produce powerful latent representations (embeddings) and initial predictions. It handles the heavy lifting of understanding the data’s structure, freeing up the layers above it to focus on higher-level tasks.
Layer 2: Agentic Orchestration.
Wrapping Nexus is an agentic layer responsible for control and validation. This agent would handle tasks such as comparing Nexus's predictions against a simpler, more transparent baseline model (a GLM or XGBoost, say); running sanity checks on the outputs; flagging anomalous predictions for human review; and routing different types of tasks to the appropriate model. This layer provides the critical governance and control that is missing from the foundation model itself.
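A stripped-down version of that validation step might look like the following sketch; the thresholds, example numbers, and routing rule are placeholders rather than anything prescribed by Nexus itself.
# Sketch of the validation step in an orchestration layer (thresholds, numbers, and models are placeholders).
import pandas as pd

def validate_predictions(nexus_pred: pd.Series, baseline_pred: pd.Series,
                         rel_tolerance: float = 0.25) -> pd.DataFrame:
    """Compare LTM predictions to a transparent baseline and flag large disagreements for review."""
    report = pd.DataFrame({"nexus": nexus_pred, "baseline": baseline_pred})
    report["rel_diff"] = (report["nexus"] - report["baseline"]).abs() / report["baseline"].abs().clip(lower=1e-9)
    report["needs_review"] = report["rel_diff"] > rel_tolerance
    return report

# Example usage with made-up severity predictions:
nexus_pred = pd.Series([1200.0, 950.0, 4300.0, 800.0])
baseline_pred = pd.Series([1150.0, 1000.0, 2100.0, 820.0])
report = validate_predictions(nexus_pred, baseline_pred)
print(report)
print(f"{report['needs_review'].sum()} of {len(report)} predictions routed to human review")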
Layer 3: The Causal & Decision Layer.
At the very top of the stack sits the causal reasoning and decision-making engine. This is where business logic, regulatory constraints, and causal models live. This layer takes the representations and predictions from the layers below and uses them to simulate interventions, evaluate counterfactuals, and make auditable business decisions. It answers the “what if” questions that the predictive layer cannot.
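As a schematic illustration of what lives in this layer, the sketch below pairs a lapse rate from the predictive layer with an explicit, documented elasticity assumption to answer a pricing "what if" question; every number in it is invented, and a real implementation would rest on a properly validated causal or structural model.
# Schematic decision-layer sketch: an explicit, documented assumption about how lapses respond
# to a premium change (all figures invented for illustration).
def lapse_rate_under_intervention(baseline_lapse: float,
                                  premium_increase_pct: float,
                                  lapse_elasticity: float = 0.4) -> float:
    """Assumed structural relationship: each 1% premium increase raises lapses by
    lapse_elasticity x 1% in relative terms, capped at 100%. The elasticity is a governed,
    documented assumption, not something inferred silently from historical correlations."""
    adjusted = baseline_lapse * (1.0 + lapse_elasticity * premium_increase_pct / 100.0)
    return min(adjusted, 1.0)

baseline_lapse = 0.08            # e.g. supplied by the predictive layer below
for increase in (0, 5, 10, 20):  # candidate pricing interventions to evaluate
    print(f"+{increase}% premium -> expected lapse rate "
          f"{lapse_rate_under_intervention(baseline_lapse, increase):.3f}")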
In this architecture, Nexus becomes a perception layer, not the entire brain. It dramatically accelerates the predictive part of the workflow, but the crucial tasks of governance, causal reasoning, and decision-making remain separate. This is a subtle but critical shift in how we think about deploying these models.
A Practical (Conceptual) Demo
Because Nexus is currently delivered through AWS Marketplace and provisioned inside customer cloud environments, there is no publicly available SDK that can be used for an immediate local demo. Instead of walking through a full AWS setup, the example below illustrates the conceptual flow of calling a deployed Nexus endpoint from Python using standard AWS tools. This is meant to convey the structure of the integration rather than serve as executable code.
To make this concrete, consider a simple claims severity prediction task. Imagine a general insurer with a historical claims table containing policy attributes (age, region, vehicle type, etc.) alongside observed claim severities.
In a traditional workflow, this would require extensive preprocessing: categorical encoding, feature engineering, model selection, and validation. With Nexus, the core interaction is much simpler. Once the model is provisioned as an endpoint in your AWS environment, you pass a raw tabular dataset to the service and receive predictions directly.
The example below illustrates this conceptual flow: loading a local claims dataset, sending the feature columns to a deployed Nexus endpoint, and receiving predicted severities in return. This is not production code, but a schematic view of how Nexus fits into a Python-based actuarial or analytics pipeline.
# Conceptual example (API shape simplified for clarity)
import io
import boto3
import pandas as pd
# 1. Load your local tabular data
# Columns might include: policy_age, region, claim_count, vehicle_type, etc.
df = pd.read_csv("claims_data.csv")
# 2. Isolate the data for prediction (without the target variable)
X = df.drop(columns=["claim_severity"])
# 3. Invoke the Nexus model via its AWS SageMaker endpoint
sagemaker_runtime = boto3.client("sagemaker-runtime")
response = sagemaker_runtime.invoke_endpoint(
    EndpointName="nexus-production-endpoint-v1",  # Hypothetical endpoint name
    ContentType="text/csv",
    Body=X.to_csv(index=False),
)
# 4. Parse the predictions from the response
predictions_csv = response["Body"].read().decode("utf-8")
predicted_severity = pd.read_csv(io.StringIO(predictions_csv))
# 5. Compare Nexus predictions to a local baseline (e.g., XGBoost)
# (Code for training and running XGBoost would go here)
print("Nexus Predictions:")
print(predicted_severity.head())

This workflow is powerful because it bypasses the entire feature engineering step. You send the raw dataframe to the endpoint and get back a prediction. To make this production-ready, you would wrap it in the agentic and causal layers described above, adding validation, explainability (e.g., using SHAP on the black-box endpoint), and comparison to baselines.
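For the explainability piece specifically, one common pattern is to treat the deployed endpoint as a black-box function and run a model-agnostic explainer around it. The sketch below does this with shap.KernelExplainer; the endpoint name is the same hypothetical one as above, the feature columns are assumed to be numeric (encode categoricals first), and KernelExplainer is approximate and slow, so you would explain only a sample of rows.
# Sketch: model-agnostic explanations around the black-box endpoint (hypothetical endpoint name;
# assumes the feature columns are numeric, so encode categoricals before running this).
import io
import boto3
import pandas as pd
import shap

sagemaker_runtime = boto3.client("sagemaker-runtime")
X = pd.read_csv("claims_data.csv").drop(columns=["claim_severity"])

def nexus_predict(data):
    """Wrap the deployed endpoint as a plain prediction function (SHAP passes arrays of rows)."""
    frame = pd.DataFrame(data, columns=X.columns)
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName="nexus-production-endpoint-v1",  # Hypothetical, as in the example above
        ContentType="text/csv",
        Body=frame.to_csv(index=False),
    )
    return pd.read_csv(io.StringIO(response["Body"].read().decode("utf-8"))).iloc[:, 0].to_numpy()

background = shap.sample(X, 50)                  # small background set keeps KernelExplainer tractable
explainer = shap.KernelExplainer(nexus_predict, background)
shap_values = explainer.shap_values(X.head(10))  # explain a handful of rows; each row costs many calls
print(pd.DataFrame(shap_values, columns=X.columns))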
What’s Coming Next
Taken together, models like Nexus signal a trend: the commoditization of pure prediction.
As foundation models for tabular data become more powerful and accessible, simply building an accurate predictive model will cease to be a competitive differentiator. The value will move up the stack.
Differentiation will increasingly come from:
Causal reasoning: the ability to move beyond correlation and make robust decisions under intervention.
Governance: deploying models in ways that are reliable, auditable, and compliant with regulatory expectations.
Agentic control: building systems that can orchestrate multiple models and tools to solve complex business problems end to end.
In this framing, Nexus is best understood not as a complete solution, but as a powerful new component: a perception layer for structured data. On its own, it predicts. In combination with causal modeling, validation pipelines, and agentic workflows, it becomes part of a decision-making system.
The winners in this new era won’t be the companies that simply predict better — they’ll be the companies that decide better.
Nexus and its contemporaries mark the beginning of that shift.
If you’re experimenting with foundation models on financial or actuarial data — or thinking about how to integrate them into regulated decision pipelines — I’d love to compare notes.


