
Summary
Enterprise AI investment continues to grow. Figures and scope vary across analyst reports, from IDC to MIT NANDA, but estimates range from tens to hundreds of billions of dollars.
However, results remain uneven. McKinsey & Company reports that only a small proportion of organisations attribute more than 5 percent of EBIT (earnings before interest and taxes) to AI, while many report limited impact at scale. MIT NANDA reports a striking finding: 95% of organisations are getting zero return from their generative AI investments. From hype to disillusionment, the line is thin.
The reality check is the gap between these massive investments and the value they return. The CFO and the boardroom apply the same rule to every project: what ROI are we getting from our AI strategy? For the CDAO, an AI strategy that cannot show its returns will be treated as a cost centre. The revolving door proves it: CDAO tenure is notoriously short for those unable to demonstrate impact.
The economics are harder than they look. Every additional model in production carries a build cost, a maintenance cost and an expectation of a measurable business outcome. Most organisations manage that equation across a handful of use cases. The challenge is running it across dozens. For many organisations, the limiting factor is no longer the data. It is the cost and effort of turning it into models at scale. This is where tabular AI comes in.
Most enterprises already hold large volumes of structured operational data, also known as tabular data. For those still working on data quality, governance and accessibility, that remains the constraint. For those that have addressed it, the question is no longer whether the data is there. It is whether the team can move fast enough from a viable use case to something running in production.
The CDAO who has invested in data foundations is sitting on an asset most of the business has not priced. Every structured dataset is a potential prediction problem. Every prediction problem that reaches production has a direct P&L (profit and loss) impact.
What is tabular data? Tabular data is structured information organised in rows and columns: the records living in your ERP, CRM, databases, data warehouses and spreadsheets. It is the primary language of business operations, and it drives the prediction tasks that generate measurable business impact.
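As a minimal sketch (the table and all column names are hypothetical), this is what such data looks like in Python:

```python
import pandas as pd

# A hypothetical slice of tabular data: each row is a record, each column a
# field, exactly as it would sit in a CRM table or a warehouse extract.
customers = pd.DataFrame({
    "customer_id":     [1001, 1002, 1003],
    "tenure_months":   [34, 5, 18],
    "monthly_spend":   [212.50, 48.90, 131.00],
    "support_tickets": [0, 3, 1],
    "churned":         [0, 1, 0],  # the column a model would learn to predict
})
print(customers)
```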

That data already exists, across every industry. Core systems capture it every day.
Retailers hold vast amounts of historical sales data across their store networks: sales by Stock Keeping Unit (SKU), local and national promotions, local events, all available on a timely basis for each product and category. The objective is to use this data to forecast demand accurately, optimise stock levels, reduce overstock costs and prevent stockouts. Their catalogues are wide, with new references entering and leaving daily from a long list of suppliers. Accurate product classification is essential to correct categorisation across the supply chain, so that each product is available at the right time and place, including online. This flows directly into inventory forecasting and transport optimisation.
In manufacturing, sensor data records the conditions that precede equipment failure, from vibration signatures and thermal fluctuations to acoustic emissions, pressure changes and power consumption anomalies. Models can learn from that historical data to predict equipment failures days in advance. In heavy industry, unplanned downtime can cost hundreds of thousands of dollars per hour.
In financial services, transaction and behavioural data underpin credit scoring and fraud detection.
Insurers have decades of claims data they can use to produce risk scores for new policies.
These examples point to the same pattern. The data required for additional prediction problems already exists, sitting in structured, tabular form across core business systems in every industry. The question is how to operationalise more of them, faster.
Even when a prediction opportunity is identified, moving it to production requires significant effort. Standard approaches to structured data modelling (gradient boosting methods such as XGBoost and LightGBM) require labelled historical data, feature engineering, iterative model development, and dedicated deployment infrastructure. Data preparation alone accounts for the majority of a data scientist's time. Model validation and deployment add to that. Every new use case restarts the same cycle from scratch. Under these economics, organisations concentrate resources on a limited number of high-priority projects. The rest remain on the roadmap.
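To make that cycle concrete, here is a compressed sketch of the traditional workflow with XGBoost. The dataset, file name and feature names are hypothetical, and real projects repeat the feature-engineering and tuning steps many times over:

```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# 1. Data preparation and feature engineering: typically the bulk of the effort.
df = pd.read_csv("churn_history.csv")  # hypothetical labelled extract
df["spend_per_tenure_month"] = df["monthly_spend"] / df["tenure_months"].clip(lower=1)

X = df.drop(columns=["customer_id", "churned"])
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 2. Iterative model development: these hyperparameters are settled over many runs.
model = xgb.XGBClassifier(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

# 3. Validation, before the deployment and monitoring work even begins.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```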
The burden does not end at deployment. Models must be monitored, retrained as data distributions shift, and re-engineered when upstream systems change. Few organisations are equipped for this at scale: only 45% of high-maturity organisations keep AI projects running for three or more years, and just 20% of lower-maturity ones manage the same. Sustaining a production model is a capability most have not yet systematised. As the number of models grows, maintenance competes directly with new development, and the roadmap stalls.
This is where tabular foundation models (TFMs) come into play. They do not remove the need for data preparation or problem definition. But once the data is ready, they compress the model development and iteration cycle. Use cases that previously failed a cost-benefit assessment because the build effort was too high may now be viable. Roadmaps built before TFMs existed may underweight them; the complexity scoring that kept those use cases off the roadmap should be revisited.
TFMs also let companies build predictive models on small or specialised datasets that were previously too limited to justify an AI project. That alone reopens a category of use cases most organisations have written off.
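For contrast, a sketch of that compressed cycle using TabPFN, one open-source TFM with a scikit-learn-style interface (the exact API varies across TFM libraries and versions); a small public dataset stands in for a narrow enterprise table:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier  # pip install tabpfn

# A small public dataset (569 rows) stands in for a narrow enterprise table.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# No feature engineering, no hyperparameter search: the pretrained model
# adapts to this dataset at fit time.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```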

For CDAOs, this points to four priorities.
First, auditing their dormant data. Mapping every structured dataset in the organisation and asking: what prediction problem could this answer? Churn prediction, demand forecasting, fraud detection, supply chain predictive analytics, risk scoring, quality control, conversion optimisation. The list in most companies is longer than anyone expects.
Second, revisiting use cases previously dismissed as data-limited. Not every high-value prediction requires millions of rows. TFMs can extract sophisticated insights from narrow or specialised datasets that traditional machine learning would have rejected as insufficient.
Third, reviewing their roadmap and running real pilots. Finding a use case where a 10% improvement in accuracy translates to millions of dollars, and benchmarking a TFM against the current approach, whether that is a legacy model, a rules engine, or a human judgment call; a sketch of that arithmetic follows this list.
Fourth, going back to their complexity scoring. Use cases that were deprioritised six or twelve months ago because the build cost was too high are worth a second look. The economics have changed.
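As a sketch of the pilot arithmetic behind the third priority, with every figure hypothetical:

```python
# Hypothetical pilot arithmetic: translating an accuracy delta into P&L terms.
baseline_accuracy = 0.80        # current approach: legacy model, rules, or judgment
tfm_accuracy      = 0.88        # candidate TFM; 0.88 is a 10% relative gain over 0.80

decisions_per_year = 2_000_000  # e.g. stocking or credit decisions made on the model
cost_per_error     = 25.0       # average cost of one wrong decision, in dollars

errors_avoided = (tfm_accuracy - baseline_accuracy) * decisions_per_year
annual_value = errors_avoided * cost_per_error
print(f"Errors avoided: {errors_avoided:,.0f}, value: ${annual_value:,.0f}/year")
# With these illustrative numbers: 160,000 fewer errors, $4,000,000 a year.
```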
A large share of enterprise value sits in structured operational data: decades of transactions, sensor readings, customer interactions and operational records. Organisations have been sitting on prediction problems that were identified but never built. Not because the data was not there, but because the cost of turning each dataset into a working model made it difficult to justify more than a handful at a time.
Tabular AI changes part of that equation. For the CDAO, this shifts the conversation from AI as a cost centre to AI as a direct contributor to the P&L, one prediction at a time.
At Neuralk AI, we build tabular foundation models for structured data prediction. We work with enterprises in finance, industry, and beyond to deploy predictive AI that delivers measurable results on the data that actually runs their business. If you're exploring how TFMs can fit into your AI strategy, get in touch.
References
[1] IDC — Worldwide AI and Generative AI Spending Guide, 2025
[2] MIT NANDA — The GenAI Divide: State of AI in Business, 2025
[3] McKinsey & Company — The State of AI in 2025: Agents, Innovation, and Transformation, November 2025
[4] MIT Sloan Management Review — "Chief Data Officers Don't Stay in Their Roles Long. Here's Why"
[5] Forbes — "Breaking The CDAO Curse: How Companies Are Finally Getting It Right," January 2025
[6] Gartner — "Lack of AI-Ready Data Puts AI Projects at Risk," February 2025
[7] Schmetz, A. & Kampker, A. — "Inside Production Data Science," AI, MDPI, 2024