Five Trends in AI and Data Science for 2026
From the AI bubble to GenAI’s rise as an organizational tool, these are the 2026 AI trends to watch. Explore new data and advice from AI experts.
Organizations tend to change much more slowly than AI technology does these days, which means that forecasting enterprise adoption of AI is a bit easier than predicting technology change in this, our third year of making AI predictions. Neither of us is a computer or cognitive scientist, so we generally stay away from prognostication about AI technology or the specific ways it will rot our brains (though we do expect that to be an ongoing phenomenon!).
However, AI seems to have moved beyond being just a technology to becoming the primary force driving economic growth and the stock market. We’re also neither economists nor investment analysts, but that won’t stop us from making our first prediction.
Here are the emerging 2026 AI trends that leaders should understand and be prepared to act on.
1. The AI bubble will deflate, and the economy will suffer.
Last year, the elephant in the AI room was the rise of agentic AI (and it’s still clomping around; see below). This year, it’s the AI bubble that has monopolized discussion: Is there one? If so, when will it burst? Will the money rush out quickly or slowly? And what are the implications for the broader economy and the ongoing use of AI?
Both of us have been around for a while, and we remember the deflation of the dot-com bubble. It’s hard not to see the similarities to today’s situation, including the sky-high valuations of startups, the emphasis on user growth (remember “eyeballs”?) over profits, the media hype, the expensive infrastructure buildout, etcetera, etcetera.
Will this bubble burst? It seems inevitable to us that it will, and probably soon. It won’t take much for it to happen: a bad quarter for an important vendor, a Chinese AI model that’s much cheaper and just as effective as U.S. models (as we saw with the first DeepSeek “crash” in January 2025), or a few AI spending pullbacks by large corporate customers.
We hope the deflation will be gradual, which would give the overall stock market time to adjust and investors time to move some of the highly inflated AI vendors out of their portfolios. A gradual decline would also give all of us a breather: more time for companies to absorb the technologies they already have, and for AI users to seek solutions that don’t require more gigawatts than all the lights in Manhattan.
Both of us subscribe to the AI variation upon Amara’s Law, which states, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” We think that AI is and will remain an important part of the global economy but that we’ve succumbed to short-term overestimation. The AI industry and the world at large would probably benefit from a small, slow leak in the bubble.
2. More all-in adopters will create ‘AI factories’ and infrastructure.
Companies that are all in on AI as an ongoing competitive advantage are putting infrastructure in place to speed up the pace of AI models and use-case development. We’re not talking about building big data centers with tens of thousands of GPUs; that’s generally being done by vendors. But companies that use rather than sell AI are creating “AI factories”: combinations of technology platforms, methods, data, and previously developed algorithms that make it fast and easy to build AI systems.
Leading banks adopted this approach several years ago. They had a lot of data and a lot of potential applications in areas like credit decisioning and fraud prevention. For example, BBVA opened its AI factory in 2019, and JPMorgan Chase created its factory, called OmniAI, in 2020. At the time, the focus was only on analytical AI.
But now the factory movement involves non-banking companies and other forms of AI. We described AI factories in a consumer products company (Procter & Gamble) and a software company (Intuit). Both companies, and now the banks as well, are emphasizing all forms of AI: analytical, generative, and agentic. Intuit calls its factory GenOS — a generative AI operating system for the business.
Companies that don’t have this kind of internal infrastructure force their data scientists and AI-focused businesspeople to each replicate the hard work of figuring out what tools to use, what data is available, and what methods and algorithms to employ. Not being able to build on an established foundation makes it both more expensive and more time-consuming to build AI at scale.
3. GenAI will become more of an organizational resource.
If 2025 was the year of realizing that generative AI has a value-realization problem, 2026 will be the year of doing something about it (which, we must confess, we predicted with regard to controlled experiments last year, and they didn’t really happen much). One specific approach to addressing the value issue is to shift from deploying GenAI as a primarily individual tool to managing it as an enterprise-level resource.

When GenAI became broadly available, it was so easy for almost every businessperson to use that many companies simply made it available to anyone who was interested. In many cases, the primary tool set was Microsoft’s Copilot, which does make it easier to generate emails, written documents, PowerPoints, and spreadsheets. However, those types of uses have generally produced incremental, and mostly unmeasurable, productivity gains. And what are employees doing with the minutes or hours they save by handing such tasks to GenAI? Nobody seems to know.
The alternative is to think about generative AI primarily as an enterprise resource for more strategic use cases. Sure, those are typically more difficult to build and deploy, but when they succeed, they can offer considerable value. Think, for example, of using GenAI to support supply chain management, R&D, and the sales function rather than to speed up writing a blog post. That’s exactly the trade that Johnson & Johnson has made, for example. Instead of pursuing and vetting 900 individual-level use cases, the company has picked a handful of strategic projects to emphasize.
There is still a need for employees to have access to GenAI tools, of course; some companies are beginning to view this as an employee satisfaction and retention issue. And some bottom-up ideas are worth turning into enterprise projects. Big Pharma company Sanofi, for example, has created a Shark Tank-style competition for front-line employees to propose ideas for AI projects that the company will fund as enterprise-level initiatives.
4. Agentic AI will still be overhyped but will likely be valuable within five years.
Last year, like virtually everyone else, we predicted that agentic AI would be on the rise. Although we acknowledged that the technology was being hyped and had some challenges, we underestimated the degree of both. Agents turned out to be the most-hyped trend since, well, generative AI. GenAI now resides in the Gartner trough of disillusionment, which we predict agents will fall into in 2026.
What’s the problem with agents? They just aren’t generally ready for prime-time business. Various experiments by vendor and university researchers — including Anthropic and Carnegie Mellon — have found that AI agents make too many mistakes for businesses to rely on them for any process involving big money. Then there are the cybersecurity issues of agents (prompt injection, in particular) and their tendency to become deceptive and misaligned with human values and objectives.
That doesn’t mean, however, that agentic AI won’t get better within the next few years. Most of its problems can be ironed out one way or another. We are confident that AI agents will handle most transactions in many large-scale business processes within, say, five years (which is more optimistic than AI expert and OpenAI cofounder Andrej Karpathy’s prediction of 10 years).
Right now, companies should begin to think about how agents can enable new ways of doing work. They should start building some trusted agents that can be reused across the organization and pilot some interorganizational agents with cooperative suppliers or customers. Companies can also build the internal capabilities to create and test agents involving generative, analytical, and deterministic AI. Successful agentic AI will require all of the tools in the AI toolbox.
5. Debate will continue over who should manage AI.
Randy’s latest survey of data and AI leaders in large organizations — the 2026 AI & Data Leadership Executive Benchmark Survey, conducted by his educational firm, Data & AI Leadership Exchange — uncovered some good news for data and AI management. Virtually all of the respondents were positive about AI’s role, saw data and AI investments as a top priority, and planned to spend more on them. Almost all agreed that AI has led to a greater focus on data. Perhaps most impressive, the percentage of respondents who believe that the chief data officer (with or without analytics and AI in the remit) is a successful and established role in their organizations rose more than 20 points over last year’s survey (and those of previous years), to 70%. Only 3% believe that the role has been a failure. In short, support for data, for AI, and for the leadership role that manages them is at a record high in large enterprises.
The only challenging structural issue in this picture is who should manage AI and to whom they should report in the organization. Not surprisingly, a growing percentage of companies have named chief AI officers (or an equivalent title); this year, the figure is up to 39%. The problem is that there is little consensus on where that job should report. Only 30% report to a chief data officer (where we believe the role belongs); other organizations have AI reporting to business leadership (27%), technology leadership (34%), or transformation leadership (9%).
We think it’s likely that the diverse reporting relationships are contributing to the widespread problem of AI (particularly generative AI) not delivering sufficient value. This year’s survey data does indicate that more companies (39%, up from 24% last year and less than 5% two years ago) have implemented AI in production at scale, which is a prerequisite for substantial value. Progress is being made in value realization from AI, but it’s probably not enough to justify the high expectations of the technology and the high valuations of its vendors. Perhaps if the AI bubble does deflate a bit, fewer company leaders will be vying to own the technology.