While the industry demands ROI within 6 months, traditional security frameworks are failing, and Europe is waking up late to its technological dependency. Welcome to the year where AI must move from promises to facts.
If 2025 was the year of unbridled hype, 2026 will be the year of the reckoning. And not just a financial reckoning, but real accountability. After three years of massive experimentation and grandiloquent promises, investors have lost patience. They have put nearly $2 trillion on the table, and now they want to see numbers, not PowerPoint presentations.
Ursula Burns, Chairwoman of Teneo, summarizes it with brutal clarity: “Investors are becoming increasingly impatient for the ROI on these AI investments, creating a tension that will be important to watch in the coming year.” And when she says “tension,” she is being polite. The reality is that 53% of investors expect a return on investment in six months or less, while only 16% of CEOs believe they can meet that timeline. There is your tension.
But here is the problem that no one wants to say out loud: while everyone is desperately chasing ROI, there is an elephant in the room that everyone pretends not to see. It is called security, and it is about to become the ultimate battlefield of 2026.
Security Theater: When Certificates Mean (Almost) Nothing
Let’s talk about uncomfortable numbers. In 2024, 23.77 million secrets were leaked through AI systems, 25% more than the previous year. Companies with impeccable ISO 27001 certifications, audits passed with high marks, and security programs that look perfect on paper are being compromised. And we are not talking about resource-starved startups, but about organizations that invest millions in cybersecurity.
In December 2024, the Ultralytics AI library was compromised and malicious code was injected for cryptocurrency mining. In August, malicious Nx packages leaked 2,349 credentials from GitHub, cloud, and AI systems. ChatGPT had vulnerabilities that allowed unauthorized extraction of memory data. And here comes the part that should keep us all awake at night: these incidents occurred despite the organizations having comprehensive security programs that passed audits and met compliance requirements.
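Incidents like these often trace back to credentials sitting in plain text where automation can find them. As a hedged illustration (these simplified patterns are my own assumptions, not the rules any real scanner or these incidents involved), a minimal secret scan can be sketched with a few regular expressions:

```python
import re

# Illustrative patterns only; production secret scanners maintain
# hundreds of provider-specific rules and entropy checks.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret)['\"]?\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

# Hypothetical input, for demonstration only.
sample = 'config = {"api_key": "sk_live_0123456789abcdef0123"}\nAKIAABCDEFGHIJKLMNOP'
print(scan_for_secrets(sample))
```

Even a crude scan like this, run in CI before every commit, catches the lowest-hanging fruit that later becomes an incident statistic.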
The problem? Traditional frameworks like ISO 27001 and NIST CSF 2.0 were not designed for a world where AI is the critical asset. They were built to protect servers, databases, and networks. No one thought of prompt injection when they wrote those standards. No one contemplated model poisoning. No one imagined that someone could corrupt an AI model during its own authorized training process, making it learn malicious behavior “legitimately.”
Rob Witcher, co-founder of Destination Certification, says it bluntly: “The controls that organizations rely on were not built with AI-specific attack vectors in mind.” And therein lies the problem. We are playing chess with checkers rules, and we are surprised when we lose.
But there is something even more concerning: many security teams cannot even inventory the AI systems they have in their environment. It is impossible to protect what you do not know exists. Entire departments are deploying models without IT or Security finding out, creating what the industry calls “Shadow AI.” Meanwhile, the EU AI Act is starting to impose real penalties for violations. Compliance with ISO/NIST is no longer enough. Companies are legally exposed despite their certifications.
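A pragmatic first step against Shadow AI is simply inventorying where AI dependencies appear. As a minimal sketch (the package list is an illustrative assumption, not a complete catalog; a real inventory would also cover containers, API traffic, and model registries), a script can flag AI SDKs declared in Python dependency manifests:

```python
# Known AI-related packages to flag. Illustrative, not exhaustive.
AI_PACKAGES = {
    "openai", "anthropic", "transformers", "torch",
    "langchain", "llama-cpp-python", "vllm",
}

def find_ai_dependencies(manifest_text: str) -> set[str]:
    """Return the known AI packages declared in a requirements.txt body."""
    found = set()
    for line in manifest_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The package name ends at the first environment marker,
        # version specifier, or extras bracket.
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            name = name.split(sep)[0]
        found.add(name.strip().lower())
    return found & AI_PACKAGES

manifest = "requests==2.31.0\nopenai>=1.0\ntransformers[torch]~=4.40\n"
print(sorted(find_ai_dependencies(manifest)))
```

Run across every repository in the organization, even this crude check turns "we have no idea which teams use AI" into a concrete list to govern.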
Dan Ives of Wedbush summarized it perfectly: “This is the year where cybersecurity meets AI.” And the question is no longer “Did we pass the audit?” but “Can we detect and stop attacks as they happen? Do we have real control over our models?”
China: While the West Debates, They Build
And while the West debates frameworks and compliance, someone has already solved the problem another way. China did not wait for NVIDIA to resolve its GDDR7 memory crisis, a shortage projected to cut GPU production by 30-40% during the first half of 2026. It did not wait for prices to drop. It simply changed the rules of the game.
US chip export restrictions to China were designed to slow its AI development. The Chinese response was simple and devastating: “We don’t need your H100s.” And they proved it with facts, not promises.
DeepSeek V3.2 reached 99.2% on the HMMT 2025 benchmark, surpassing Google’s Gemini 3 Pro which achieved 97.5%. In advanced mathematics, the Chinese model is practically perfect. And here comes the interesting part: DeepSeek cost $5.5 million to train. Competing models from OpenAI and Google exceed $100 million. That is the difference between scaling vertically (more GPUs, more power, more money) and scaling horizontally (more intelligence, fewer resources).
Alibaba with its Qwen family has built the largest open-source ecosystem in the world. More than 400 million downloads. 140,000 derivative models. Developers in Japan building customer service chatbots. Car manufacturers integrating assistants. And everything optimized for local implementation from the start, not as an afterthought.
Qwen3-Next-80B-A3B is an 80-billion-parameter model that only activates 3 billion per token thanks to its Mixture-of-Experts architecture. It has the power of a giant model with the speed and cost of a small one. It can handle contexts of up to 262,000 tokens, scalable to one million. And it runs on hardware that does not require a data center the size of a football stadium.
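The Mixture-of-Experts idea behind such models is that a router activates only a few expert sub-networks per token, so compute scales with active parameters rather than total parameters. The toy routing below is a simplified sketch with made-up dimensions and weights, not the actual Qwen architecture:

```python
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, router_weights, top_k=2):
    """Route one token to its top_k experts and mix their outputs.

    experts:        list of callables (the expert sub-networks)
    router_weights: one scoring vector per expert
    """
    # Router: one score per expert (dot product with the token).
    scores = [sum(w * x for w, x in zip(ws, token)) for ws in router_weights]
    probs = softmax(scores)
    # Only the top_k experts run; the rest stay inactive. This is why
    # an 80B-parameter MoE can run with ~3B active parameters per token.
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    output = [
        sum((probs[i] / norm) * experts[i](token)[d] for i in top)
        for d in range(len(token))
    ]
    return output, top

# Four toy "experts": each just scales the token differently.
experts = [lambda t, s=s: [s * x for x in t] for s in (1.0, 2.0, 0.5, 3.0)]
router_weights = [[0.1, 0.0], [0.9, 0.2], [0.0, 0.1], [0.2, 0.8]]

output, active = moe_forward([1.0, 0.5], experts, router_weights, top_k=2)
print(active)  # indices of the only experts that actually computed anything
```

The design trade-off is exactly the one the article describes: total capacity grows with the number of experts, while per-token cost grows only with `top_k`.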
In July 2025, China surpassed the United States in cumulative open-source model downloads, according to a16z. By the end of 2025, all top open-source models came from Chinese companies: MiniMax, Alibaba, DeepSeek, Zhipu AI. And the strategic lesson is clear: local AI is no longer a “plan B.” It is a top-tier competitive strategy.
Europe: The Late Awakening
And Europe, where is Europe in all this? Waking up late, but at least waking up (or so we believe).
In December 2025, Airbus dropped a bombshell. Catherine Jestin, its Executive VP of Digital, announced an investment of more than 50 million euros to migrate its critical systems to a “European sovereign cloud.” ERP, manufacturing systems, CRM, product lifecycle management, aircraft blueprints, technological know-how, classified military documents. Everything out of the reach of the US CLOUD Act, that American law that allows the US government to access data from American companies regardless of where the servers are physically located.
But here comes the brutal admission that should worry us all: the Airbus board of directors estimates the probability of finding a technologically capable European provider at only 80%. A 20% probability of failure. Airbus, with practically unlimited resources, admits that Europe might not have the technical capacity to support this migration.
Twenty years of technological convenience have left Europe without the capacity to protect its own critical assets. Google Workspace, Microsoft Excel, AWS. All American. All subject to the US CLOUD Act. All potentially accessible by the US government.
France is investing an additional 50 million in its AMIAD program to integrate AI into weapons systems, communications, and cybersecurity, with the explicit goal of “reducing dependence on non-French, non-European AI technologies.” Germany and France held a digital sovereignty summit in November 2024 focused on AI, cloud, chips, and open source. The message is clear: Europe is seeking common ground on digital sovereignty.
Jörg Kleiner, at that summit, said it bluntly: “Technological sovereignty should be discussed more broadly at the next European Council. But in the future, we shouldn’t even ask. We should simply do it and use European providers first.” The problem is that many European technology providers do not have the scale of international competitors and are less visible.
Meanwhile, the EU AI Act enters full application in August 2026. Spain has already published detailed guides through AESIA to help providers and users of high-risk AI systems comply with the requirements. The penalties are real. The grace period is over.

The New Value Equation
So, what do smart companies do while Europe “catches up” and tech giants fight for chips and market share? They build local, private infrastructure under their control. And they do so understanding that in 2026, real value is measured with a new equation: ROI × Security × Efficiency.

ROI without security is not enough. That is a time bomb, compliance theater that leaves you exposed to breaches and fines. Security without ROI is not enough. Investors will not wait, and a project that does not generate value is not sustainable. And efficiency without governance is not enough. Shadow AI and uncontrolled models are pure risk.
Local and private AI solves all three problems simultaneously. It gives you total control over performance metrics, you do not depend on third-party promises, and your costs are predictable. You avoid AI supply chain attacks—those poisoned models that no one detects until it is too late. You have real compliance with the EU AI Act, not theater. You know exactly which models you use and where. You audit complete training pipelines. You are not exposed to the US CLOUD Act. And you do not need massive data centers or H100s that are in short supply and cost a fortune.
You learn from Chinese efficiency: models optimized for local implementation, smart architectures that do not require brute power. You adopt European privacy: your data never leaves your infrastructure, you comply with regulation by design. And you build robust governance from day one: complete inventory, granular access controls, model integrity validation.
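Model integrity validation, one item in that governance checklist, can start as simply as pinning a cryptographic digest for every approved model artifact and verifying it before deployment. A minimal sketch, where the artifact bytes and workflow are hypothetical:

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of a model artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_model(data: bytes, pinned_digest: str) -> bool:
    """Check an artifact against the digest pinned at approval time.

    Any post-approval tampering (e.g. a poisoned re-upload of the
    same model file) changes the digest and fails verification.
    """
    return hmac.compare_digest(sha256_digest(data), pinned_digest)

# Hypothetical workflow: pin the digest when the model is approved...
approved_model = b"model-weights-v1"
pinned = sha256_digest(approved_model)

# ...then verify what actually sits in production before serving it.
print(verify_model(approved_model, pinned))                 # unchanged artifact
print(verify_model(b"model-weights-v1-poisoned", pinned))   # tampered artifact
```

A supply-chain attack of the Ultralytics kind has to change the artifact's bytes, and a pinned digest turns that change into a hard deployment failure instead of a silent compromise.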
You do not need to be Airbus to act. While the giants seek solutions on a continental scale and debate which European provider might be up to the task, smart companies are building their competitive advantage today. Locally. Privately. Under their control.
While your competitors depend on AWS, Azure, or GCP and wait for their cloud providers to deliver ROI, you measure and optimize directly. While others discover breaches in annual audits, you have continuous governance. While others negotiate with NVIDIA for GPUs that will not arrive on time, you use efficient models that work with the hardware you already have.
2026: The Year the Wheat Is Separated from the Chaff
2026 is not the year of more investment in AI. It is the year of smart investment in AI. The companies that survive will not be those that spend the most, but those that best control, measure, and protect their AI.
Maor Friedman, of F2 Fund, calls it the “year of sobriety”: “The gap between high expectations and real performance will become clearer. Investors will return to real business metrics: usage, efficiency, growth, and the ability to build a healthy company over time. Fewer stories and more numbers.”
Eric Sheridan of Goldman Sachs is even more direct: “If dollars continue to increase, we will have difficulty answering the ROI question. In every computing cycle I have analyzed, that has eventually led to a trough of disillusionment. I would be surprised if we avoided it this time.”
The window of opportunity is now. While the giants debate frameworks, look for capable European providers, and negotiate with NVIDIA for GPUs that will not arrive, smart companies are building their competitive advantage. Not with promises. With real infrastructure, under their control, generating measurable value.
Because in 2026, investors will not accept any more excuses. Regulators will not accept compliance theater. And competitors, especially the Chinese, will not wait.
The question is not whether your company will adopt local and private AI. It is whether you will do it before or after your competition. And that difference in timing could be what defines who leads your sector in the next five years.