Here’s What Happened Last Week
Last week, South Korea’s AI Basic Act came into effect, joining the EU AI Act as a full-scale AI regulatory regime.
And the law is very clear. For high-impact AI systems, organizations must provide a meaningful explanation of:
- why a decision was made
- which criteria and principles were used
- and what data the system was trained on
It also requires a mechanism for human intervention and supervision.
This is a clear signal. Even governments have begun treating AI as infrastructure, not experimentation.
Meanwhile, in most enterprises today, AI systems still operate as black boxes, with explainability bolted on afterward as narrative rather than built in as a capability.
Teams that design for explainability early are the ones that will adapt to this new era of AI.
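What "explainability as a capability" can look like in practice: capture the rationale, criteria, and model provenance at the moment a decision is made, not after the fact. The sketch below is a minimal, hypothetical illustration (the field names, model ID, and loan scenario are invented for this example, not drawn from any specific regulation or product):

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical decision record capturing, at inference time, the three things
# an explainability mandate typically asks for: why, by which criteria, and
# on what data the model was trained -- plus a hook for human review.
@dataclass
class DecisionRecord:
    decision: str                        # the outcome
    rationale: str                       # why the decision was made
    criteria: list                       # which criteria and principles applied
    model_id: str                        # which model produced the decision
    training_data_summary: str           # what data the model was trained on
    human_review_required: bool = False  # mechanism for human intervention
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize for an audit log or a regulator-facing export.
        return json.dumps(asdict(self), indent=2)

# Illustrative usage with invented values:
record = DecisionRecord(
    decision="loan_denied",
    rationale="Debt-to-income ratio exceeded the configured threshold.",
    criteria=["debt_to_income < 0.4", "no defaults in last 24 months"],
    model_id="credit-scorer-v3",
    training_data_summary="Anonymized loan applications, 2018-2023",
    human_review_required=True,
)
print(record.to_json())
```

The point is architectural: when the record is emitted alongside every high-impact decision, explanation becomes a queryable artifact rather than a story reconstructed later.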
In the last six months, we have seen a shift that could upend most enterprise AI strategies:
Open-weight models coming out of China, like Kimi K2.5, DeepSeek, and Qwen, are now competing head-to-head with OpenAI and Anthropic on reasoning quality, cost-performance, and deployability.
The gap between closed-source and open-source models has collapsed to roughly six months.

For years, everyone assumed that if you wanted reliability and quality, you had to lock into a closed-source provider. That assumption is now wrong.
For enterprises, this tells us two things:
1// Speed now matters more than certainty.
AI is no longer evolving in predictable, multi-year cycles.
One model release, one benchmark shift, or one geopolitical move can reset the competitive landscape almost overnight. What looked like a safe, long-term decision six months ago can become a constraint today.
In this phase of AI, shipping late is often the same as being irrelevant; speed is everything.
2// Betting on a single model is no longer a strategy.
Reliability, performance, and trust no longer require locking into a single closed-source provider. Model quality is converging rapidly, and new open-weight and regional contenders are emerging faster than procurement cycles can absorb. The last four years make the pattern clear:

2022 → OpenAI's ChatGPT brings conversational AI to the mainstream
2023 → Anthropic's Claude 2 pushes longer context handling and safety
2024 → Meta's Llama 3 boosts open-weight performance
Jan 2025 → DeepSeek R1 rivals top models at a fraction of the cost
Nov 2025 → Google's Gemini 3 passes ChatGPT on benchmarks
Feb 2026 → Anthropic's Claude Opus 4.6, released just days ago, improves coding, long-running tasks, and professional outputs
When capability differences are measured in months, not years, long-term advantage doesn't come from picking the best model; it comes from designing systems that can shift to new models without disruption. And above all, the AI system must be able to explain itself; that is now mandatory.
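One common way to design for model-swapping is to route every completion through a single interface, so the underlying model becomes a configuration choice rather than an architectural commitment. The sketch below is illustrative only: the registry, provider names, and stub responses are invented, and the stubs stand in for real API calls.

```python
from typing import Callable, Dict

# Hypothetical model registry: all callers go through complete(), so swapping
# providers is a one-line config change, not a rewrite.
ModelFn = Callable[[str], str]
MODEL_REGISTRY: Dict[str, ModelFn] = {}

def register_model(name: str):
    """Decorator that adds a provider function to the registry."""
    def wrap(fn: ModelFn) -> ModelFn:
        MODEL_REGISTRY[name] = fn
        return fn
    return wrap

@register_model("closed-provider")
def closed_provider(prompt: str) -> str:
    # Stub: in production this would call a hosted API.
    return f"[closed-provider] {prompt}"

@register_model("open-weights")
def open_weights(prompt: str) -> str:
    # Stub: in production this would call a self-hosted open-weight model.
    return f"[open-weights] {prompt}"

def complete(prompt: str, model: str = "closed-provider") -> str:
    """Callers never change when the model behind `model` changes."""
    return MODEL_REGISTRY[model](prompt)

print(complete("Summarize Q3 risks", model="open-weights"))
```

With this shape, moving from a closed provider to an open-weight contender is a registry entry and a config value, which is exactly the agility the converging model landscape rewards.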
If you’re a founder, CTO, or VP of Product who has tried to deploy AI in the enterprise and agrees with this thesis, just reply to this email and let’s talk.
Signing off,

Ramu Sunkara
Co-founder & CEO, Zenera AI
