Yvette Schmitter · Technology · 7 min read
What Just Happened?
2025 Week 17, Artificial Ignorance

In what some will view as a refreshingly candid moment, Anthropic’s co-founder recently admitted something the AI industry would prefer you didn’t think too hard about: nobody, not even CEO Dario Amodei, actually understands how modern AI works. Not the companies building it, not the researchers training it, not the governments trying to regulate it.
As Anthropic puts it, “modern generative AI systems are opaque in a way that fundamentally differs from traditional software.” When conventional software does something, a human programmed it specifically to do that thing. But when AI does something — good or catastrophically bad — nobody can tell you precisely why. And we’ve already seen ample proof that AI will make things up.
Now, let that sink in.
Artificial Ignorance: The Tech Industry’s $100 Billion Mystery Machine
The technology being hailed as humanity’s greatest innovation since electricity is, in the words of its creators, a mysterious black box of billions of numbers doing… something. The tech being vomited into every business workflow, government agency, and consumer product is fundamentally not understood by the very people building and selling it.
In fact, Anthropic compares modern AI development to “growing a plant” where they “set the high-level conditions” but “the exact structure which emerges is unpredictable and difficult to understand or explain.” Looking inside these systems reveals “vast matrices of billions of numbers” that are “somehow computing important cognitive tasks, but exactly how they do so isn’t obvious.”
Picture NASA saying: “We don’t really understand how our rockets work, but they usually reach orbit, so climb aboard!” Or pharmaceutical companies marketing drugs with: “We have no idea what’s happening at a molecular level, but not many people died, so YOLO.”
Finding the Corporate AI Success Stories (Spoiler: Good Luck)
We’ve all seen the never-ending press releases from big tech, consulting firms, and software vendors about AI revolutionizing business. But as research continues to highlight: where are all the concrete success stories? If AI were truly delivering the transformative value promised, wouldn’t vendors be publishing libraries of detailed case studies documenting measurable ROI? And no, productivity metrics around generating meeting notes and emails don’t count, because in what universe does an email generate revenue? None.
Instead, we get vague promises of “efficiency” and futuristic videos of chatbots solving problems that most businesses don’t actually have. Perhaps that’s because the real business model isn’t solving your problems — it’s getting you to pay a premium for features tacked onto platforms you already use.
There is a compelling trinity in AI’s business value creation: (1) providing capabilities you desperately need but don’t have, (2) delivering efficiency gains that dramatically outweigh implementation costs, and (3) revolutionizing a market. Yet for organizations already using AI-capable platforms, these value-creation paths get tricky, real fast. What exactly are these AI features doing that your business wasn’t already accomplishing? And if your organization was truly struggling to complete these tasks before, was AI really the missing piece — or was it something more fundamental?
The Opacity Problem
Anthropic admits that this fundamental lack of understanding creates serious problems. For instance, companies can’t use AI in “high-stakes financial or safety-critical settings” because they “can’t fully set the limits on their behavior.” In some cases, this opacity is “literally a legal blocker to their adoption” because decisions legally must be explainable.
More troublingly, Anthropic acknowledges that researchers “often worry about misaligned systems that could take harmful actions not intended by their creators” and that current “inability to understand models’ internal mechanisms means that we cannot meaningfully predict such behaviors.”
Let’s translate: “We’re selling you products we can’t explain, can’t control, and that might do unpredictable things we never intended.” Not exactly the most compelling sales pitch. It’s even more concerning that the companies creating these models are lobbying for, and taking, the position that they are not responsible for the outcomes. OpenAI’s terms of use, for example, disclaim responsibility for indirect, incidental, special, consequential, or exemplary damages, even if OpenAI has been advised of the possibility of such damages.
The Race to Interpretability
To its credit, Anthropic is advancing the field of “mechanistic interpretability” — attempts to understand what’s happening inside these AI systems. They’ve made progress finding “features” and “circuits” that reveal some of the model’s thinking processes. But they’ve identified just 30 million features out of a suspected billion-plus in even small models.
The timeline? Anthropic believes a mature version of interpretability — a true “MRI for AI” — could be developed within 5-10 years. Unfortunately, they’re also predicting AI systems equivalent to a “country of geniuses in a datacenter” as soon as 2026 or 2027.
In other words, the interpretability technology needed to understand these systems will arrive after we’ve already deployed extraordinarily powerful AI throughout society. Talk about closing the barn door after the superintelligent horses have bolted.
Shadow Processes Surpass Fancy Features
Meanwhile, back on earth, your CEO is signing checks for AI features in enterprise software, blissfully unaware that employees abandoned the official platform long ago. Presumably your business is already doing the work that’s required, and as a CEO you’re probably unaware that everyone hates your Salesforce implementation so badly that they keep everything in spreadsheet trackers anyway. You can’t defeat shadow processes by writing a check.
This cuts to the heart of the AI hype problem. Most organizations don’t suffer from a lack of fancy tools — they suffer from poor implementation of the tools they already have, unclear processes, misaligned incentives, and communication failures. These are fundamentally human problems that new technology alone can’t fix. And if you think technology will fix them, then not only do you not understand how technology works — you don’t understand your business.
Selling Obscurity as Innovation
The AI industry is peddling increasingly powerful systems that, it claims, “will be absolutely central to the economy, technology, and national security” while simultaneously admitting that our “total ignorance of how they work” is “basically unacceptable.” By its own account, we’re trapped in a “race between interpretability and model intelligence” where interpretability might lose.
This candid acknowledgment should set off alarm bells for every human being on the planet and give every responsible business leader pause before signing the next AI contract. The proposition is absurd when stated plainly (no fluff, no marketing spin, no sales pitch): “We’re selling something incredibly powerful that we don’t understand and might not be able to control, but you should definitely integrate it into your critical business operations right away.”
Conclusion: Asking the Real Questions
So, before your organization jumps on the AI bandwagon, perhaps the most important question isn’t “Which AI product should we buy?” but rather:
- What specific business problems are we trying to solve?
- Are these problems actually technology problems, or are they organizational, process, or people problems disguised as technology problems?
- If an AI vendor can’t explain precisely how their technology works, what guarantees can they really provide about performance, safety, and results?
The trillion-dollar AI industry thrives in the gap between dazzling demos and non-existent measurable results. It sells digital alchemy – transformation without explanation – and counts on your FOMO to override your due diligence. We’re also witnessing a remarkable inversion of business logic: companies scrambling to implement solutions to problems they haven’t clearly identified, sold by vendors who candidly admit they don’t understand their own products. In what other industry would this approach survive scrutiny? The hard truth is that AI’s most impressive feat may not be its technical capabilities, but its ability to suspend normal business judgment in otherwise rational decision-makers.
In the end, the most valuable business question might not be “How can we use AI?” but rather “Who’s actually asking for this, and why?”