We were doing just fine.

November 18, 2025
8 min read
ChatGPT was released not three years ago. Think about that.


Sam Altman had been on quite a ride. He wasn’t a year into his tenure as OpenAI CEO when the pandemic hit, and the company still somehow managed to release the first public version of ChatGPT on November 30, 2022.

Remember 2022? Wow, were we tired. The COVID pandemic was dragging on, with hundreds of millions of confirmed cases worldwide. All the supporting cash had run out. By the time the first ChatGPT was released, the overwhelming majority of people on the planet had had COVID at least once.

As I write this, we’re coming up on a mere three years since that first ChatGPT release. Can you imagine? Three years from basically no one caring to people not being able to shut up about it. In three years, “developers” became “researchers.” In three years, “software companies” became “AI operators.” Three years from relative social obscurity to “we’re too big to fail and the government will be the insurer of last resort.”

It’s bonkers to think we’ve gone from the starting line to having our futures entirely dictated by AI companies. Some will say that’s hyperbole, and to that I say:

  • The current “Magnificent Seven” were named in mid-2023 by Bank of America.
  • They control roughly 37% of the S&P 500’s market capitalization.
  • October 2025 brought around 50,000 layoffs directly attributable to AI.
  • AI is now responsible not only for significant layoffs, but also for altering hiring plans going forward.
  • Things called “AI” are now shoehorned into every element of human life. My toothbrush has “AI” in its app.

Even if we set aside that last point as the most ridiculous application of technology yet (neither AI nor an app is necessary for the brushing of teeth), we’re being told that all of this is the future, and we’d better get on board. Humanity is doomed, the planet is doomed, and the only answer is datacenters in space and a one-way ticket to Mars.

This is required so we can save humanity and win the race. Race to where, no one knows, but it’s a grand future to be sure. Disease? Gone. Pestilence? Handled. Aging? Forget it. It’s a post-scarcity world, baby, but only if (and especially if) all the robots will listen to Elon.

In a recent interview, responding to Anderson Cooper’s question, “Who elected you and Sam Altman?”, Dario Amodei answered, “No one, honestly no one, and this is why I have always advocated for responsible and thoughtful regulation of the technology.” He of course doesn’t define responsible or thoughtful in this context.

Let that one sink in. He’s running Anthropic, and in this one statement he disclaims responsibility for the lack of any regulation or control because that’s someone else’s job. Absent regulation, Dario will happily just keep on keepin’ on. It’s not his fault if bad shit happens; he’s been asking for regs all along. This gets under my skin because there’s an alternative: Dario could lead. He could recognize the need for an international, equitable understanding and use his position to help create it, but instead he does another interview where he hopes-and-dreams his way through impending societal upheaval.

I pick on Dario because this interview is a proximal example. Altman isn’t any better (already asking for a bail-out), and Musk is far worse (will only run a car company if he controls the coming robot army). These are very smart people hell-bent on a mission to de-humanize humanity. Long-term dreams? Sure, lots of stuff is possible. But in our lifetimes (or, practically, our children’s lifetimes)? Just a lot of headaches, and we still get cancer. No unified moonshot, not even a cogent conversation about the impacts on society, nothing… just a race, to somewhere.

Let’s take a look at another person in the space: investor Marc Andreessen. Marc is quite well-read and has expansive knowledge of Western history. To hear him tell the tale (on the Lex Fridman podcast), the only thing holding him back from achieving his dreams was the horrifying oppression of the Biden administration and DEI. On its face, this is absurd (he’s a billionaire and in no way put-upon), but it also clarifies his worldview. Anyone questioning his agenda is wrong, and the reasons his agenda is being questioned are oppressive. He sees a way forward, that is the way forward, and there is no other way. He talks about how people don’t hold real beliefs and just go along with whatever, then flip-flops his own politics because “the government can’t tell companies what to do.” Resistance is futile, but irony is evidently alive and well.

Two technology luminaries, both of whom firmly state that AI is a thing that is *going* to happen *to you*. You have little say in the matter, because it’s necessary for winning the race to somewhere, unless someone comes up with something responsible or thoughtful, but who knows when the rest of you are gonna get off your butts and say something we don’t actively hate. Jump in the boat!

I have been lucky in my career to have worked with and for a great many outstanding leaders. Only one real stinker in the bunch, and for a 25-year career that ain’t bad. But watching interviews with today’s technology kingpins makes me think they haven’t experienced this luxury. One of the downsides of success is that succeeding is self-reinforcing. You’ve succeeded and are rich, and it’s all been due to your indomitable moxie. Therefore, everything you do is good and noble, even if you behave like an alien in a human suit, bereft of any true capacity for empathy. But a history of success does not a good leader make, and right now in human history good leaders are precisely what the doctor ordered.

I’ll go so far as to say: we were doing just fine. There were many problems a mere three years ago, but the history of humanity is one of overcoming problems. In the three years since AI was foisted upon us, increasingly, from every direction, how many problems has it actually solved? It has cost far more than it has saved, resulted in a lot of layoffs, and subjected us to endless generated content on virtually every platform. I ask again: what problems has AI solved for humanity?

Before everyone loses it on me, let me make one thing clear: I don’t dispute the utility of current AI systems. I use them every day, as do most of my clients. What I dispute is this: thus far, has it been worth it, and given where we are (without enough electricity), why are we plowing ahead? Seriously, think about it. For the low everyday price of what remains of our privacy, all our data, our human dignity, and probably our jobs, we’re simply promised... something, sometime, somewhere, but it’ll be awesome. Sure, I’ll use Claude to generate document summaries when I get behind, and it’s really good at CSS, which I despise. But we all managed the biggest chunk of our lives without any of it.

I’ve written extensively about the requirement for engineers to act ethically, responsibly, and in the best interest of humankind. That’s not what’s happening now. From the unprecedented demand for natural resources to the intellectual strip-mining of every human creative work, it’s been, to quote my favorite movie, like riding a psychotic horse towards a burning stable.

The good news is that the technology itself is good, and we can turn this around. What we need to turn it around isn’t datacenters in space. We need leaders. Success is neither necessary nor sufficient as proof of leadership acumen. For AI to succeed and for us to retain our humanity, leaders must emerge and inject some sense into the conversation. Stop trying to manipulate us with technobabble and shiny digital objects, and instead lead the world in an actual conversation about what is needed and when, and how to do it without sacrificing humanity. Otherwise, the whole thing is just a thin veneer of potential over a shit-heap of negligence.

It’s a weird thing to say that AI needs “saving,” but at this point I’m pretty sure it does. At Fusion Collective we’re leading the charge to change this conversation, to help save the technology from misuse, and to ensure it’s all done with human dignity at its core.

The one thing all of today’s technology leaders should remember: we could have gone our whole lives without ever hearing of ChatGPT or LLMs, and we would have been just fine.
