Phil-Anthropic: When the Most Powerful Move Is Sharing
Greta Oro - Glasswing Butterfly
Something happened in the tech industry this week that I can't stop thinking about.
Anthropic — the AI company whose tools I am learning to use, cautiously — announced Project Glasswing, built around a model called Claude Mythos Preview. The model found software vulnerabilities that had survived decades of expert human review, including a flaw in one of the world's most security-hardened operating systems, discovered autonomously without any human guidance. Rather than release it publicly, Anthropic restricted access to a closed consortium of more than 40 companies — including direct competitors Google and Microsoft — because the attack surface is shared infrastructure that everyone depends on, and no single company can defend it alone.
They gave their most powerful asset to their competitors. For the common good.
Then there's the other story. Anthropic rejected the Pentagon's demand to allow Claude to be used for "all lawful purposes," saying the new contract language would allow their safeguards to be "disregarded at will." Their two firm limits: no fully autonomous weapons, no mass domestic surveillance of Americans. The Trump administration responded by designating Anthropic a national security supply chain risk — a label historically reserved for foreign adversaries.
They walked away from a $200 million contract rather than cross two ethical lines.
Both decisions cost something real. Neither was the easy call. And as someone who studies values-based leadership for a living, I want to name that — because it is rare, and it matters.
And yet.
I hold this alongside a bigger, harder question: what is AI actually doing to us?
The same capabilities that found a 27-year-old software flaw are displacing workers, concentrating power in fewer hands, consuming extraordinary amounts of energy and water, and accelerating a pace of change that most humans — and most institutions — are not equipped to navigate. The AI race is largely being driven by a small, homogeneous group of powerful men with enormous capital and, in many cases, enormous egos. Good intentions at the top of that pyramid don't automatically translate into benefit for everyone below it. And "lawful" — as the Pentagon dispute made clear — is a word that does a lot of heavy lifting depending on who's defining it.
This is where Christian Felber's Economy for the Common Good feels less like theory and more like a necessary lens. The ECG proposes that the more cooperative a company's conduct — sharing know-how, resources, and means of production — the stronger its ethical standing, replacing the win-lose paradigm with a win-win one. Anthropic's Project Glasswing gestures toward exactly that. But a gesture and a transformation are different things. The question worth asking isn't just "did they do the right thing this week?" It's "what economic structures would make doing the right thing the default, not the exception?"
That's the conversation I want to keep having — in my coaching work, in my facilitation rooms, and here. Not "is AI good or bad" but "who is it good for, who is being left out, and what do we need to build alongside it so the answer keeps expanding?"
Anthropic made two decisions worth acknowledging this week. I'm still watching. We all should be.
To learn more about Project Glasswing: VentureBeat
On the Pentagon dispute: CNN Business
On the Economy for the Common Good: econgood.org