Sam Altman and Doomsday Capitalism
Author | Sleepy.txt
In 2016, The New Yorker published a long profile of Sam Altman titled "Sam Altman's Manifest Destiny." At the time, he was 31 years old and already the president of Y Combinator, Silicon Valley's most influential startup accelerator.
The article noted that Altman enjoyed racing, owned five sports cars, and liked to rent planes. He told the reporter that he kept two packed bags, one of them a go-bag ready for escape at any moment.
He had also stockpiled firearms, gold, potassium iodide (against nuclear radiation), antibiotics, batteries, water, and gas masks from the Israeli Defense Forces, and he owned a stretch of land in Big Sur, on California's famous coastline, that he could fly to at any time.
Ten years later, Altman has become the person working hardest both to conjure doomsday and to sell the ark. He warned the world that AI could destroy humanity while personally accelerating the process; he claimed not to be motivated by money while building a $2 billion personal investment empire; he called for regulation while pushing out anyone who tried to hit the brakes.
Rather than calling him a madman or a clumsy fraud, it is more accurate to say that he is simply the most standard, most successful product of Silicon Valley's giant machine. His "destiny" is to forge humanity's collective anxiety into his scepter and crown.
Doomsday Is Good Business
Altman's business model can be summed up in one sentence: packaging a business as a crusade involving the survival of humanity.
He started practicing this strategy during his YC days. He transformed YC from a small workshop giving tens of thousands of dollars to early-stage startups into a vast entrepreneurial empire. He set up a YC Research lab, funding projects that didn't make money but sounded grandiose. He told reporters that YC's goal was to fund "all important areas."
With OpenAI, he took this strategy to the extreme. He sold a packaged worldview: AI Doomsday + Redemption Plan.
He was better than anyone at depicting the "existential risk" posed by AI. He co-signed a one-sentence statement with hundreds of scientists declaring that the risk of extinction from AI should be treated on a par with pandemics and nuclear war. Around his Senate testimony, he said people "should be happy" that OpenAI is "a little bit scared" of the technology, implying that the fear itself was a beneficial warning.
Each of these statements could make headlines and all of them provided free advertising for OpenAI. This carefully crafted fear is the most effective attention leverage. Which is more exciting to capital and the media, a technology that "can improve efficiency" or one that "could destroy humanity"? The answer is self-evident.
As for the redemption half, he had a ready-made product: Worldcoin. Once fear had been implanted in the public consciousness, the sale of a solution naturally followed: a basketball-sized silver orb would scan irises around the world, supposedly so that everyone could receive money in the AI era. The story sounded good, but the practice of exchanging money for biometric data quickly raised alarms. Over a dozen countries, including Kenya, Spain, Brazil, India, and Colombia, halted or investigated Worldcoin over data privacy concerns.

But for Altman, this might not matter at all. What matters is that through this project, he successfully positioned himself as the "sole solver."
Packaging fear and hope for sale is the most efficient business model of this era.
Regulation Is My Weapon, Not My Shackles
How does someone who talks about doomsday every day do business? Altman's answer: turn regulation into a weapon.
In May 2023, he testified before the US Congress for the first time. Instead of complaining about regulation like other tech CEOs, he actively requested it: "Please regulate us." He proposed an AI licensing system under which only licensed companies could develop large models. The outward image was that of a deeply responsible industry leader. But at the time, OpenAI was far ahead technologically, and the main effect of a strict, high-threshold licensing regime would have been to lock every potential competitor out.
As time passed, however, especially as competitors like Google and Anthropic caught up and the open-source community gained strength, Altman's tone on regulation shifted subtly. At event after event he began stressing that overly stringent regulation, especially mandatory review before AI companies could release products, would stifle innovation and be "disastrous."
Regulation at this point was no longer a moat but a stumbling block.
When he held an absolute advantage, he called for regulation to lock it in; when the advantage faded, he called for freedom to break through. He even tried to extend his reach upstream in the industry chain, floating a chip plan worth up to $7 trillion and courting capital such as the UAE's sovereign wealth funds, with the aim of reshaping the global semiconductor landscape. This goes far beyond the remit of a CEO; it is the posture of an ambitious man intent on redrawing the global order.

Behind all this is OpenAI's rapid transformation from a nonprofit into a corporate behemoth. When it was founded in 2015, its mission was to ensure that AGI benefits all of humanity safely. In 2019, it set up a "capped-profit" subsidiary. By early 2024, outsiders noticed that the word "safely" had been quietly dropped from OpenAI's stated mission. The nonprofit formally remained in control, but the pace of commercialization clearly accelerated. Revenue grew explosively, from tens of millions of dollars in 2022 to over ten billion dollars annually in 2024, and the valuation soared from $29 billion toward the trillion-dollar level.
When someone starts gazing at the stars and discussing the fate of humanity, it is best to first check where their wallet has landed.
The Persona: A Charismatic Leader's Immunity
On November 17, 2023, Altman was fired by a board he had personally helped select, on the grounds that he had not been "consistently candid in his communications with the board."
What happened over the next five days was less a boardroom struggle than a referendum of faith. President Greg Brockman resigned in protest; 95% of employees, more than 700 people, signed a letter demanding the board resign or they would defect en masse to Microsoft; Satya Nadella, CEO of largest investor Microsoft, publicly sided with Altman, saying he was welcome there at any time. In the end, Altman returned in triumph, was reinstated, and purged nearly every board member who had opposed him.
How could a CEO officially deemed "not consistently candid" by his own board return unscathed, with even greater power?
Ousted board member Helen Toner later revealed the details. Altman had concealed from the board his actual control of the OpenAI Startup Fund; he had lied repeatedly about the company's critical safety processes; the board had even learned of ChatGPT's release from Twitter. Any one of these charges would be grounds enough to fire a CEO many times over.
But Altman was untouchable, because he wasn't just any CEO. He was a "charismatic leader."
This is a concept the sociologist Max Weber proposed a century ago: a form of authority that derives not from office and not from law, but from the leader's own "extraordinary personal charisma." Followers believe in him not because of what he did right, but because of who he is. The belief is irrational. When the leader errs or is challenged, the followers' first reaction is not to question the leader but to attack the challenger.
That is exactly how OpenAI's employees behaved. They didn't believe in the board's procedural justice; they believed only in the "destiny" Altman represented, convinced that the board was "obstructing human progress."
After Altman returned, OpenAI's safety team was quickly dismantled. Chief Scientist Ilya Sutskever, who had led the move to oust Altman, left soon after. In May 2024, Jan Leike, head of the safety team, resigned, writing on Twitter that at OpenAI, "safety culture and processes have taken a backseat to shiny products."

In front of a "charismatic leader," facts are not important, processes are not important, and safety is not important. The only thing that is important is faith.
Prophets on the Assembly Line
Sam Altman is just the latest and most successful model to roll off Silicon Valley's "prophet" assembly line.
This line has produced many faces we know well.
Take Musk. In 2014, he went around saying that with AI "we are summoning the demon." Yet his Tesla is, by his own telling, the world's largest robotics company and one of the most complex AI applications in existence. After falling out with Altman, he founded xAI in 2023 and openly declared war; within a year it was valued at over $20 billion. He warned of the demon's arrival while personally raising another demon. This both-sides act is straight out of Altman's playbook.
Then there's Zuckerberg. A few years ago he bet the company on the metaverse, burned nearly $90 billion, and found himself in a pit. So he pivoted at once, shifting the company's core narrative from the metaverse to AGI. In 2025, he announced a "Superintelligence Lab" and personally went recruiting. It's the same grand vision about humanity's future, the same capital story demanding astronomical investment, the same messianic posture.

And then there's Peter Thiel. As Altman's mentor, he is more like the chief designer of the assembly line. While investing in companies promoting the "technological singularity" and "immortality," he bought land in New Zealand, built a doomsday retreat, and obtained citizenship after spending only 12 days in the country. His company Palantir is one of the world's largest data-surveillance firms, its clients concentrated in governments and militaries. He prepares for civilization's collapse with one hand and builds the sharpest surveillance tools for those in power with the other. In early 2026, during a military operation against Iran, Palantir's AI platform served as the brain: it fused masses of data from spy satellites, communications intercepts, and drones with Claude-model analysis, turning chaotic information into real-time actionable intelligence that ultimately pinpointed the target and completed the decapitation strike.
Each of them plays the dual role of warning that doomsday is coming and ushering it in. This is not a split personality; it is a business model the capital markets have validated as the most efficient one. They capture attention, capital, and power by manufacturing and selling structural anxiety. They are at once products of this system and its architects, the "evil behind the grand narrative."
Silicon Valley is no longer just a place that outputs technology; it is a factory that manufactures the "modern myth."
Why does this trick work every time?
Every few years, Silicon Valley gives birth to a new prophet who sweeps capital, media, and public attention with a grand narrative of doomsday and redemption. This trick is repeated over and over again, yet it continues to be effective. Each part of it takes precise aim at specific flaws in human cognition.
Step One: Manage the rhythm of fear, not just create fear.
The potential risks of AI are real enough, but these individuals actively choose to present them in the most dramatic way possible, and they control the release of fear with precision.
When to make the public fearful, when to provide hope, when to raise the alarm again—all of this is designed. Fear is the fuel, but the timing and manner of ignition is the true technology.
Step Two: Turn the incomprehensibility of technology into an authoritative source.
AI is a black box that is entirely opaque to the vast majority of people. When faced with something so complex that it cannot be fully understood, people instinctively defer the explanation to the "ones who understand it the most." They deeply understand this and have turned it into a structural advantage. The more they describe AI as mysterious, dangerous, beyond common understanding, the more irreplaceable they become.
The frightening aspect of this logic is that it is self-reinforcing. Any external doubts are automatically dismissed because the questioners "do not understand enough." Regulators don't understand the technology, so their judgment is not trustworthy; critics in academia have never worked on models at the frontlines, so their concerns are purely theoretical. Ultimately, only they themselves are qualified to judge themselves.
Step Three: Use "meaning" to replace "interest" and make followers voluntarily relinquish criticism.
This is the hardest layer of the system to penetrate and the most enduring source of its power. What they sell is never just a job or a product; it is a story of cosmic significance: you are deciding the fate of humanity. Once this narrative is accepted, followers voluntarily give up independent judgment, because against a mission concerning the "survival of humanity," questioning the leaders makes one look insignificant, even like an obstacle to history. It makes people willingly surrender their critical faculties and experience that surrender as a noble choice.
Put these three steps together, and you'll understand why this system is so difficult to disrupt. It doesn't rely on lies; it relies on a precise understanding of the human cognitive structure. It first creates a fear you can't ignore, then monopolizes the explanation of that fear, and finally transforms you into its most faithful propagator through "meaning."
And within this system, Altman is the model that has run most smoothly to date.
Whose Destiny?
Altman has always said that he owns no OpenAI shares and takes only a symbolic salary, which was once the cornerstone of his "working for love" narrative.
But in 2024, Bloomberg did the math for him, estimating his personal net worth at around $2 billion, built mainly on a decade of venture investments. His early stake in the payments company Stripe reportedly returned hundreds of millions of dollars; his investment in Reddit paid off handsomely at its IPO. He also backed the fusion startup Helion, declaring that AI's future depends on an energy breakthrough and betting heavily on fusion, after which OpenAI went to Helion for a large electricity purchase deal. He claims to have recused himself from the negotiations, but the chain of conflicting interests is plain for anyone to see.
He indeed holds no direct stake in OpenAI, but he has built a vast investment empire around it, centered on himself. Every grandiose sermon he delivers about the future of humanity injects value into that empire.
Now look back at his doomsday go-bag stuffed with firearms, gold, and antibiotics, and at that land in Big Sur he can fly to at any time. Does it read differently now?
He hides none of this. The escape bag is real, the bunker is real, the fascination with doomsday is real. But he is also the one working hardest to bring doomsday about. The two are not contradictory, because in his logic doomsday doesn't need to be stopped; it only needs to be anticipated. He is obsessed with playing the man who sees the future clearly and prepares for it.
Whether packing a physical escape bag or building a financial empire around OpenAI, it is essentially the same move: in an uncertain future that he himself is propelling, secure the most certain winning position for yourself.
In February 2026, right after voicing support for a "no AI for warfare" red line, he signed a contract with the Pentagon. This isn't hypocrisy; it is an inherent requirement of his business model. The ethical posture is part of the product, and the defense contract is a source of profit. He must play the merciful savior and the ruthless doomsday prophet at once, because only with both roles can the story continue and his "destiny" be revealed.
The real danger is not AI itself, but the people who believe they have the right to define humanity's destiny.