Outside the Box: Moving Beyond Sam Altman’s “Elite-Friendly Dream,” Part 3
In our previous conversations, we’ve analyzed the link between OpenAI CEO Sam Altman’s vision of humanity’s future and the idea that our civilization has manufactured a form of hyperreality that may actually invert reality in an effort to convince us not to “believe our lying eyes.” ChatGPT made the following suggestion: “If you like, I can now loop this historical mapping back to your earlier point: that AI isn’t the danger per se — it’s the human capacity to engineer and inhabit hyperreality that becomes amplified when AI is in the hands of people like Altman. That’s where the fear of ‘out-of-control AI’ has a real basis, but it’s less about AI’s autonomy and more about our surrender to curated visions.”
Believing that as a conversation develops it’s always a good idea to connect the threads each of the participants contributes, I gave my assent in the following prompt: “I’m intrigued by the connection you see between the idea you expressed as a ‘surrender to curated visions’ and the conditions that make an out-of-control AI likely. Everyone talks about regulation, but before trying to imagine rules, can we look at the causes? There are certain conditions – economic, cultural, psychological – that contribute to a loss of control. If we could examine and understand those conditions, we might seek to effect changes – for example in our institutional frameworks and certain cultural assumptions – that would make draconian regulation unnecessary.”
“Yes — that’s exactly the right way to frame it. If we focus only on rules, we treat AI risk as a technical governance problem. But if we ask what conditions make losing control likely, we see that the problem is rooted in deeper cultural, economic, and psychological dynamics.”
The Conditions of Control
Economic conditions create the material incentives for hyperreality and reckless AI deployment. Cultural conditions shape what people believe about technology and thus what they will tolerate or even demand. Psychological conditions govern how humans process novelty and authority. Why do these conditions make “losing control” likely? If we wanted to make draconian regulation unnecessary, we would focus on the following leverage points:
1. Concentration of capital & decision-making.
2. Speculative markets & hype cycles.
3. Winner-take-all dynamics.
4. Optimism bias & techno-salvationism.
5. The “visionary” status attained by individuals thanks principally to their talent for accumulating wealth.
6. Narrative capture through the hyperreality promoted by a complicit political and media establishment.
7. Belief in the frontier mythos, which champions anything seen as progress and minimizes risk.
8. The primacy given by the “attention economy” to bold (and potentially irresponsible) ideas to the detriment of “boring” truths.
I invited ChatGPT to take on a triad of civilizational “body parts”: the economic, the cultural and the psychological. I could have added the philosophical (a civilization’s “worldview”) and the spiritual. Both are connected to culture and psychology.
One of our civilization’s most obvious failings is the reflex, deemed scientific, of separating these categories rather than seeing them as interacting. It is in the space, or cultural “plasma,” of their interaction that meaning is produced, enriched and disseminated (rather than, say, in a data center!).
The diagnosis offered by OpenAI’s chatbot is revealing. Our economic principles encourage monopolistic capitalism; nowhere is this more obvious than in the tech sector, which systematically “rewards perception over reality.” This perversion of social function combines with our political class’s commitment to hyperreality.
That combination replaces a healthy society’s ability to build from the ground up a holistic relationship with material, social, spiritual and cultural reality — in all its non-linear, interactive complexity — with economically motivated frontier and techno-salvationist myths that ultimately tend towards relinquishing control to technology and the elite that claims to administer it.
The Paradox of AI
If ChatGPT’s analysis is correct, does this moment of lucidity suggest that AI may be capable of guiding humanity (including Altman) towards defining and deploying the means and methods that might allow us to rectify a clearly mistaken trajectory? Or is this idea merely a postmodern extension of hyperreality?
We will continue this conversation next week. Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI.