AI Will Eat Itself; It’s Like Mad Cow Disease – Tech Insiders Not Concerned by Accelerating Development
G. CALDER
As artificial intelligence accelerates into every corner of modern life, the dominant narrative remains one of inevitability. With smarter models and total automation, many feel catapulted toward a future reshaped by machines that outperform humans at almost everything. But increasingly, tech insiders are vocalising alternative views: that the trajectory is neither healthy, sustainable, nor inevitable. Dan Houser – co-founder of Rockstar Games – recently compared the current boom to Mad Cow disease, saying it’s a system that’s feeding on itself and will eventually become fundamentally unstable. It’s a provocative analogy that opens a deeper question, one that may bring relief to some – what if the flaws emerging in AI are not bugs to be fixed, but structural limitations that will prevent the technology from ever truly “taking over the world”?

The Mad Cow Analogy: Is AI Eating Itself?
Houser’s comparison hinges on a specific historical lesson. Mad Cow disease spread when cattle were fed processed remains of other cattle, creating a closed loop of degraded biological material that eventually produced catastrophic neurological failure. His argument is that artificial intelligence is – rather than becoming invincible and taking over the world – actually drifting into a similar pattern. Models are increasingly being trained on synthetic outputs that were previously generated by other AI systems – not on human-created knowledge.
Essentially, as automated models continue growing, more of what we see on the internet is generated by those same systems. Thus, as new and existing models train further, they are increasingly digesting their own outputs. Researchers have already documented a phenomenon known as model collapse, where generative systems trained repeatedly on AI-created data become less accurate, less diverse, and more detached from reality over time. Instead of their intelligence compounding, the systems end up hollowing themselves out, reinforcing their original errors and flattening nuance.
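The feedback loop described above can be sketched in a few lines of code. The toy example below is purely illustrative and is not drawn from the research on model collapse: it stands in for a generative model with a simple one-dimensional Gaussian, fits each new generation only to samples produced by the previous one, and prints how the spread of the data tends to drift and shrink once no fresh human-generated input enters the loop.

    # Toy sketch of "model collapse": each generation is fitted only to
    # samples produced by the previous generation, with no fresh human data.
    # The "model" here is just a one-dimensional Gaussian (a mean and a spread),
    # which is enough to show the characteristic loss of diversity.
    import random
    import statistics

    random.seed(42)

    # Generation 0: "human" data with a healthy spread of values.
    data = [random.gauss(0.0, 1.0) for _ in range(50)]

    for generation in range(30):
        mu = statistics.mean(data)      # fit the current model to its data
        sigma = statistics.stdev(data)
        print(f"generation {generation:2d}: mean={mu:+.3f}, stdev={sigma:.3f}")

        # The next generation trains only on this model's own outputs,
        # the closed loop the article compares to feeding cattle to cattle.
        data = [random.gauss(mu, sigma) for _ in range(50)]

In this toy setup, mixing even a modest share of fresh, independent data back in at each step is what keeps the spread from decaying, which is essentially the article’s point about the value of continuously renewed human input.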
A Growing Problem Tech Leaders Don’t Talk About
Public-facing AI marketing focuses on scale: more data, more integration, more parameters. What’s not being talked about, however, is the growing scarcity of high-quality, human-generated training material. Much of the open internet has already been ingested by existing models, meaning what’s left is increasingly polluted by spam, automated noise, and other forms of AI content.
Large language models without access to continuously renewed human input – art, reasoning, writing, and genuine lived experience – are at serious risk of stagnation or regression. The irony is stark: the more automated content floods the web, the less reliable the web becomes as a training source.
Houser’s criticism cuts deeper than technical architecture. He argues that those pushing hardest for complete AI adoption are often insulated from the intellectual and cultural costs, prioritising efficiency over proper understanding. In his own words, these executives are “not fully-rounded humans”, and their influence narrows the range of perspectives inside decision-making circles.
What Video Games Teach Us About AI
Rockstar Games – which Houser co-founded – built its reputation on human-crafted complexity including satire, cultural texture, and general creativity. These are exactly the qualities that generative AI struggles to reproduce convincingly.
While models can generate dialogue, textures, and code snippets, they lack an internal sense of meaning, motivation, or consequence – qualities essential to storytelling and world-building – and game developers have long since encountered AI’s limits in practice. Those limits highlight a broader issue: AI can imitate form, but it doesn’t understand context. It can predict what should come next, but not why it should come next at all.
Others are Sounding the Same Alarm
Houser is just one of a growing number of concerned tech executives echoing similar sentiments. They often warn that AI systems are brittle, overhyped, and fundamentally misaligned with how intelligence really works.
Confident but false outputs – often called “hallucinations” – are a sign that these systems don’t actually know anything in a human sense. There are also concerns about skyrocketing energy costs, data bottlenecks, and diminishing returns as models scale. Rumours are circulating that brute-force scaling – simply trying to expand as rapidly as possible – is approaching economic and physical limits.
Reassuringly perhaps, the fear of runaway super-intelligence starts to look less like an imminent threat, and more like a distraction from the real risks: cultural homogenisation, misinformation, and institutional over-reliance on systems that can never work like human beings.
Are These AI Limitations a Good Thing?
This structural weakness may be precisely what prevents catastrophe. If AI systems degrade when isolated from human input, then they can never become self-sustaining forms of intelligence. They remain parasitic on human creativity and judgement, and that dependence undermines the popular science-fiction image of machines autonomously improving themselves beyond human control.
In that sense, AI may be more like an amplifier than a replacement. It can be a powerful tool, but fundamentally constrained. Perhaps it can accelerate patterns already present in society, but it cannot generate meaning, ethics, or purpose on its own. It may not be harmless, but it does start to appear limited.
The Real Risk Behind It All
The most serious danger in this case would not be AI itself, but rather how institutions respond to it. Corporations, media organisations, and even governments are increasingly treating AI outputs as authoritative, even when accuracy is uncertain. Over time, this degrades human expertise, accountability, and critical thinking.
If AI-generated material becomes the default reference point in law, journalism, education, or policy, for example, then errors stop being isolated mistakes and start being systemic failures. This is the true “mad cow” risk: not that machines rebel, but that humans outsource judgement until the feedback loop implodes.
Houser simply asks whether society is confusing automation with wisdom, and speed with progress.
Final Thought
If AI is truly entering its “mad cow” phase, then the fantasy and the fear of total machine dominance both look less convincing. That may disappoint futurists and alarmists, but it should reassure everyone else.
The future certainly needs human judgement, creativity, and understanding. If we take arguments like Houser’s seriously, the danger isn’t that AI will replace everybody – it doesn’t look like AI could ever take over the world. The danger is that we surrender that judgement voluntarily by relying on automated models too much in the meantime.
This article (AI Will Eat Itself; It’s Like Mad Cow Disease – Tech Insiders Not Concerned by Accelerating Development) was created and published by The Expose and is republished here under “Fair Use” with attribution to the author G. Calder





