Discussion about this post

Himanshu Chhaunker

Dear Jaspreet, thank you for bringing originality back into researched and thoughtful writing on AI. The way you weave together the two MADs, the Habsburg analogy, and the Bhasmasura story with model autophagy, gradient descent, and hyperparameter tuning is outstanding; it makes a very technical risk both accessible and hard to ignore. I thoroughly enjoyed the piece and will go back to read what I have missed so far on Dharma of AI.

The insatiable hunger for data was familiar; the idea that it could itself lead to a MAD-like collapse was genuinely eye-opening. I had assumed that data would more or less "grow with the need", or that we would always find new seams to mine, not that the system could start poisoning its own well. I would be very keen to hear your take on how developments in Responsible AI and AI governance intersect with this risk, and how far current guardrails and regulatory thinking really go in preventing the kind of feedback loop you describe.
