Dystopia in AI – or how Douglas Rushkoff ran over me like a bus


I had never heard of Douglas Rushkoff before – and honestly, I did not expect such a keynote. It felt like being run over by a bus. The perspective he offered at DLD on AI and its dangers, combined with the historical context, was entirely new to me. I will definitely read his books, watch more of his talks, and listen to his podcast. The boldest move, from my perspective, was turning a panel into a captivating monologue – Pericles would have been proud. The video of his speech can be found here.

Rushkoff's Core Thesis

Modern technologies – including AI – are often deployed by existing power structures in ways that disempower rather than empower people. The true potential of AI is not in quickly “delivering answers,” but in iterative questioning, collaborative thinking, and human metabolization of information.

Key Points

  • Projection of Tech Elites’ Fear: The idea that AI may treat humans as poorly as tech leaders treat society reflects existing dynamics of control and exploitation.
  • Historical Parallels:
    • Industrial Age: Assembly lines and chartered monopolies served not efficiency for all, but the de‑skilling of workers and the concentration of power in the hands of owners.
    • Dumbwaiter (food elevator): Designed not to ease labor, but to hide it – making human work invisible to elites.
  • What AI/LLMs really are: The first native app of the internet, turning the network itself into content. Their strength lies in connecting and re‑contextualizing information, not reflecting “reality.”
  • Wrong Paradigm (Industrial Thinking): Using AI for final answers or products creates feedback loops (e.g., blindly forwarding AI‑written business plans).
  • Right Paradigm (Generative Thinking): AI as a partner in process – like wind chimes you adjust iteratively. Humans metabolize and compost data, giving it meaning together.
  • Hidden Costs of AI: Rare earth mining, water consumption, and human labor in data labeling – new invisible forms of work.
  • Alternative AI Paths: Beyond massive data models – e.g., small data, fractal approaches, and generative systems that go beyond statistical averages.

Reflections & Takeaways

  1. Ask the right question: Not “What are humans for?” but “What is this technology for?” Humans are not instruments.
  2. Adopt iterative workflows: Use AI for question iteration, scenarios, perspective shifts, and dialogue, not for premature final answers.
  3. Expose hidden costs: Make ecological, material, and social impacts visible.

Bottom Line

Rushkoff’s view: AI does not automatically make us more or less human. We decide – through cultural framing, ethical choices, and collaborative meaning‑making. Used under industrial logic, AI pushes people down the value chain. Practiced as generative, relational work, it can make us more human.

My view: AI is not just another technology. It will reshape society across all areas of life, social classes, and work environments. It is essential to remain aware of our own interpretive authority in the “dialogue with the machine” and to follow Rushkoff’s mantra: technology should serve people, not the other way around. Despite justified criticism of regulation, the EU AI Act is certainly a step in the right direction.

Douglas Rushkoff – Short Bio

Douglas Rushkoff is a media theorist, author, and professor known for his critical perspective on digital culture, economics, and technology’s impact on society. He has been named one of the “world’s ten most influential intellectuals” by MIT.

He has written influential books such as Team Human and Survival of the Richest, and hosts the Team Human podcast. His work emphasizes how technology should serve humanity rather than exploit it.
