Stupidity as a Systemic Outcome: Psychopolitics, Digital Architecture, and the Purge of Critical Voices in Tech

Engineered stupidity is not accidental; it is architected, built into the very platforms that dominate our attention economies.

Introduction: The Age of Compulsory Distraction

In Psychopolitics: Neoliberalism and New Technologies of Power, philosopher Byung-Chul Han argues that contemporary capitalism no longer controls populations through brute repression. Instead, it employs subtler, more insidious mechanisms: overstimulation, overinformation, and forced positivity. These forces combine to produce a society of exhausted, distracted, and politically passive individuals.

Han’s concept of “stupidity as a systemic outcome” does not refer to a lack of innate intelligence. Rather, it describes the active erosion of deep, critical thought under conditions of perpetual distraction. This engineered stupidity is not accidental; it is architected, built into the very platforms that dominate our attention economies. Within the U.S. IT and social media landscape, this architecture is further reinforced by a troubling pattern: the systematic exclusion of those who might challenge it.

Psychopolitics: How Digital Capitalism Manufactures Stupidity

Han distinguishes between the disciplinary societies of the past, where power demanded “Obey!”, and today’s achievement societies, where power whispers “Perform! Optimize! Engage!” In this new model, exploitation no longer comes primarily from an external oppressor. It emerges from internalized self-exploitation, a compulsive drive to be productive, visible, and constantly consuming.

This shift enables key mechanisms that manufacture digital stupidity. In the attention economy, for instance, social media platforms and apps are designed to maximize engagement, not understanding. Features like infinite scroll, autoplay, and algorithmic feeds fracture concentration, making sustained critical thought increasingly difficult. Han also warns of overinformation and noise, where an excess of data leads not to enlightenment but to paralysis. When every opinion, fact, and conspiracy theory is granted equal platform space, our capacity for discernment collapses. Furthermore, forced positivity and self-branding reward performative outrage and shallow hot takes over nuanced discourse. Dissent is not always censored; it is often simply algorithmically suppressed or drowned out in the noise. This system cleverly avoids the need for overt censorship; it merely floods the mind until thinking deeply becomes a practical impossibility.
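To see how “engagement, not understanding” becomes an engineering objective, consider a minimal sketch of a feed ranker. Every name and weight below is hypothetical, not any platform’s actual code; the point is structural: nothing in the scoring function rewards comprehension, so nothing in the resulting feed is ordered by it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float  # model-estimated click probability
    predicted_dwell: float   # expected seconds of attention captured
    outrage_score: float     # estimated chance of a high-arousal reaction

def engagement_score(post: Post) -> float:
    # Illustrative weights: real systems tune thousands of signals,
    # but the objective has the same shape -- attention captured.
    return (2.0 * post.predicted_clicks
            + 0.5 * post.predicted_dwell
            + 1.5 * post.outrage_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note what is absent: no term for accuracy, nuance, or the
    # reader's long-term interests. Only engagement is optimized.
    return sorted(posts, key=engagement_score, reverse=True)
```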

Architectural Enforcement: How U.S. Tech Companies Build Distraction

The U.S. IT industry is not merely complicit in this system; it is its primary engineer. This enforcement happens through two interconnected layers: design and personnel.

First, user experience (UX) design functions as a form of thought control. Platforms utilize variable reward systems, akin to Skinner box logic, in which intermittent dopamine triggers (likes, notifications, new videos) keep users in a state of addicted passivity. They promote frictionless consumption by removing natural pauses for reflection, as when Twitter eliminated headlines from shared-article previews. Meanwhile, algorithmic narrowcasting feeds users content that confirms their existing biases, effectively eliminating the cognitive dissonance necessary for growth and critical thought.
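That Skinner box logic fits in a few lines. The following is a toy model with hypothetical names and an arbitrary payout rate, not any app’s production code; what matters is that the reward is intermittent and unpredictable, which is precisely what makes the checking behavior compulsive.

```python
import random

def notification_payout(reward_rate: float = 0.3) -> bool:
    # Variable-ratio schedule: each check of the app MIGHT pay off.
    # Unpredictable rewards sustain compulsive checking far better
    # than predictable ones -- the slot-machine principle.
    return random.random() < reward_rate

def simulate_session(checks: int = 10) -> None:
    for i in range(checks):
        if notification_payout():
            print(f"check {i}: new likes and mentions")   # intermittent dopamine trigger
        else:
            print(f"check {i}: nothing yet... check again soon")  # the hook

simulate_session()
```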

Second, this architecture is protected by the elimination of dissenting voices. Those who best understand these harmful mechanisms, including engineers, ethicists, UX researchers, and journalists, have been systematically purged. The mass layoffs at companies like Meta, Google, and Twitter between 2022 and 2024 disproportionately targeted trust and safety teams, ethical AI researchers, and policy experts. The whistleblowers illustrate the pattern: Frances Haugen left Facebook to expose its internal research on systemic harms, and Timnit Gebru was forced out of Google’s Ethical AI team after raising concerns about the company’s practices. Furthermore, the gig-ification of tech labor ensures that contract workers, such as the content moderators and data labelers who witness the worst platform toxicity firsthand, remain precarious and voiceless. The result is a tech industry that has become structurally incapable of self-correction.

The Future: Can Critical Thought Be Recovered?

Han’s framework suggests that resistance cannot stem from individual willpower alone; it requires profound structural change. One path involves breaking the attention monopoly by supporting alternative platforms, like Mastodon or Bluesky, that explicitly reject engagement-optimized design. Another is regulating the tech industry: enforcing transparency in algorithms and banning the deceptive “dark patterns” that trap user attention. Finally, protecting critical labor is essential. Unionizing tech workers and legally shielding ethicists from retaliation could help restore a measure of accountability and diverse thought within these powerful companies.
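What rejecting engagement-optimized design means can be stated just as concretely. The sketch below is hypothetical (the actual implementations of Mastodon and Bluesky differ), but it captures the two design choices that matter: chronological ordering instead of predictive ranking, and a hard stop instead of an infinite scroll.

```python
from datetime import datetime, timedelta

def chronological_feed(posts: list[dict], session_cap: int = 25) -> list[dict]:
    """Anti-engagement design in miniature: no predictive ranking,
    no infinite scroll. Posts appear in time order and the session
    ends at a hard cap, restoring the pause for reflection that
    engagement-optimized feeds deliberately remove."""
    ordered = sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    return ordered[:session_cap]

# Illustrative usage with synthetic posts
now = datetime.now()
posts = [{"id": i, "posted_at": now - timedelta(minutes=i)} for i in range(100)]
feed = chronological_feed(posts)
assert len(feed) == 25 and feed[0]["id"] == 0  # newest first, hard-capped
```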

Conclusion: Stupidity Is Not an Accident—It’s a Business Model

The “stupidity” Han describes is not a personal failing but a designed outcome of digital capitalism. It is a profitable business model built on architectures of distraction and enforced by the silencing of critique. Until these architectures are dismantled, and until those who challenge them are welcomed back into the room, the cycle of passive cognition will only deepen.

The pressing question thus evolves. It is no longer, “Why are people so easily manipulated?” but rather, “Who benefits from ensuring they stay that way?”

Further Reading:

  • Byung-Chul Han, Psychopolitics: Neoliberalism and New Technologies of Power
  • Shoshana Zuboff, The Age of Surveillance Capitalism
  • Jaron Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now