If you could un-invent something, what would it be?
Introduction: The Ripple Effect of Removal
The question arrives inevitably at dinner tables and late-night conversations: “If you could un-invent something, what would it be?” It is a question that seems to invite moral clarity – a chance to identify a singular technological mistake and erase it from human history. Many respond instinctively: the atomic bomb, social media algorithms, plastics, guns, fossil fuel technology. The appeal is intuitive: remove the harmful artifact, restore the innocence of the pre-invention world, and solve the problems that technology created.
Yet this framing rests on a profound misunderstanding of how invention, knowledge, and human society actually function. To un-invent is not merely to remove a device; it is to attempt to excise a node from a vast, interconnected web of discovery, social practice, and biological necessity. More fundamentally, it represents an attempt to escape responsibility by imagining that the problems we face are primarily technological rather than human in origin. This essay argues that the desire to un-invent reflects a deep misunderstanding of technological causation, human psychology, and the nature of knowledge itself. Examined through the lenses of history, sociology, psychology, and philosophy, the un-invention impulse reveals itself not as a solution but as a symptom – a misdirected longing that obscures what we might actually do: steward our creations with ethics, wisdom, and foresight.
Part I: Historical Analysis – The Myth of the Lone Genius and the Inevitability of Discovery
The Doctrine of Multiple Discovery
A fundamental historical principle stands at odds with the narrative of un-invention: most discoveries and inventions emerge not from singular geniuses but from the convergence of multiple minds, working independently, arriving at similar conclusions.[1][2] This phenomenon, termed “multiple discovery,” suggests that scientific and technological breakthroughs are not accidents attributable to one person but inevitable outcomes of the intellectual and material conditions of their time.
The calculus of Isaac Newton and Gottfried Wilhelm Leibniz exemplifies this perfectly. Newton developed his method of fluxions between 1665 and 1670, producing a treatise in 1671 that remained unpublished for decades. Leibniz independently developed his own variant of calculus beginning in 1674, publishing his findings first in 1684.[3] The two men arrived at essentially identical systems – one geometric, one analytic – through different paths and different cultural contexts, with no meaningful direct collaboration. Today, historians accept without controversy that both invented calculus independently, and that calculus would have emerged regardless of either man’s individual contribution.[1][2] The mathematical tools of the age – Cartesian algebra, infinite series, infinitesimals – had created the conditions for this discovery. The genius lay not in inventing calculus from nothing, but in synthesising existing techniques into a systematic, generalisable method.[3]
The implication is stark: un-inventing calculus by removing Newton or Leibniz would have changed only the timeline and attribution. The calculus would have emerged nonetheless, developed by some other mathematician working within the same intellectual ecosystem. The same principle applies far more broadly. Simultaneous independent discoveries of oxygen (Scheele, Priestley, and Lavoisier in the 18th century) and the theory of evolution (Darwin and Wallace in the 19th century), along with the parallel development of the airplane by multiple experimenters across Europe and America, demonstrate that invention responds to underlying scientific readiness.[2]
The Atomic Bomb and the Inevitability of Destructive Knowledge
No twentieth-century invention carries greater moral weight than the atomic bomb. If any technology could justify the fantasy of un-invention, it would seem to be this. Yet even here, historical examination reveals the trap embedded in the question.
Nuclear fission was discovered in the winter of 1938-39 by Otto Hahn, Fritz Strassmann, Lise Meitner, and Otto Frisch.[4] This was fundamental physics – the recognition that certain heavy atomic nuclei could be split through neutron bombardment, releasing enormous energy. The concept of a nuclear chain reaction followed almost immediately in early 1939.[4] These discoveries were not secret laboratory findings but published scientific results, known to physicists worldwide, including in Germany. The Manhattan Project did not invent the bomb from theoretical possibility; it engineered the bomb from public knowledge.[4]
What the Manhattan Project accomplished was execution: turning known physics into weapons-grade material, solving unprecedented engineering challenges, and assembling the organisational apparatus to achieve all of this at wartime scale. This was no small feat. The project employed over 130,000 people, cost roughly $28 billion in today’s dollars, and pushed engineering and physics into uncharted territory.[4] But could the bomb have been prevented by removing any single individual from the equation? The project itself came close to failure on several occasions; delays of a few months might have resulted in Japan’s surrender before a functional weapon existed.[4] Yet this would merely have postponed the bomb, not prevented it: the fundamental knowledge remained public, and any major industrial power with sufficient resources would eventually have developed nuclear weapons.[4]
Indeed, the historical record vindicated this assessment. The Soviet Union, working with limited access to the Manhattan Project’s technical achievements, developed an atomic bomb by 1949. Other nations followed. The knowledge of nuclear weapons was not a secret that could be un-invented; it was a problem of physics that became solvable once fundamental physics was understood. The bomb exists not because of the genius of specific individuals but because the laws of physics were discovered. To truly un-invent the bomb would require un-inventing nuclear fission itself – which is to say, returning to a state of ignorance about how matter behaves.
This reveals the paradox at the heart of un-invention fantasies: they assume knowledge can be selectively erased or locked away. History shows it cannot. Once a scientific principle is discovered and published, it enters the commons of human understanding. Suppression may delay adoption, but cannot indefinitely prevent it.[5] Scientists in different countries will rediscover the same truths. The competitive and collaborative nature of science ensures that major discoveries cannot remain hidden.[6]
Gunpowder: When Invention Does Not Determine Society
If any invention demonstrates that technology does not dictate its social effects, it is gunpowder. Invented in 9th-century China, gunpowder initially served ceremonial and signalling purposes before evolving into explosive weapons and, eventually, into firearms that hurled projectiles.[7] The trajectory of gunpowder in Europe versus China, however, demonstrates that the same technology can produce radically different social outcomes depending on institutional and cultural context.
China invented gunpowder and developed advanced firearms centuries before Europe, yet its institutional choices prevented the democratisation of violence.[8] The Ming Dynasty’s legal code, the “Da Ming Lu,” prescribed execution for private firearms ownership, even as the military developed sophisticated cannons and muskets.[8] This was not accidental policy but reflected centuries-old Confucian philosophy emphasising centralised control of coercive power. The result: the civilisation that invented gunpowder and early firearms never experienced the feudal fragmentation that Europe underwent.[8][9]
Europe, by contrast, received gunpowder technology centuries after China. Yet European adoption was transformative. Competition among decentralised feudal powers created an arms race in ballistic weaponry. The cannon made feudal fortifications obsolete; the musket negated the military advantage of mounted knights. These technologies did not cause the transition from feudalism to the nation-state, but they accelerated and reinforced it by making centralised, large-scale military organisation more effective than decentralised feudal power.[9] European societies made deliberate choices about how to organise war, and these choices shaped political outcomes.[9]
This distinction is crucial: the same invention – gunpowder and firearms – produced opposite social outcomes in China and Europe, not because the technology determined society, but because human institutions made different choices about how to deploy it. Un-inventing gunpowder would not have prevented the rise of the nation-state in Europe; rather, it would have delayed it and altered the mechanisms through which centralisation occurred. Nor would it have changed the fundamental political logic – that large-scale coordinated violence requires centralised authority – a centralisation that could have been achieved through other technologies and tactics.
Part II: Sociological Perspectives – Technology, Structure, and Dependency
The False Choice Between Determinism and Constructivism
The relationship between technology and society has long been contested through two opposing frameworks. Technological determinism argues that technological development follows its own logic, independent of social forces, and that society must adapt to technological change.[10] Social constructivism counters that society shapes technological development – that human needs, values, and institutions determine which technologies are invented and how they are adopted.[10][11][12]
The evidence suggests the answer is neither pure determinism nor pure constructivism, but rather bidirectional interaction. The Social Construction of Technology (SCOT) model, developed through detailed historical studies, demonstrates that technologies are not fixed objects with predetermined purposes but rather flexible artifacts shaped by the interpretations and practices of different social groups.[11][13] A technology may have multiple meanings – what engineers design, what manufacturers produce, what users adopt, and what social consequences emerge are not predestined but negotiated through social practice.
Yet this insight, while correcting determinism’s naïveté, can obscure a different reality: certain technological developments become effectively inevitable once the underlying scientific knowledge exists and resources are available to develop applications. The Haber-Bosch process illustrates this tension perfectly.
The Haber-Bosch Paradox: Systemic Dependency and Moral Limits
In 1909, German chemist Fritz Haber discovered how to synthesise ammonia by combining atmospheric nitrogen with hydrogen under extreme heat and pressure.[14] This was a remarkable scientific achievement, but it was not obvious that this discovery would reshape human civilisation. Yet when Carl Bosch scaled the process to industrial production in 1913, the consequences became epochal.[14] Ammonia could now be manufactured at scale, making synthetic fertilisers economically feasible.[14]
The historical impact is staggering: the world’s population grew from 1.6 billion in 1900 to over 8 billion today. Scientists estimate that approximately half of the world’s current population would not be alive without the Haber-Bosch process.[14][15][16] The correlation between the introduction of synthetic fertilisers and global population growth is not coincidental but causal. Agricultural productivity increased precisely because farmers could replenish soil nitrogen continuously rather than relying on limited natural sources (manure, nitrogen-fixing crops, bird guano).[15] With pre-Haber agricultural techniques, Earth could support only approximately 4 billion people.[15][17]
Now consider the proposition: un-invent Haber-Bosch, reverse the development of synthetic fertilisers, and return to early 20th-century agricultural methods. The immediate consequence would be catastrophic famine at a scale exceeding any historical precedent. Billions would die. Yet this proposed un-invention is superficially attractive because synthetic fertilisers have created environmental harms: nitrogen runoff causes dead zones in aquatic ecosystems, the production process consumes over 1% of global energy (currently fossil fuel-derived, thus contributing to climate change), and synthetic fertilisers have contributed to soil degradation in many regions.[14][15][17]
The utilitarian calculus appears simple on the surface: weigh the environmental harm against the loss of human life. Yet even this framing reveals the moral trap. We cannot know what counterfactual history would have emerged without Haber-Bosch. Perhaps agricultural populations would have stabilised at sustainable levels. Perhaps agricultural innovation would have proceeded differently. Perhaps civilisation would have collapsed or transformed in ways we cannot predict. We are asked to sacrifice real, living billions for a speculative vision of sustainability.
This is not an argument that Haber-Bosch was ethically justified – only that un-inventing it is not a coherent ethical response. The response to keystone technologies is not erasure but stewardship: developing green hydrogen production to eliminate fossil fuel dependency in fertiliser synthesis, implementing sustainable agricultural practices that maximise the benefits of synthetic fertilisers while minimising environmental harm, and regulating nitrogen use to prevent runoff. These are difficult problems, but they are solvable through human ingenuity, regulation, and collective action. Erasure is neither feasible nor ethically superior.
Unintended Consequences: When “Solutions” Become Problems
The washing machine offers a humbling case study in how technologies deployed to solve problems can, through their interaction with social structures and expectations, create new ones. In the early 20th century, washing clothes was among the most labour-intensive household tasks. The washing machine was heralded as a liberation technology – a device that would free women from gruelling, repetitive, physically punishing work and enable them to pursue education, employment, and leisure.[18][19]
The outcome was far more complex. While washing machines did reduce the time required to wash clothes, they did not reduce the total time women spent on housework or increase their leisure time.[20] Instead, the labour saved through mechanisation was reinvested into household standards that rose to match the capabilities of the technology.[20][19] Clothes could be washed more frequently and more cleanly, so expectations shifted: families were now expected to have more frequent changes of clothes and higher standards of cleanliness. The freed-up time was recaptured by these rising standards.[20][19]
More broadly, washing machines did not challenge gender divisions in household labour.[18][20] Women continued to bear responsibility for laundry, even as the nature of the task changed. Smart home technologies and AI-powered domestic robots follow an identical pattern: they are marketed as solutions to household drudgery, yet evidence suggests they will not redistribute gendered labour but will instead raise expectations of domestic performance for women while further excusing men from domestic involvement.[20][19]
This is not a fault of the technology itself, but rather reflects how technological possibilities interact with existing power structures, gender norms, and cultural expectations. The technology created space for change, but institutional arrangements, cultural narratives, and gender ideologies channelled that space in ways that perpetuated rather than transformed inequality. Understood sociologically, the washing machine teaches us that technology does not automatically liberate; it requires parallel changes in social structure, values, and the distribution of power to produce genuinely emancipatory outcomes.
Part III: Psychological Dimensions – Why We Want to Un-Invent, and Why the Reasons Are Illusory
The Hedonic Treadmill and Adaptation
Humans exhibit a remarkable psychological capacity called hedonic adaptation: the tendency to return to a baseline level of happiness despite significant life changes.[21][22] When you receive a salary increase, you experience an initial surge of happiness that quickly dissipates as you adjust to the new circumstances. Your living standard normalises; the pleasure derived from the additional income decreases. Over time, you return to approximately your previous baseline of happiness.[21][23]
This adaptation process protects us from the extremes of both perpetual elation and perpetual despair. It allows us to remain functional despite adversity and to maintain motivation despite success. But it also means that technologies marketed as solutions to happiness – smartphones providing instant connection, streaming services offering infinite entertainment, social media enabling global communication – deliver diminishing returns in well-being. The initial novelty wears off; we adapt to the new capability and return to baseline happiness.[23][24]
Many expressions of the desire to un-invent something begin from this hedonic treadmill dynamic. People attribute their current anxiety or dissatisfaction to modern technologies – social media, smartphones, the internet – and imagine that un-inventing these would return them to greater happiness. Yet the hedonic treadmill suggests this is illusory. If pre-smartphone happiness levels could be reconstructed, we would likely adapt to those conditions just as we adapt to current ones. The technology is not the cause of unhappiness; it is the lens through which we interpret a fundamental aspect of human psychology.[21][22][23]
The Psychology of Loss Aversion and Nostalgia Bias
A more insidious psychological mechanism drives much un-invention fantasy: nostalgia and what researchers call “declinism” – the belief that the present is worse than the past.[25][26][27] This cognitive bias is powerful and nearly universal. Our memory systematically distorts the past in unrealistically positive directions. We forget or minimise the bad elements of previous eras while rehearsing and reinforcing memories of the good.[26][27]
The psychological mechanism is loss aversion: humans feel losses roughly twice as intensely as equivalent gains.[26] When a new technology creates visible problems (cyberbullying via social media, sleep disruption from smartphones, algorithmic amplification of outrage), these losses loom large in our perception. Meanwhile, the benefits of the technology – the connection it enables, the information it provides, the coordination it facilitates – become background, normalised, almost invisible. We focus on what we have lost (pre-digital social interaction, face-to-face community, slower-paced life) and minimise what we have gained.
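Prospect theory gives this asymmetry a standard formal expression. The figures below are the commonly cited estimates from Tversky and Kahneman’s 1992 work, offered here only to make precise what “roughly twice as intensely” means, not as a claim about any particular technology:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{for gains } (x \ge 0)\\
-\lambda\,(-x)^{\alpha} & \text{for losses } (x < 0)
\end{cases}
\qquad \alpha \approx 0.88,\ \lambda \approx 2.25
\]

With a loss-aversion coefficient λ of roughly two, a visible harm attributed to a new technology weighs about twice as heavily in judgment as an equivalent but now-invisible benefit.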
This explains the persistent cultural narrative of decline, visible as far back as ancient Greece, where Plato’s Socrates worried that writing would erode memory and yield only the appearance of wisdom – a technology so normalised to us that we forget it was once considered dangerously novel.[26][27] Every generation tends to view the present as worse than the golden age of its youth, yet objective measures of welfare – life expectancy, childhood mortality, disease burden, poverty rates – show consistent improvement across centuries.[26][27]
The un-invention impulse thus reveals itself as partially a defence mechanism: an unconscious attempt to externalise responsibility for present unhappiness by blaming external technologies rather than examining internal or structural causes. By imagining that un-inventing social media would restore well-being, people avoid confronting the possibility that isolation, anxiety, and dissatisfaction might stem from economic insecurity, status anxiety, social fragmentation, or existential uncertainty – factors that no un-invention could directly address.
The Extended Mind and Cognitive Amputation
A different psychological mechanism makes un-invention even more problematic: the extended mind thesis. This philosophical position, articulated by Andy Clark and David Chalmers, argues that human cognition is not confined to the brain but extends into tools and artifacts we use.[28][29][30] When you use a notebook to record a thought, that notebook becomes part of your cognitive apparatus. When you use a smartphone to navigate, the device becomes part of your spatial reasoning system.[29][30]
This is not metaphorical. The coupling between mind and tool is functional and real. To lose the tool is to suffer an actual cognitive disability. A mathematician losing access to pencil and paper, or a modern person losing access to digital devices for organising information and maintaining social relationships, does not merely lose convenience – they lose cognitive capacity.[28][29]
Consider what smartphones now are: repositories of memory (photos, messages, calendar events), tools for problem-solving (calculators, search, navigation), extensions of social connection, and interfaces for participating in collective knowledge and institutions. To un-invent the smartphone would not merely mean losing a gadget; it would mean cognitive amputation on an unprecedented scale. Billions of people would lose capacities they have developed to depend on.
This psychological reality undercuts the appeal of un-invention fantasy. We cannot un-invent technologies that have become constitutive of our cognitive capacities without causing genuine harm. The solution is not erasure but thoughtful stewardship and design: shaping technologies to augment rather than impair cognition, protecting privacy and agency within extended cognitive systems, and building redundancy so that technological failures do not catastrophically disrupt critical functions.
Part IV: Philosophical Dimensions – Knowledge, Responsibility, and the Limits of Moral Fantasy
The Utilitarian Calculus and the Problem of Measuring Harm
The most rigorous ethical approach to questions of harmful technologies is consequentialism: calculate the total harm and benefit produced by an invention and determine whether the net balance justifies its existence. This seems straightforward until applied to actual cases.
Consider antibiotics. Since Alexander Fleming’s discovery of penicillin in 1928, antibiotics have saved hundreds of millions of lives. Before them, a scratched knee or a childbirth complication could result in death from infection.[31] Antibiotics transformed infectious disease from a leading cause of mortality into a manageable problem in most contexts. From a consequentialist perspective, they appear unambiguously positive: the prevention of suffering and death vastly outweighs any harm produced.
Yet antibiotics have also created antimicrobial resistance (AMR), a phenomenon where pathogenic bacteria develop the ability to survive antibiotic exposure. Overuse of antibiotics – in human medicine, in veterinary practice, in agricultural animal feed – accelerates the evolution of resistant strains. As resistance accumulates, antibiotics become less effective, and previously treatable infections become dangerous once again.[31][32][33] Some modern bacterial infections are now resistant to multiple classes of antibiotics, leaving limited or no treatment options.[31][33]
The utilitarian calculus thus becomes genuinely complex. Antibiotics have prevented suffering for hundreds of millions of people. The emergence of AMR threatens to make infections dangerous again, imposing future suffering of potentially enormous scale. How do we weigh present lives saved against future lives endangered?[32][34] Do we apply a discount rate to future suffering, valuing it less than present well-being? Do we account for intergenerational justice – the obligation we bear to future generations?[32] Do we recognise that the benefits of antibiotics are distributed globally but unevenly, while the costs of resistance also distribute unequally, with the poorest and least developed nations potentially facing the worst consequences?
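To make the discount-rate question concrete, the standard present-value formula is shown below – purely as an illustration of what is at stake in the choice, not as an endorsement of any particular rate:

\[
\mathrm{PV}(H) \;=\; \frac{H}{(1+r)^{t}}
\]

where H is a harm occurring t years from now and r is the annual discount rate. At r = 3%, a harm fifty years away counts for less than a quarter of its face value; at r = 0, future and present suffering weigh equally. The choice of r is an ethical judgment dressed in technical clothing.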
These questions have no clean answers. Yet the fact that utilitarian calculus becomes difficult does not mean we should un-invent antibiotics – a proposal that would immediately cost hundreds of millions of lives. Rather, it suggests that the proper ethical response is complex stewardship: developing new antibiotics, regulating antibiotic use to prevent resistance, investing in alternative antimicrobial approaches, and building equitable access systems that ensure antibiotics remain effective and available to all.[31][33]
Pandora’s Box: The Inseparability of Knowledge from Application
A deeper epistemological problem confronts any proposal to un-invent something: knowledge cannot be un-known, and it is impossible to separate the discovery of a principle from its potential applications.
Nuclear fission provides the canonical case. The physics of nuclear fission is fundamental science: understanding how certain heavy atomic nuclei can be induced to split, releasing enormous energy. This is a fact about the natural world. Once discovered and published, the knowledge enters the scientific commons. It cannot be un-published, un-understood, or restricted to benign applications. The same physics that could generate electrical power also describes the process by which an atomic bomb works.
Some have proposed that certain knowledge is so dangerous that it should not be pursued – that scientists should self-censor research into gain-of-function pathogen modifications, advanced synthetic biology techniques, or certain forms of artificial intelligence. Yet this position faces profound difficulties.[5]
First, attempts at suppression rarely succeed permanently.[5] Research suppressed in one country or institution proceeds elsewhere, and knowledge that is withheld is eventually rediscovered independently. History records no case in which a major scientific discovery was permanently prevented through suppression.[5]
Second, the distinction between “dangerous” and “benign” knowledge is often impossible to draw in advance. The technology that appears merely theoretical or distant in application can suddenly become practical. Conversely, the research program intended to yield weaponisable results might instead generate unexpected medical applications.[5] The same virology that could engineer a pandemic pathogen might also reveal how to create a vaccine. The same atomic physics that enabled the bomb also enabled nuclear medicine, nuclear energy, and radiometric dating of geological and archaeological materials.
Third, the policy of suppressing knowledge invokes a dangerous epistemic authority. Who decides which knowledge is too dangerous to pursue? History provides cautionary tales of institutions (the Catholic Church, authoritarian governments, scientific academies) suppressing research that challenged their power or ideology.[5] The cost of empowering institutions to censor science is potentially higher than the risk of pursuing it.[5]
A more honest response to Pandora’s box is to acknowledge its reality while rejecting the suggestion that it should never be opened. Science is indeed the process of opening a never-ending series of boxes that contain potential good or ill.[5] Rather than refusing to open them, the responsibility is to approach each box with wisdom, foresight, and mechanisms for responsible innovation: to anticipate potential harms, to design governance structures that manage risks, to ensure benefits are distributed equitably, and to remain accountable to the broader society that bears the consequences of our knowledge and creations.[5]
Technology as Value-Laden and the Problem of Human Responsibility
A final philosophical confusion surrounds the question of whether technology is “neutral” – a mere tool that can be used well or badly, with the moral character determined by the user, not the tool itself. This myth of technological neutrality obscures a crucial reality: all technologies embed values in their design, and these values shape how technologies are used and what consequences they produce.
Social media algorithms provide the clearest modern example. These algorithms purport to rank content according to “engagement” – the likelihood that a user will click, like, share, or comment on a post. This appears neutral: simply presenting content the user is most likely to engage with. Yet this choice of optimisation function is not value-free. By prioritising engagement, algorithms systematically amplify content that provokes emotional response: outrage, fear, controversy, excitement.[35][36] They disfavour nuance, accuracy, and complexity. They treat addictive content the same as enriching content, hollow entertainment the same as genuine insight, provided both generate clicks.[35][36]
This is not a neutral choice. It reflects deliberate value judgments: that user engagement is the paramount good, that profit (engagement drives advertising revenue) supersedes user well-being, and that the platform bears no responsibility for the social consequences of amplifying divisive or false content.[35][36][37] When algorithms function at scales of billions of users, these “small” value choices accumulate into massive social consequences: polarisation, misinformation, degraded discourse, and measurable harms to mental health.[35][36][38]
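To see how thin the technical line is between these value choices, consider a minimal sketch of a ranking function. No platform publishes its actual ranking code; the class, field names, well-being signal, and weighting parameter below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float      # model's estimate of engagement (click/share probability)
    predicted_wellbeing: float   # hypothetical estimate of longer-term value to the user

def rank_by_engagement(posts):
    """The 'neutral-looking' objective: maximise predicted engagement and nothing else."""
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

def rank_with_wellbeing(posts, alpha=0.5):
    """An alternative objective: blend engagement with a well-being signal.
    The single number `alpha` is itself a value judgment about what matters."""
    return sorted(
        posts,
        key=lambda p: (1 - alpha) * p.predicted_clicks + alpha * p.predicted_wellbeing,
        reverse=True,
    )

# Under the first objective an outrage-bait post outranks a thoughtful one;
# under the second, the ordering can flip.
feed = [
    Post("Nuanced explainer", predicted_clicks=0.2, predicted_wellbeing=0.9),
    Post("Outrage bait", predicted_clicks=0.8, predicted_wellbeing=0.1),
]
print([p.text for p in rank_by_engagement(feed)])        # ['Outrage bait', 'Nuanced explainer']
print([p.text for p in rank_with_wellbeing(feed, 0.7)])  # ['Nuanced explainer', 'Outrage bait']
```

The sorting itself is trivial; the contested question is which objective gets sorted by, and who chooses its weights.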
The broader principle is this: every technological design embeds choices about what matters. A city designed around automobile transportation embeds values about speed, individual mobility, and resource consumption. A factory designed for efficiency embeds values about cost reduction, worker safety, and environmental impact. A medical system designed around cost containment embeds values about access, quality of care, and profitability. These values are never neutral, and they cannot be removed from the technology through claims about “proper use.” Rather, they are embedded in the artifact itself.[35][36][37][39]
This recognition of value-ladenness does not mean technologies are inherently harmful or that their designers are villainous. It means that responsibility cannot be externalised or denied. The engineers, designers, and institutions that create technologies bear responsibility for the values embedded in those creations and the foreseeable consequences that follow from them.[38][40][41] This is not an optional ethical nicety; it is a basic requirement of integrity.
The desire to un-invent something is often an implicit acknowledgment of this responsibility – a wish that someone else would have made different choices, or that the consequences of those choices could be magically erased. But this fantasy displaces responsibility precisely at the moment when responsibility is most needed. Instead of asking “what if this had never been invented?” the more urgent question is “how do we now take responsibility for what we have created?”
Part V: The Futility of Regression and the Necessity of Stewardship
Technological Inevitability Is Not Fate
A final myth sustains the fantasy of un-invention: the belief that technological development is inevitable and unstoppable, that once certain knowledge exists, technological deployment follows inexorably. This deterministic worldview is psychologically appealing because it absolves humans of responsibility. If technology is inevitable, then technological consequences are not our fault; we are passengers on a technological tide, not agents shaping our future.[42][43]
Yet this view is demonstrably false. Technology is not inevitable. It is contingent on human choices, on the availability of resources, on institutional decisions, and on cultural values. Throughout history, humans have faced viable technological possibilities and declined to pursue them, or have pursued them and then abandoned them.[43] Flying cars have been technologically feasible for decades, yet they remain rare because the social, regulatory, and economic conditions required to deploy them at scale have not materialised.[43]
Furthermore, appeals to technological inevitability often mask self-fulfilling prophecies. When powerful institutions claim that a technology is inevitable, they marshal resources to make it so, creating the conditions for the very inevitability they describe. If we believe that surveillance capitalism is inevitable, we are more likely to accept the comprehensive data collection and targeted advertising that now characterise digital platforms. Yet alternatives – platforms structured around user privacy, data minimisation, and algorithmic transparency – are technologically feasible; they simply require different business models and regulatory regimes.[43]
The belief in technological inevitability is not only false; it is pernicious. It deprives us of agency at precisely the moment when we most need it. Instead of asking “what technology is inevitable?” we should ask “what technology do we collectively want, and what values should it embody?”[43]
The Real Work: Stewardship, Regulation, and Design Ethics
The un-invention fantasy, when examined seriously, reveals itself as a misdirection of moral energy. The actual challenges posed by our technological civilisation are not solved by wishing artifacts away; they are solved through sustained, difficult, collective work at multiple levels: individual choice, institutional design, regulatory frameworks, and cultural values.
At the level of individual choice, people do have agency. The recognition that technology is value-laden and shapes behaviour does not mean it controls behaviour absolutely. Humans can choose to use technologies differently, to resist their designed affordances, to maintain older practices alongside new ones, to collectively demand changes in design.[38][44] Understanding the mechanisms through which technologies shape behaviour – dark patterns exploiting cognitive biases, infinite scroll driving compulsive use, algorithmic amplification of outrage – empowers us to resist these mechanisms and advocate for alternatives.[38]
At the level of institutional design, the work is to embed ethics into the processes through which technologies are created. This requires codes of conduct for designers and engineers, institutional review processes that consider broader social impacts before deployment, and transparency about how technologies function and what values they embody.[38][44] It requires that engineers and designers understand themselves not as neutral technicians but as bearers of responsibility for the values their creations embody and the consequences they produce.[40][41]
At the level of regulation, the work is to establish guardrails that protect public welfare while preserving innovation. This might include privacy protections that limit data collection, mandatory algorithmic audits that assess bias and harm, labour protections that cushion technological disruption, environmental regulations that require technologies to internalise costs they might otherwise externalise, and antitrust enforcement that prevents technological monopolies from capturing regulatory processes and distorting markets.[38][44][45]
At the level of cultural values, the work is to resist narratives of technological inevitability and technological utopianism alike. It requires recognising that technologies are tools created by humans to serve human purposes, that these purposes are not predetermined by the laws of physics, and that we have ongoing agency in shaping what technologies we develop, how we deploy them, and what consequences we permit them to produce.[43]
The Dangerous Implications of Successful Un-Invention
Finally, we must confront a question rarely asked: what would actually change if we could un-invent something? Would the problems it created vanish?
The fantasy of un-inventing social media, for instance, imagines a world where people are less anxious, less isolated, and less subject to algorithmic manipulation. Yet the drivers of anxiety and isolation – economic insecurity, status hierarchies, social fragmentation, existential uncertainty – would remain. The mechanisms of manipulation would persist, but would operate through different channels. Predatory lending, pharmaceutical marketing, televised politics, workplace surveillance – the older technologies of manipulation would dominate absent the newer ones.
Or consider the fantasy of un-inventing weapons. Gunpowder did not create the impulse toward organised violence; it merely changed its mechanisms. Before firearms, humans waged war with swords, spears, bows, and catapults. Un-inventing gunpowder would not create a peaceful world; it would simply return warfare to its pre-gunpowder forms. The underlying causes of war – competition over resources, group identity, political power, ideological conflict – would persist.
What would actually change is this: we would lose the knowledge of how to accomplish certain things, and we would lose the benefits those accomplishments provide. Millions would die without antibiotics. Billions would face hunger without synthetic fertilisers. Billions would lose cognitive capacities on which they have come to depend. And the underlying problems that drove us to invent these things in the first place – disease, food scarcity, cognitive limitations – would still demand solutions.
The un-invention fantasy is thus ultimately a fantasy of escape – an imagining that we might avoid the difficult work of living wisely with our creations by simply erasing the creations themselves. It offers false comfort: the comfort of imagining that our problems are external, that they could be solved by removing artifacts rather than changing ourselves, our institutions, and our values.
Conclusion: Un-Ringing the Bell
The desire to un-invent something expresses something true: our technologies have consequences we do not fully control, and many of these consequences are harmful. Social media does correlate with increased anxiety and polarisation. Synthetic fertilisers do contribute to environmental damage. Weapons do enable unprecedented violence. The desire to reverse these harms is not irrational.
Yet the fantasy of un-invention mistakes the source of the problem. We do not suffer from technology abstractly; we suffer from specific choices about how technologies are designed, deployed, regulated, and governed. We suffer from the values embedded in our technological systems – profit maximisation over user welfare, engagement over truth, efficiency over equity, convenience over dignity. We suffer from institutions that prioritise innovation speed over responsibility, from regulatory vacuums that permit harm to accumulate before rules are written, from cultural narratives that frame technological development as inevitable and thus beyond critique or control.
These are real problems, but they are not solved by un-invention. They are solved – imperfectly, slowly, requiring sustained collective effort – through the hard work of stewardship: through designing technologies more carefully, regulating them more wisely, governing them more justly, and building institutions and cultures that recognise the values embedded in our technical creations and insist that these values align with our deepest commitments to human dignity, flourishing, and justice.
The bell has been rung. We cannot un-ring it. What remains is to learn to live with the sound it makes and to cultivate the wisdom and responsibility to shape what sounds we create next.
Bob Lynn | © 2026 Vox Meditantis. All rights reserved.
References
[1] Leibniz–Newton calculus controversy
[2] Multiple discovery
[3] Newton, Leibniz, Calculus – Mathematics
[4] Manhattan Project – Encyclopedia of the History of Science
[5] Science is a Pandora’s box – but we should open it anyway
[6] Reflections on “Making the Atomic Bomb”
[7] 10 Moments in the Invention of Guns and Gunpowder
[8] The Historical Origins and Contemporary Value of China’s …
[9] Civilization #45: The Gunpowder Revolution
[10] Technological Determinism vs Social Constructivism – Studocu
[11] Determinism versus Constructivism – TU Delft OCW
[12] Technological Determinism versus Social Determinism
[13] Social construction of technology – Wikipedia
[14] The Haber-Bosch Revolution: How Hydrogen and Nitrogen …
[15] How fertiliser helped feed the world
[16] World population with and without synthetic nitrogen …
[17] The Environmental Impact of the Haber-Bosch Process
[18] The gendered division of household labor and emerging …
[19] A History of the Subtle Sexism of Home Technology
[20] AI Solutions For Domestic Labor May Exacerbate Inequities
[21] Hedonic treadmill
[22] Hedonic Adaptation to Positive and Negative Experiences
[23] How the Hedonic Treadmill and Adaptation Affect Your …
[24] Hedonic Treadmill
[25] The Distinct Psychological Roles of Nostalgia and Declinism …
[26] Why we think life was better in the ‘good old days’ | News
[27] Rosy Retrospection and Declinism: Why the Past Looks …
[28] Overcoming the ‘inside–outside’ dualism in the extended …
[29] The extended mind thesis is about demarcation and use of …
[30] Extended Mind Thesis
[31] Antibiotic resistance as a tragedy of the commons
[32] antimicrobial resistance and distributive justice
[33] Perspectives on the Ethics of Antibiotic Overuse and …
[34] antimicrobial resistance and distributive justice – UCL Discovery
[35] Value Alignment of Social Media Ranking Algorithms
[36] Embedding Societal Values into Social Media Algorithms
[37] Beyond Neutrality: Conceptualizing Platform Values
[38] Ethics of UX Design in Social Media
[39] What if algorithms could abide by ethical principles? …
[40] Rethinking scientific responsibility – PMC
[41] Scientific Responsibility and Development – PMC
[42] Ten Logical Fallacies of Popular AI Narratives — May 2025 …
[43] The fallacy of technological inevitability – Tools for the revolution
[44] Ethically Designing Social Media
[45] Responsible Innovation Advantage in Knowledge Exchange
[46] The Day Oppenheimer Feared He Might Blow Up the World
[47] Charles Bossut on Leibniz and Newton
[48] ‘Destroyer of Worlds’: The Making of an Atomic Bomb
[49] Newton and Leibniz: The Fathers of Calculus
[50] Production of nitrogen fertilizer in relation to world population
[51] Was the making of the atomic bomb inevitable?
[52] Can inequality be blamed on the Agricultural Revolution?
[53] How the Agricultural Revolution made us inequal
[54] The “Gender Agenda” in Agriculture for Development and …
[55] What if Gunpowder Never Existed? by Alternate History Hub
[56] Analysis of Technological Determinism and Social Constructionism
[57] Historical availability of arable land affects contemporaneous …
[58] The irony of gunpowder | OUPblog
[59] On the Origins of Gender Roles:
[60] The Golden Age Fallacy » Beyond the Rhetoric – Michael Kwan
[61] The Extended Mind and the Influence of Cognitive Artifacts …
[62] Against Strong Ethical Parity: Situated Cognition Theses and …
[63] Nostalgia bias: Understanding our perception of the past
[64] How to Escape the Hedonic Treadmill and Be Happier
[65] Pandora’s box: The two sides of the public sphere
[66] Pandora’s Box – Thematic Option Honors Program
[67] What Pandora’s Box tells us about AI
[68] Antibiotics and antibiotic resistance: what do we owe to …
[69] The Power of Metaphor and Myth: Pandora’s Box (2014)
[70] Technology Neutrality is a Myth
[71] Have Technological Advances Reduced Chore-Time? …
[72] Polio, public health memories and temporal dissonance of re-emerging infectious diseases in the global north
[73] The Paradox of Choice
[74] Poliomyelitis: Historical Facts, Epidemiology, and Current …
[75] Between access and anxiety: the paradox of digital mental health …
[76] History of Polio: Key Milestones & Global Eradication
[77] The Paradox of Choice: The Intersection of Freedom and …
[78] Poliomyelitis amidst the COVID-19 pandemic in Africa: Efforts, challenges and recommendations
[79] Between access and anxiety: the paradox of digital mental …
[80] Can Technology Save Us from Housework? with Helen …
[81] History of polio – Wikipedia
[82] The Paradox of Choice
[83] Ethical and Regulatory Considerations for Using Social …
[84] Ethical design in social media: Assessing the main …
[85] The Slippery Slope Fallacy – by Alexander Rink – Gödel’s
[86] A new heuristic for understanding knowledge co …
[87] 15 Slippery Slope Fallacy Examples (2026) – Helpful Professor
[88] Design Factors of Ethics and Responsibility in Social Media
[89] The Slippery Slope Fallacy: a short animated explanation …
[90] Transferability of knowledge and innovation across the world

