
Harm Without Authors: The Human Accountability AI Cannot Replace

January 20, 2026


Artificial intelligence may generate the images, platforms may amplify them, and developers may claim neutrality, but harm still begins with human choice. Drawing on Mary Aiken’s The Cyber Effect and the Grok controversy on Elon Musk’s X, Nocturnal Cloud’s Tech Correspondent Pritish Beesooa explores why responsibility is vanishing in the AI era and asks whether the law must evolve to hold both users and creators to account across borders.
A System Without Innocence
The images did not emerge in isolation. They travelled through a digital ecosystem already primed for exposure, amplification and reward. Social media platforms, built to prioritise immediacy and engagement, did what they have been engineered to do: they spread content faster than reflection could intervene. What made this episode different was not only the explicit nature of the material generated by Grok on Elon Musk’s X platform, but the ease with which deeply intimate representations could be produced, circulated and detached from consequence.
These were not images leaked from private spaces. They were manufactured. Generated at the intersection of user intent and machine capability, then released into an attention economy that thrives on provocation. The harm that followed was not theoretical. It landed on real people, in real time and in public view.
It is tempting, in moments like this, to locate failure entirely in the technology. How could an AI system be allowed to generate such material? Why were safeguards insufficient? Why did platforms respond after harm rather than before it? These questions are necessary, but they are not enough. They risk flattening a more uncomfortable truth: technology did not act alone.
The individuals who prompted, shared and amplified this content were not unaware of its impact. They understood its power, its capacity to humiliate, to violate, to endure. The fact that artificial intelligence reduced friction does not erase intention. It merely obscures it. This is where the debate becomes culturally complicated. We live in an era that speaks openly about sexual expression and empowerment. Platforms such as OnlyFans have been defended, often rightly, as spaces where individuals can exercise agency over their bodies, labour and image. For many, they represent autonomy in a digital economy that has historically exploited intimacy without consent or compensation. That argument deserves thoughtful consideration.
But empowerment is grounded in choice. Consent is the axis on which it turns. When sexualised imagery is generated or circulated without permission, it no longer belongs to the language of agency. It becomes an instrument of harm. To flatten the distinctions between types of sexual content, treating them all as equally moral or immoral, is to misunderstand both freedom and abuse.
Social media has a way of erasing that nuance. Algorithms do not recognise consent; they recognise engagement. They reward visibility, not context. In such an environment, the line between expression and exploitation can be crossed in seconds, particularly when tools exist that can fabricate realism with little effort.
The analogy often used in these debates is an uncomfortable one, but it is instructive. A gun, in itself, is an object. In certain contexts, it may serve legitimate purposes. In the wrong hands, it kills. Responsibility does not vanish because the mechanism exists. The choice to pull the trigger matters.
Artificial intelligence operates differently, but the moral structure is comparable. A generative system does not decide to harm. It enables. The responsibility lies with those who choose how to use it and with those who decide to release it into the world without adequate constraint.
This dual responsibility is what current debates struggle to hold. Technology companies insist they cannot control every use. Users insist they are merely interacting with what is available. Social media platforms position themselves as neutral conduits while profiting from scale. Each claim contains partial truth, yet together they produce a vacuum in which accountability dissipates.
It was into this vacuum that Yoshua Bengio, one of the pioneers of modern artificial intelligence, issued his warning. The AI industry, he said, has become “too unconstrained.” Coming from within the field, the remark cut through the usual defences. It acknowledged that innovation has accelerated beyond the ethical frameworks meant to govern it.
What the Grok controversy reveals, then, is not a single failure but a systemic one. We have built technologies capable of generating harm at unprecedented speed, placed them inside social systems that reward amplification, and framed responsibility in ways that allow every actor to step back when damage occurs.
The danger is not that artificial intelligence exists. It is that it exists within an ecosystem that confuses visibility with value, expression with exploitation, and capability with permission. In such an environment, harm is not an aberration. It is a foreseeable outcome.
If we are to talk seriously about accountability, we must resist the temptation to choose sides. This is not a question of blaming users instead of platforms, or platforms instead of developers. It is about recognising that responsibility in the digital age is layered, cumulative and shared, and that denying any one layer weakens them all.
To understand why individuals so readily cross ethical boundaries online, and why systems so consistently fail to restrain them, we need to examine how digital environments reshape behaviour itself. That question has been explored in depth by the forensic psychologist Dr Mary Aiken, whose pioneering work on the Cyber Effect offers a crucial lens for what comes next.
Mary Aiken and the Cyber Effect: How Technology Rewrites Behaviour
Long before artificial intelligence began generating images, voices, and identities, Dr Mary Aiken was asking a quieter, more unsettling question: what happens to human behaviour when it moves online?
Aiken, a forensic psychologist who has advised governments, law-enforcement agencies, and international bodies, has spent decades studying how digital environments alter moral perception. Her work does not begin with technology but with people, and what it reveals is not a collapse of ethics but a systematic weakening of the forces that ordinarily keep behaviour in check.
She calls it the Cyber Effect — the psychological shift that occurs when human interaction is mediated by screens. In digital spaces, Aiken argues, individuals experience a diminished sense of consequence. Empathy is dulled. Responsibility feels dispersed. Actions feel less tethered to real-world outcomes, even when those outcomes are severe.
This is not because people suddenly become unethical online. It is because the conditions that reinforce ethical behaviour are quietly stripped away. In the physical world, moral judgement is shaped by immediacy. We see discomfort. We hear tone. We register fear or pain. These signals act as restraints. Online, they vanish. What replaces them is speed, anonymity and distance, a combination that Aiken has shown repeatedly to be corrosive to judgement.
Social media platforms intensify this psychological shift. Designed to maximise engagement, they reward immediacy and provocation rather than reflection. Content moves faster than context. Images detach from origin. Meaning is flattened into reaction. In such environments, ethical nuance struggles to survive. The system does not ask, “Should this exist?” It asks only, “Will this travel?”
Aiken’s work is particularly relevant to contemporary debates around sexual content and consent. The rise of platforms such as OnlyFans has been framed, often correctly, as a form of digital empowerment. For many creators, these spaces offer autonomy, control, and consent in a marketplace that previously offered none of those things. The ethical foundation of that argument rests on agency. Participation is chosen. Boundaries are defined.
But the Cyber Effect explains why those boundaries dissolve once content enters broader digital circulation. Online systems do not preserve consent; they strip it of context. An image created for one audience can be lifted, reinterpreted, or weaponised elsewhere. With generative AI, that rupture becomes more severe. Sexualised representations can now be produced without any originating act of consent at all. The image may never have existed in reality, yet the harm it causes is immediate and deeply personal.
Aiken’s research helps us understand why individuals participate in this harm while maintaining a sense of detachment. When actions are mediated through technology, the sense of authorship weakens. People feel less like actors and more like operators. The language reflects this shift. It is no longer “I created this,” but “the system generated it.” Responsibility feels shared with the machine, the platform, the crowd.
Yet Aiken is clear: explanation is not absolution. The Cyber Effect does not remove agency. It reveals how easily agency can be misused when environments are designed without restraint. Individuals still make choices. They still understand, at some level, the impact of what they are doing. The fact that technology makes harm easier does not make it accidental.
Where Aiken’s work becomes most challenging for the technology industry is in its implications for design. If we know that anonymity, speed, and scale weaken moral restraint, then building systems that maximise all three without safeguards is not neutral innovation. It is a failure to account for human psychology. Ethical breakdown, in this context, is not a surprise. It is a foreseeable outcome.
Generative AI magnifies every element Aiken identified. It removes effort. It accelerates output. It allows a single act of intent to be multiplied indefinitely. What once required time, skill, and exposure can now be done instantly, invisibly, and repeatedly. The Cyber Effect, once a gradual shift, now operates at machine speed.
This is why debates that focus solely on user behaviour or solely on technology miss the point. The harm emerges from their interaction. People behave differently online because systems encourage them to do so. Systems cause harm because people use them in ways they understand to be damaging. Responsibility sits in the space between.
Mary Aiken’s contribution is to make that space visible. She reminds us that digital environments are not morally neutral arenas. They shape conduct, perception, and judgement. To ignore this is to continue building systems that quietly erode restraint while loudly denying responsibility.
If accountability is to mean anything in the age of artificial intelligence, it must begin with an honest reckoning with how technology changes us. Not just what it enables, but what it encourages. Not just who uses it, but how it rewires behaviour. Only then can we begin to ask the harder question that follows: if we know all this, why do we keep building systems as though we do not?
When Responsibility Disappears
Mary Aiken’s work leaves us with an uncomfortable clarity. If digital environments alter behaviour in predictable ways, then the harms we witness online are rarely spontaneous. They are the product of systems that weaken restraint while dispersing responsibility. Yet when damage occurs, accountability does not crystallise; it fragments.
This is the paradox at the heart of the digital age. Harm is real, visible and often devastating, but responsibility dissolves almost immediately into the architecture that enabled it. Each actor steps back. The user insists they merely followed what was available. The platform claims it cannot control scale. Developers speak of open tools and unintended misuse. The law, still rooted in analogue assumptions, struggles to locate a single accountable hand.
The effect is not confusion. It is diffusion.
Digital harm travels across layers that were never designed to carry moral weight. The person who prompts an AI-generated image can claim distance because the machine produced it. The engineers who built the system can claim neutrality because they did not choose the output. The platform that hosts it can claim passivity because it did not author the content. Responsibility moves continuously, rarely settling long enough for consequence to attach.
This structural ambiguity has served the technology industry well. For years, platforms positioned themselves as intermediaries rather than publishers, escaping obligations associated with editorial accountability. Developers framed their creations as general-purpose instruments, arguing that morality resides with the end user. Corporations emphasised innovation and scale, language that shifts focus from responsibility to inevitability. In this ecosystem, harm becomes something that happens within systems rather than something caused by them.
But Aiken’s research makes such defences increasingly fragile. If we know digital environments lower inhibition and amplify misconduct, then releasing technologies that accelerate those conditions without safeguards is not morally neutral. The design itself becomes part of the causal chain. To ignore foreseeable misuse is to participate indirectly in its consequences.
Yet acknowledging systemic failure does not remove personal agency. The user who knowingly deploys these tools to humiliate or exploit remains accountable for that choice. The existence of enabling technology does not transform intention into accident. As with any powerful instrument, the capacity for harm rests not only in its design but in its use.
This duality is precisely where accountability falters. Public discourse often seeks a singular culprit, a clean line of blame. Digital systems resist that simplicity. They are collaborative structures, where harm emerges from interaction rather than isolation. Responsibility is shared, but because it is shared, it is also easier to evade.
The law has been slow to adapt. Traditional liability frameworks presume clear authorship and direct causation. Generative AI disrupts those assumptions, inserting machines into processes once governed solely by human hands. Jurisdictional boundaries complicate enforcement further. An image can be generated in one country, hosted in another, and viewed globally within seconds. National legal systems struggle to hold actors accountable across infrastructures that operate without regard for geography.
This creates what might be called a moral vacuum. Harm expands while responsibility contracts. Victims confront systems designed for reporting rather than redress. Platforms promise reform while avoiding liability. Developers refine tools without fully absorbing their social consequences. Users operate in spaces where anonymity masks intent and scale obscures impact.
In such an environment, accountability becomes not a principle but an afterthought.
The danger is cumulative. As technologies become more powerful and more deeply embedded in everyday life, the cost of unresolved responsibility rises. Trust erodes. Institutions lose credibility. Individuals suffer harms that persist long after the digital moment has passed. Yet without structural mechanisms that assign obligation across every layer (user, platform, developer and regulator), each new scandal follows the same cycle of outrage and evasion.
Mary Aiken’s analysis tells us why this keeps happening. Systems that reshape behaviour cannot be separated from the behaviour they produce. When technology removes friction, it also removes the pause in which ethical judgement lives. When platforms reward engagement without context, they accelerate harm. When developers deploy tools without anticipating misuse, they widen the space in which responsibility can disappear.
The accountability vacuum is therefore not an accident of complexity. It is a consequence of design, governance and denial. And if responsibility continues to evaporate every time harm emerges from digital systems, the question that follows becomes unavoidable: is ethical failure already embedded at the moment these technologies are imagined? That is the deeper challenge the industry must confront next.
Built This Way: When Ethical Failure Begins at Design
By the time harm appears on a screen, it is already too late to ask where responsibility should lie. The most consequential decisions have already been made, not by users in moments of impulse but by designers, executives and entrepreneurs, long before a product ever reaches the public. What Mary Aiken’s work ultimately points toward is not simply a failure of behaviour, but a failure of imagination at the point of creation.
Modern technology is often defended as neutral infrastructure, shaped only by how it is used. Yet this belief collapses under scrutiny. Systems do not emerge fully formed from abstraction. They are imagined with assumptions about users, incentives, and acceptable risk. Choices are made about friction, speed, scale and visibility. Safeguards are weighed against growth. Ethics is rarely absent, but it is frequently postponed.
The history of social media offers a clear illustration. Platforms such as Facebook and Instagram were not designed to destabilise democratic discourse or amplify harm, yet they were built around engagement metrics that rewarded outrage, emotional intensity and virality. The consequences — misinformation, harassment, radicalisation — were not unforeseeable. Researchers, including internal teams, flagged them years in advance. But redesigning systems to privilege restraint over reach would have required a fundamental rethinking of the business model.
The same pattern has repeated itself with generative AI. Tools capable of synthesising images, voices and text are released into public space under the banner of innovation, often accompanied by assurances that misuse will be addressed later through moderation or policy updates. But the initial conditions matter. A system designed to generate realism at scale, without robust consent-aware constraints, will predictably be used to fabricate, deceive and violate. The question is not whether developers intended harm, but whether they treated it as an acceptable externality.
This is where the analogy of the gun, uncomfortable as it may be, becomes instructive. A firearm does not decide to kill. But societies impose strict controls on its manufacture, sale and use precisely because its potential for harm is understood. We do not accept the argument that responsibility lies only with the person who pulls the trigger while ignoring the systems that distribute weapons without oversight. Design, regulation and use are all considered morally relevant.
Technology, by contrast, has been granted a far looser ethical perimeter. Entrepreneurs are encouraged to “move fast,” to disrupt first and address consequences later. Harm is framed as the cost of progress. Yet when systems are built to remove friction, obscure authorship and amplify reach, they are not merely tools awaiting misuse. They are environments in which misuse is made easier.
Examples abound. Facial recognition technologies deployed without adequate bias testing have led to wrongful arrests. Recommendation algorithms have pushed users toward increasingly extreme content because intensity drives engagement. Gig-economy platforms have classified workers as independent contractors by design, sidestepping labour protections while maintaining control. In each case, the outcome was not a surprise; it was an extension of the system’s underlying logic.
Generative AI now raises these stakes dramatically. When systems can simulate identity, fabricate intimacy and scale deception at unprecedented speed, ethical design can no longer be treated as optional. Yet many of these tools are still released with minimal constraints, justified by claims of openness or user responsibility. Safeguards are bolted on only after public backlash, as though harm were an unforeseen side effect rather than a foreseeable risk.
Mary Aiken’s research challenges this posture directly. If digital environments alter behaviour in predictable ways, then ethical responsibility begins not at the moment of misuse but at the moment of design. To build systems that amplify disinhibition while denying accountability is to participate in the conditions that produce harm. Negligence, in this context, is not about intention but about refusal to act on known evidence.
This is where the technology industry’s rhetoric begins to fray. Claims of neutrality sit uneasily alongside aggressive optimisation for engagement. Assertions of user freedom ring hollow when systems are engineered to guide behaviour. Innovation becomes a shield behind which ethical responsibility retreats.
The uncomfortable implication is that many of the harms now associated with AI and social media are not bugs in the system. They are features of a design philosophy that prioritises scale over care. When entrepreneurs imagine users as abstract entities rather than psychologically complex individuals, when developers treat harm as an edge case rather than a design constraint, failure is not accidental. It is structural.
The question, then, is not whether technology can be ethical after the fact. It is whether the industry is willing to imagine technology differently from the outset. That would require slowing down, building in friction and accepting limits, choices that run counter to the prevailing culture of disruption.
If accountability is to be more than a rhetorical response to scandal, it must be embedded where harm first becomes possible. Not in apology statements or revised terms of service, but in the architecture itself. And that leads inevitably to the final unresolved question: if technology continues to operate across borders while law remains bound by them, who has the authority and the will to enforce accountability when ethical design fails?
Accountability Without Borders
If technology has dissolved geography, the law remains stubbornly bound by it. Harm now travels instantly across borders, yet justice still moves at the speed of national systems designed for a pre-digital world. This tension has become one of the defining fractures of the artificial intelligence era. When abuse is created in one jurisdiction, hosted in another and consumed globally, responsibility becomes not only ethically complex but legally evasive.
The Grok controversy exposed this imbalance with uncomfortable clarity. An AI system developed within the infrastructure of a global platform was used to generate non-consensual, sexually explicit images that spread across networks unrestrained by territorial boundaries. Victims were left confronting a system in which perpetrators may never be identifiable, developers may never be liable, and platforms may never be compelled to act beyond reputational pressure. The architecture of harm is global; the architecture of accountability remains fragmented.
This gap is not incidental. It reflects a broader failure to adapt legal frameworks to technologies that operate without regard for borders. For decades, technology companies have expanded under the protection of jurisdictional ambiguity. Platforms present themselves as neutral intermediaries. Developers describe their systems as tools rather than actors. Corporations invoke complexity when governments seek enforcement yet celebrate scale when it delivers profit. The language shifts fluidly to protect innovation from consequence.
But as Mary Aiken’s work makes clear, digital environments are not neutral spaces. They are behavioural ecosystems that shape conduct and weaken restraint. When systems amplify anonymity, speed and reach without safeguards, misuse is not unforeseeable. It is predictable. To treat cross-border harm as an unfortunate side effect is to ignore the known psychological and structural conditions under which it occurs.
The law has struggled because responsibility in digital systems is layered. Traditional frameworks assume direct causation: an individual commits an act and bears the consequence. Generative AI disrupts this clarity. A harmful image may be produced by a machine, prompted by a user, enabled by a developer, and disseminated by a platform that profits from its circulation. Each actor is involved. Each claims distance. The result is a legal grey zone in which accountability evaporates precisely when it is most needed.
Yet the diffusion of responsibility cannot become its denial. Individuals who knowingly weaponise AI to harm others remain morally and legally culpable. The presence of a machine does not erase intention any more than a weapon erases the responsibility of its user. At the same time, creators and deployers of these systems cannot evade liability by retreating behind the rhetoric of neutrality. When harm is foreseeable and safeguards are absent, responsibility extends upstream.
This is where existing legal models begin to look inadequate. Digital abuse is treated as a matter for platform moderation rather than international enforcement, even though its consequences mirror harms recognised elsewhere in law as transnational and prosecutable. Financial crime, trafficking and human rights abuses are pursued across jurisdictions because their scale demands it. AI-mediated exploitation now operates at a comparable level of reach and impact yet remains constrained by national limitations.
The uncomfortable question is whether societies are prepared to evolve law to match technological reality. A borderless digital world cannot be governed by accountability that stops at the edge of national jurisdiction. If platforms profit globally, they must answer globally. If developers deploy tools capable of systemic harm, they must accept responsibility beyond their own territories. And if users commit abuses through these systems, legal consequences must follow them regardless of geography.
Mary Aiken has warned repeatedly that technology designed without ethical foresight will inevitably produce harm. The Grok scandal is not an anomaly but a symptom of a broader culture that builds first and regulates later. Innovation has outpaced governance because accountability has been treated as optional rather than foundational. The law now faces the same choice technology has long avoided: adapt or remain complicit through inaction.
The question that opened this article asked where responsibility resides when AI causes harm. The answer is not singular. It lies with the individual who chooses to exploit the tool. It lies with the developers who release systems without adequate safeguards. It lies with platforms that amplify content while disclaiming liability. And it lies with legal systems that have yet to extend accountability across the borders technology erased years ago.
The danger of the AI age is not that machines act without conscience. It is that humans increasingly do so behind them. When responsibility is allowed to dissipate into code, anonymity and jurisdictional gaps, harm becomes not only easier but normalised. Accountability must therefore be reclaimed at every layer — behavioural, technological and legal — if digital life is to remain humane. Otherwise, the most powerful systems ever created will continue to operate in a world where everyone is involved, yet no one is responsible.
By Pritish Beesooa,
Nocturnal Cloud Tech Correspondent