Digital Colonialism: The New Empire of Compute

April 7, 2026

As Europe races to build sovereign artificial intelligence capacity, a deeper question is beginning to surface: has digital power quietly replaced older forms of empire? Nocturnal Cloud correspondent Pritish Beesooa examines who owns the infrastructure beneath artificial intelligence, and whether the digital future can still belong equally to all.

Artificial intelligence often arrives in ordinary life with remarkable ease. A question is typed into a screen, a paragraph appears in reply, an image is generated in seconds, a voice answers with effortless calm. To the user, it can feel almost immaterial, as though intelligence itself has become something light enough to float quietly through the devices of everyday life.

Yet nothing about artificial intelligence is light. Behind every smooth digital response sits an industrial world of enormous scale: data halls running without pause, cooling systems working continuously, electricity drawn in extraordinary quantities and semiconductors so strategically valuable that they now shape national economic policy. What appears simple at the surface depends on infrastructure as physical and consequential as ports, railways or power stations once were in earlier eras of industrial expansion.

That reality became unusually visible this month when Mistral AI, Europe’s most closely watched artificial intelligence challenger, secured around $830 million in fresh financing to purchase roughly 13,800 advanced chips from Nvidia for a major new data centre near Paris. The announcement was framed as a bold statement of European ambition, yet it also exposed a quieter truth: even Europe must now spend heavily simply to secure enough computational capacity to remain within reach of the technological frontier.

The race for artificial intelligence is increasingly less about software alone and more about who controls the machinery beneath it. For much of the internet age, digital life encouraged the belief that geography mattered less than before. Information moved instantly. Services crossed borders effortlessly. The digital world appeared to promise a new kind of openness in which knowledge travelled more freely than power.

Artificial intelligence is revealing how incomplete that promise always was. The modern digital economy remains deeply rooted in territory, infrastructure and ownership. A small number of companies control much of the cloud architecture through which digital life now moves. A still smaller number shape access to the chips required for advanced computation. Around them, governments have begun to speak of semiconductors in the language once reserved for oil, strategic minerals and energy security.

No company symbolises this concentration more clearly than Nvidia. Its chips have become essential to the training of advanced language models, making one American company central to technological ambitions unfolding across continents. Around that hardware sits another concentration of influence: cloud systems dominated by Microsoft, Amazon and Google, while China continues building its own strategic digital ecosystem through state-aligned scale and domestic control.

For Europe, this creates a growing unease. It has universities, talent, regulation and increasingly ambitious firms, yet the deeper architecture of artificial intelligence still depends heavily on systems built elsewhere. This is why an increasingly uncomfortable phrase has returned to serious debate: digital colonialism.

It is an imperfect phrase, and a controversial one. But it persists because the digital world, for all its claims of openness, is becoming steadily more concentrated at precisely the moment it claims to be universal. The question now emerging is larger than competition between Europe, America and China. It is whether a digital future can genuinely belong to everyone when the means of building that future remain concentrated in so few hands.

When Data Became Territory

The language of colonialism is never neutral. It belongs to one of history’s most violent inheritances, shaped by conquest, forced displacement, extraction and systems of power whose consequences remain visible across continents today. To apply that language to technology therefore demands care.

Yet the term continues to return because, beneath the very different conditions of the digital age, certain patterns feel unexpectedly familiar. Older empires were not built only by occupying land. They were sustained by controlling the routes through which value travelled. Raw materials were taken from one place, processed in another and transformed into wealth elsewhere, usually through systems designed far from the people whose labour or territory made that wealth possible.

The digital economy has produced a quieter version of that imbalance. Every day, billions of people contribute to a system they rarely see in full. A message sent from a phone, a purchase made online, a location shared through an app, a search entered in passing, a photograph uploaded without thought, even the rhythm of how someone pauses while typing: all of it leaves traces. Individually, these moments appear insignificant. Collectively, they form one of the most valuable resources of the modern age.

The striking fact is not that data has become valuable. It is where that value ultimately settles. A user in France, India or Kenya may participate in the same digital world, yet the systems that store, analyse and monetise much of that activity remain concentrated within a remarkably small number of companies, largely governed elsewhere and shaped by priorities often distant from the societies producing the information itself.

That concentration has changed the meaning of digital participation. For years, technology was described as democratising by nature. Access widened, communication accelerated and barriers to information appeared to collapse. The internet gave the impression that power itself had become distributed simply because connection had become widespread.

But connection and control are not the same thing. A platform may feel global while its architecture remains highly centralised. A service may appear universal while the infrastructure beneath it belongs overwhelmingly to others.

This is why thinkers such as Shoshana Zuboff argue that digital life created more than convenience. It created a new economic system in which ordinary human behaviour became a source of extraction. Search habits, movements, preferences and patterns of attention no longer simply described people; they became material from which prediction and profit could be built.

Artificial intelligence now extends that process further. The language models shaping today’s digital future are trained not only on formal datasets, but on decades of human writing, speech, images and shared cultural expression. Vast quantities of collective human output become part of the material through which machine systems learn to respond, predict and generate.

Once again, value is created everywhere, but the computational power needed to transform it remains concentrated in very few places. This is why the phrase digital colonialism continues to resonate, however imperfectly. It captures a growing unease that digital modernity has not distributed power as evenly as once imagined. Instead, many societies now depend daily on infrastructures they neither fully own nor meaningfully shape.

And dependency, whether in history or in technology, often becomes most powerful when it begins to feel normal. The deeper question is not whether people use these systems willingly. Most do, often gratefully. It is whether modern digital life can still claim to be open when the foundations beneath it are increasingly owned by so few.

Can Decentralisation Really Break the Pattern?

For more than a decade, one answer has been offered repeatedly by technologists who distrust concentration: decentralisation.

The promise was simple, at least in theory. If too much digital power had gathered around a handful of platforms, then new systems could be designed where control was distributed rather than centralised, records shared rather than owned by one authority, and value allowed to move more directly between participants. This became one of the founding philosophical claims of blockchain and later of what came to be called Web3: that the internet did not merely need better regulation, but different architecture.

At its most ambitious, decentralisation was presented almost as a correction to the digital age itself. Instead of entrusting identity to large platforms, users could hold their own credentials. Instead of relying on a central institution to verify ownership, records could exist across distributed networks. Instead of allowing data to remain trapped inside corporate systems, value might be returned more directly to those generating it.

The appeal was understandable. By the late 2010s, public trust in digital platforms had already been weakened by repeated data breaches, opaque monetisation models and growing discomfort over how behavioural information had become one of the most profitable resources in modern capitalism. Blockchain appeared to offer not simply a new technology, but a philosophical challenge to the idea that digital life had to remain dependent on large intermediaries.

Yet reality proved more complicated. While decentralised systems promised freedom from concentration, much of the crypto economy quickly developed its own centres of power. Mining capacity gathered where electricity was cheapest. Exchanges became highly influential. Venture capital shaped major protocols. Ownership often concentrated early, even inside systems designed to resist concentration. The language of openness survived, but power still had a tendency to settle.

Even today, although decentralisation remains one of the most intellectually ambitious responses to digital dependency, the sector itself remains heavily influenced by the same geopolitical forces it once hoped to dilute. Much of global crypto infrastructure still depends on capital, cloud services and computational resources concentrated in the United States and, increasingly, in China-linked supply chains, even where regulatory pressure has shifted visible activity elsewhere.

This raises an uncomfortable possibility: decentralisation may alter the shape of control without fully removing the deeper structures that produce dependence.

And yet the idea has not lost relevance. Some of the most serious digital thinkers now argue that decentralisation matters less as a complete alternative and more as a pressure against excessive concentration. Distributed identity systems, sovereign data frameworks and blockchain-based verification models may not dismantle digital power entirely, but they can introduce friction into systems otherwise designed for central capture.

That possibility has become increasingly important as artificial intelligence expands. If data has become the raw material of modern power, then the question of who controls identity, authorship and verification becomes harder to ignore. This is one reason ideas such as self-sovereign identity, zero-knowledge systems and even soulbound digital credentials have moved from theoretical discussion into serious regulatory and technological debate. They suggest a future in which users may not fully escape digital systems but may begin negotiating with them differently.

Still, decentralisation cannot solve what society itself continues to reward. A network may be distributed, yet human behaviour often recreates hierarchy quickly. Convenience repeatedly pulls users back toward large platforms because simplicity remains one of the most powerful forces in digital life.

The deeper truth may be that no architecture alone can fully break digital dependency if the habits of consumption beneath it remain unchanged. Decentralisation offers resistance, but not innocence. It can redistribute power, but it cannot guarantee that power will remain equally held once value begins to accumulate.

Are Consumers Quietly Sustaining the System?

It is tempting to describe digital power as something imposed entirely from above: large companies building systems, governments negotiating influence and infrastructure gathering around those already strongest. Much of that is true. Yet digital concentration has never been sustained by corporate architecture alone. It has also been sustained, quietly and continuously, by the habits of ordinary users.

Every modern digital system depends on repetition. A platform becomes dominant because people return to it every day. A search engine becomes indispensable because millions rely on it without pause. A social network strengthens not only through technology, but through the human reluctance to leave where everyone else already is. Convenience, familiarity and speed often succeed where ideology does not.

This is one reason digital concentration has proved so difficult to disrupt. The modern user may criticise surveillance, question privacy, distrust algorithms and worry about data ownership, yet still participate constantly in the same systems because digital life has become woven into ordinary social existence.

The contradiction is deeply human. A person may object to how a platform captures behavioural information while continuing to upload photographs, store documents, use navigation systems and accept the ease of predictive services. Few users actively choose concentration as a political principle. Most simply choose what works with the least friction.

That choice, repeated billions of times, has extraordinary consequences. Artificial intelligence intensifies this relationship further because it appears to reward participation instantly. A question answered in seconds feels helpful. A draft produced in moments feels efficient. An image generated without cost feels almost miraculous. The user experiences immediate utility while rarely seeing the layers of computation, data extraction and infrastructural dependence that make such convenience possible.

In this sense, modern digital systems do not merely govern behaviour. They often succeed because they understand human behaviour exceptionally well. The economist Yanis Varoufakis has argued that digital platforms increasingly operate less like ordinary markets and more like privately governed spaces where users continue to produce value even when they believe they are simply participating socially. The power of these systems lies partly in how natural they now feel.

And this raises a difficult ethical question: if digital colonialism exists in any meaningful sense, can it survive without consent of a kind, however passive?

Unlike earlier forms of historical dependence, modern users are not coerced into sending messages, uploading memories or relying on digital assistants. They do so because the systems offer real utility, often extraordinary utility. The digital world has delivered genuine democratisation alongside concentration: access to knowledge, communication across borders, creative tools and economic opportunity.

That is why blame alone explains little. Consumers are neither victims in full nor agents in full. They occupy a more ambiguous position: benefiting daily from systems whose deeper consequences they only partly shape. Yet ambiguity does not remove responsibility. Every time convenience is chosen over sovereignty, every time free services are accepted without questioning where value travels, the larger architecture becomes harder to dislodge. Digital dependence rarely deepens through dramatic surrender. It deepens through ordinary repetition.

The reader therefore sits inside the argument, not outside it. The modern digital order has been built not only by engineers, investors and governments, but by billions of small decisions made without much thought at all. And perhaps that is why it feels so difficult to imagine another system: because the existing one has learned how to make dependence feel effortless.

Will AI Free Us, Or Perfect the System?

Artificial intelligence is often described in language that suggests inevitability. It is presented as the next great acceleration: more efficient, more intelligent, more productive, capable of solving problems that older digital systems merely organised. In medicine, it promises earlier diagnosis. In education, wider access to knowledge. In science, faster discovery. In daily life, it increasingly offers relief from friction itself.

That promise is real enough to explain why societies continue moving toward it with such speed. Yet every major technological leap carries an older question beneath the excitement: does the new system redistribute power, or simply refine the structures already in place?

Artificial intelligence has not emerged in an empty landscape. It is being built upon infrastructures already marked by concentration: data held by large platforms, cloud systems dominated by a small number of firms, computational power dependent on scarce chips and enormous capital. This means that even where artificial intelligence appears revolutionary, much of its underlying architecture remains deeply familiar.

The danger is not that artificial intelligence becomes too intelligent. It is that it becomes exceptionally efficient at reinforcing systems already shaped by imbalance. A model trained on global knowledge may still depend on infrastructure concentrated in very few countries. A tool used by millions may still strengthen the economic position of those who own the compute behind it. Even when artificial intelligence appears universal, its material foundations remain unevenly distributed.

This is why the language of salvation around artificial intelligence deserves caution. AI may help people write, diagnose, organise and imagine. It may widen access in ways earlier technologies did not. It may even lower certain barriers by making sophisticated tools available to individuals who previously lacked them. But access to a tool is not the same as ownership of the system producing it.

That distinction may define the next decade. Some argue that artificial intelligence itself could weaken concentration by reducing dependence on traditional expertise and lowering entry barriers for smaller actors. A student, a small business, a rural clinic or an independent researcher may now access capacities once reserved for institutions. In that sense, AI does contain democratising potential.

But democratisation only goes so far if the foundations remain concentrated. The deeper question is philosophical rather than technical: can intelligence generated through concentrated infrastructure truly become public in any meaningful sense, or does it remain another layer of dependency hidden beneath convenience?

The answer may not lie in rejecting artificial intelligence, nor in imagining that decentralisation alone can resolve what politics, economics and human behaviour continue to reproduce. It may instead lie in whether societies begin asking harder questions while the architecture is still being built: who stores the data, who governs identity, who owns compute, who benefits when intelligence becomes scalable. Because artificial intelligence will not decide whether digital modernity becomes more equal. People will. And history suggests that whenever new systems appear universal, the question worth asking first is often the oldest one of all: universal for whom?