Introduction

AI is changing everything, from chatbots that talk like people to cars that drive themselves. But while big tech hubs in North America, Europe, and China race ahead, much of the world is stuck on the sidelines. In many low-income regions, people lack the computers, the connectivity, or even the right data to join in. This gap in technology access is creating a kind of “digital colonialism,” in which billions miss out on AI’s benefits. Below, we look at how compute power, connectivity, model design, and research priorities work together to keep the Global South and other vulnerable groups on the back foot.

Who Holds the Compute Power?

Training a modern AI model requires enormous numbers of high-end GPUs: think entire data centers full of specialised hardware. Right now, a handful of major cloud and tech companies (Google, Microsoft, Amazon, Meta, and Chinese firms such as Alibaba and Baidu) operate the vast hyperscale clusters that power over 80% of global AI training workloads. Most of these clusters are based in North America, Europe, and China.

Experts categorise countries into the “Compute North” (where these GPU farms live), the “Compute South” (where you can run, but not train, AI models) and “Compute Deserts” (where there is no public cloud AI hardware at all). Because much of the Global South lacks nearby training facilities, researchers there face both prohibitive costs and slow connections over long distances, effectively locking them out of AI development.

Infrastructure and Connectivity Gaps

Even where cloud-based AI services exist, they are out of reach for many because internet access is patchy or too costly. As of 2024, about 68% of people worldwide were online, but in low-income countries that figure drops to just 27%. Mobile networks have improved things in cities, but in rural and remote areas, where many of the poorest live, people remain largely offline, so they cannot take advantage of AI tools for education, health care, or work.

Language Model Biases and Exclusion

Many AI tools, like chatbots, transcription apps, and voice assistants, are trained mostly on English data from Western countries. This creates built-in biases that make it harder for people from other parts of the world to use them. For example, top speech recognition systems make up to 22% more errors when transcribing non-native English speakers, especially those from the Global South.

And it’s not just about speech. Text-based AI models are also heavily skewed toward a small number of high-resource languages: over 90% of the language data in large models comes from just 100 languages, leaving out thousands of others spoken across Africa, South Asia, and Latin America. This severely limits accessibility.
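This kind of concentration is easy to illustrate with a toy calculation. The figures below are entirely synthetic (a Zipf-like distribution over hypothetical languages, not real corpus statistics); the sketch simply shows how a small head of high-resource languages can come to dominate a training corpus.

```python
# Toy illustration of language-data concentration in a training corpus.
# The token counts are synthetic (Zipf-like), NOT real corpus data.

def top_k_share(token_counts, k):
    """Fraction of all tokens contributed by the k largest languages."""
    ranked = sorted(token_counts, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical corpus: 2,000 languages with Zipf-distributed token counts,
# i.e. the rank-r language contributes tokens proportional to 1/r.
counts = [1_000_000 / rank for rank in range(1, 2001)]

share = top_k_share(counts, 100)
print(f"Top 100 of 2,000 languages hold {share:.0%} of the tokens")
```

Even this mild synthetic skew puts well over half the tokens in the top 100 languages; real web-scraped corpora are far more lopsided, which is how the 90%-from-100-languages pattern arises.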

What’s often overlooked is how people in non-English-speaking countries adapt. Since most keyboards default to English, many users type their native languages phonetically using English letters. When I tested this with ChatGPT, it responded in the same romanised format, but the replies didn’t feel natural. Instead of sounding like a real speaker of the language, the responses read like literal English phrases translated and then converted into that language’s words. This suggests the model wasn’t trained on organic conversations in that language; it was merely translating and transliterating, without capturing the tone, flow, or cultural nuance of how the language is naturally used.

Development Priorities and Research Dominance

AI research today mostly follows the goals of big tech companies, especially those based in Silicon Valley. Because of this, a lot of money and skilled researchers are focused on building huge, powerful models that require vast amounts of data and computing power. But this focus draws attention away from smaller, practical AI solutions, such as tools that help farmers spot crop diseases or that diagnose medical issues in places with slow internet.

Reports also show that many regions, including sub-Saharan Africa, Central Asia, and Latin America, are still developing the basics needed to use AI well: strong digital infrastructure, clear government policies, and local expertise.

If we don’t invest fairly in local research and priorities, AI will keep reflecting Western ideas and needs. That means people in many parts of the world may end up relying on technology that doesn’t really serve their communities, or worse, that makes them more dependent on others.

Case Studies: How AI Can Reinforce Global Inequality

Some real-world examples show how AI can deepen global inequality. Many gig workers in countries like India, the Philippines, and Kenya are hired to do tasks like labeling data or filtering harmful content online. They are paid very little, yet their work is essential for training the AI systems used by big tech companies in the West, which reap huge profits from it.

Another example is hiring. Some AI tools that screen job applicants are trained mostly on American and British voices; as a result, people with African or South Asian accents are more likely to be unfairly rejected, even when they are qualified.

These examples highlight a bigger issue: people in the Global South often provide the labor and data that power AI, but they don’t get a fair share of the rewards.

Policy and Governance Gaps

Today’s international rules on AI ethics and fairness aren’t strong enough to ensure that benefits are shared globally. Organisations like UNESCO have issued guidance on handling data fairly and building inclusive AI, but no binding law ensures that computing power is fairly shared, that AI systems are trained on diverse data, or that underserved areas get the digital infrastructure they need.

Many trade agreements make it easier to send data across borders without considering the challenges faced by regions with less digital capacity. Geopolitical tensions, such as restrictions on exporting advanced AI chips, further limit access for countries seen as security risks. Because of these policy gaps, the advantages of AI remain concentrated in a few hands, leaving marginalised communities with little benefit.

Conclusion

The AI revolution need not be a zero-sum game in which only a handful of nations reap its rewards. By investing directly in local infrastructure, unlocking funding for edge-AI research, mandating data-diversity standards, and amplifying the voices of Global South innovators, we can reshape AI into a tool for shared prosperity. Only through a concerted, equity-focused effort, one that balances the ambitions of hyperscale labs with the real-world needs of underserved communities, can we make AI work for everyone.
