
What will the AI revolution mean for the global south?
When we consider the exclusion of the global south from discussions about artificial intelligence (AI), I think about how that exclusion translates into an eventual loss of economic leverage and geopolitical engagement with a technology that has captivated academics in the industrialised country in which I reside, the United States.
As a scientist, I experienced an early rite of passage into the world of Silicon Valley, the land of techno-utopianism and the promise of AI as a net positive for all. But when I attended my first academic AI conference in 2019, I began to notice inconsistencies in the audience to whom that promise was directed. AI researchers can readily list the places where such conferences are hosted, and the places where they are not. NeurIPS, one of the top AI conferences, has faced recurring problems, year after year, with visas for academic attendees and citizens from the African continent. Attending so prestigious a conference grants one access to peers in the field, new collaborations and feedback on one's work.
I often hear the word 'democratisation' within the AI community, the implication being equity of access, opportunity and merit of contribution regardless of one's country of origin. The associate professor of economics Fadhel Kaboub observes that 'a lack of vision for oneself results in being a part of someone else's vision', reflecting on how a systematic lack of access to infrastructure produces local trade deficits in economies.
As with Nafta's promise of 'free trade', today's promises of 'AI democratisation' mainly benefit countries with access to tech hubs, which are not located in the global south. While the United States and other industrialised countries dominate access to computational power and research activity, much of the low-paid manual labour of labelling data remains in the global south, home to artificial intelligence's global underclass.
Much as coffee, cocoa, bauxite and sugar cane are produced in the global south, exported cheaply and sold at a premium in more industrialised countries, influence in AI has over the past few years become inextricably tied to energy consumption. Countries that can afford to consume more energy have more leverage to shape the future direction of AI and what is considered valuable within the AI academic community.
In 2019, Mary L Gray and Siddharth Suri published Ghost Work, which exposed the invisible labour behind today's technology, and at the beginning of my graduate studies the heavily cited paper Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence appeared. It has been five years since these seminal works. What would an AI community look like for the global south if it were inspired by the Brics organisation, which united major emerging economies to advocate for themselves in a system dominated by western countries?
I often ask myself how AI has contributed to our legacy, and whose stories it won't tell. Has AI mitigated issues of mistrust and corruption in less-resourced countries? Has it benefited our civic communities or narrowed educational gaps between less-resourced regions? How will it make society better, and whose society will it make better? Who will be included in that future?
Historical mistrust can impede adoption by developing countries. Furthermore, many developing countries have weak institutional infrastructure and poor or nonexistent laws and regulatory frameworks for data protection and cybersecurity. Therefore, even with an improved information infrastructure, they are likely to operate at a disadvantage in the global information marketplace.
A currency is only as good as the global trust it commands. When I think about the democratisation of AI and a vision of what it could be in years to come, I believe AI's survival requires including more perspectives from regions such as the global south. Countries of the global south should work together to build their own markets and a model of sovereignty over their data and data labour.
Economic models often define development to include a measure of improvement in the quality of life of a society's most marginalised people. It is my hope that in future the same standard will extend to our evaluation of AI.
Krystal Maughan is a PhD student at the University of Vermont studying differential privacy and machine learning
Related Articles


Daily Mail, 26 minutes ago
CEO of AI company gets bloodied pig's head in horror package as he's called a 'Clark Kent knockoff'
The CEO of an AI startup in Las Vegas received a package containing a severed pig's head and a threatening letter, believed to stem from his company's use of artificial intelligence. Blake Owens, founder and CEO of Agrippa, an AI-powered platform that connects commercial real estate investors and developers without traditional brokers, received the bloodied pig's head along with the menacing note on July 29.
The gruesome parcel was sent to a relative's home, and the message criticized Owens' use of AI - with personal insults that called him a 'Clark Kent knockoff' - and ended ominously with: 'And don't get greedy because pigs get fat and hogs get slaughtered.'
Owens told KLAS: 'Perhaps this person watched too much of The Godfather. Needless to say, I still take it very seriously, but don't feel like I'm being truly threatened. It was a message.'
The note was signed only with the initial 'M' and appeared to be motivated by a June TV segment that profiled Owens and Agrippa's AI tool, known as 'Marcus', which automates real estate transactions by matching developers with investors and evaluating property bids.
The sinister letter also said: 'AI is not going to replace brokers. Clearly you don't understand real estate wasn't built by developers or investors. And it sure as hell wasn't built by tech guys in Lululemon. It was built by brokers. We did it the hard way. No shortcuts, no tech, just people.'
Owens said he believed the sender was fearful of being displaced by automation. The businessman said: 'I understand this person is probably just frustrated that business isn't going well for them, and then they see AI replacement stories on top of that. And I just so happen to be someone they can focus their frustration on.'
A photo of the package showed the sender was labeled as 'Marcus Agrippa' - a reference to the company's AI system. Owens joked: 'Is this a message that you know your own AI is turning against you? I wasn't quite sure how to interpret it.'
Las Vegas police confirmed they were investigating the incident and had classified it as a harassment case. A suspect had yet to be identified. Owens said he did not feel 'genuinely threatened' and would not press charges should the sender eventually be identified.
He told KLAS: 'I don't want to punch down on this person; they may be in a tough spot in life. I do see this as an opportunity to show people you don't become a better person by making another man a lesser person.'
Owens also addressed the anxiety surrounding AI's growing presence in the workforce, particularly in fields such as real estate that have historically relied on personal relationships. He said: 'You know, people are scared. They feel displaced and when disruption moves faster than education, fear just fills the gap.'
Owens added that Agrippa was not designed to replace humans but to empower professionals through AI. He said: '[Winston Churchill] said to be perfect is to change often. I think a lot of people are afraid of change and what's coming with AI, because it really is a tsunami of change that people are trying to resist. But the more you embrace it, the better you'll do, the more skills you'll accumulate, the more value you'll bring to the table.'
Despite the threatening nature of the package, Owens remained committed to encouraging dialogue and told Inman: 'If I knew who this person was, I'd say, "Hey, feel free to reach out to me - maybe not with a package, just send me an email - I'm happy to share whatever education I can on keeping up with AI."' The investigation into the incident remained ongoing.


The Guardian, an hour ago
OpenAI will not disclose GPT-5's energy use. It could be higher than past models
In mid-2023, if a user asked OpenAI's ChatGPT for a recipe for artichoke pasta or instructions on how to make a ritual offering to the ancient Canaanite deity Moloch, its response might have taken – very roughly – 2 watt-hours, or about as much electricity as an incandescent bulb consumes in 2 minutes. OpenAI released a model on Thursday that will underpin the popular chatbot – GPT-5. Ask that version of the AI for an artichoke recipe, and the same amount of pasta-related text could take several times – even 20 times – that amount of energy, experts say.
As it rolled out GPT-5, the company highlighted the model's breakthrough capabilities: its ability to create websites, answer PhD-level science questions and reason through difficult problems. But experts who have spent the past years working to benchmark the energy and resource usage of AI models say those new powers come at a cost: a response from GPT-5 may take a significantly larger amount of energy than a response from previous versions of ChatGPT.
OpenAI, like most of its competitors, has released no official information on the power usage of its models since GPT-3, which came out in 2020. Sam Altman, its CEO, tossed out some numbers on ChatGPT's resource consumption on his blog this June. However, these figures – 0.34 watt-hours and 0.000085 gallons of water per query – do not refer to a specific model and have no supporting documentation.
'A more complex model like GPT-5 consumes more power both during training and during inference. It's also targeted at long thinking … I can safely say that it's going to consume a lot more power than GPT-4,' said Rakesh Kumar, a professor at the University of Illinois who works on the energy consumption of computation and AI models.
The day GPT-5 was released, researchers at the University of Rhode Island's AI lab found that the model can use up to 40 watt-hours of electricity to generate a medium-length response of about 1,000 tokens, which are the building blocks of text for an AI model and are approximately equivalent to words.
A dashboard they put up on Friday indicates GPT-5's average energy consumption for a medium-length response is just over 18 watt-hours, a figure higher than that of all the other models they benchmark except for OpenAI's o3 reasoning model, released in April, and R1, made by the Chinese AI firm Deepseek. This is 'significantly more energy than GPT-4o', the previous model from OpenAI, said Nidhal Jegham, a researcher in the group.
Eighteen watt-hours would correspond to burning that incandescent bulb for 18 minutes. Given recent reports that ChatGPT handles 2.5bn requests a day, the total consumption of GPT-5 could reach the daily electricity demand of 1.5m US homes.
As large as these numbers are, researchers in the field say they align with their broad expectations for GPT-5's energy consumption, given that GPT-5 is believed to be several times larger than OpenAI's previous models. OpenAI has not released the parameter counts – which determine a model's size – for any of its models since GPT-3, which had 175bn parameters. A disclosure this summer from the French AI company Mistral finds a 'strong correlation' between a model's size and its energy consumption, based on Mistral's study of its in-house systems.
'Based on the model size, the amount of resources [used by GPT-5] should be orders of magnitude higher than that for GPT-3,' said Shaolei Ren, a professor at the University of California, Riverside who studies the resource footprint of AI.
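The aggregate figure is easy to check. Here is a minimal back-of-envelope sketch in Python: the 18 watt-hours per response and 2.5bn daily requests come from the reporting above, while the roughly 30 kWh per day consumed by an average US home is an assumed round figure, not one given in the article.

```python
# Back-of-envelope check of the "1.5m US homes" estimate quoted above.
# Per-query energy and daily request volume are from the article;
# the average-household figure (~30 kWh/day) is an assumption.

wh_per_query = 18          # URI dashboard: average GPT-5 medium-length response
queries_per_day = 2.5e9    # reported daily ChatGPT requests
home_kwh_per_day = 30      # assumed average US household consumption

total_kwh = wh_per_query * queries_per_day / 1_000          # Wh -> kWh
print(f"{total_kwh / 1e6:.0f} GWh per day")                 # ~45 GWh/day
print(f"~{total_kwh / home_kwh_per_day / 1e6:.1f}m homes")  # ~1.5m US homes
```

At those assumed household numbers, the arithmetic does land on roughly 1.5m homes, consistent with the figure cited in the piece.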
GPT-4 was widely believed to be 10 times the size of GPT-3. Jegham, Kumar, Ren and others say that GPT-5 is likely to be significantly larger than GPT-4. Leading AI companies such as OpenAI believe that extremely large models may be necessary to achieve AGI – that is, an AI system capable of doing humans' jobs. Altman has argued strongly for this view, writing in February: 'It appears that you can spend arbitrary amounts of money and get continuous and predictable gains,' though he said GPT-5 did not surpass human intelligence.
In its benchmarking study in July, which looked at the power consumption, water usage and carbon emissions of Mistral's Le Chat bot, the startup found a one-to-one relationship between a model's size and its resource consumption, writing: 'A model 10 times bigger will generate impacts one order of magnitude larger than a smaller model for the same amount of generated tokens.'
Jegham, Kumar and Ren said that while GPT-5's scale is significant, other factors will probably come into play in determining its resource consumption. GPT-5 is deployed on more efficient hardware than some previous models. It also appears to use a 'mixture-of-experts' architecture, meaning that not all of its parameters are activated when it responds to a query – a construction that will likely cut its energy consumption.
On the other hand, GPT-5 is also a reasoning model, and works in video and images as well as text, which likely makes its energy footprint far greater than text-only operations, both Ren and Kumar say – especially as the reasoning mode means the model computes for longer before responding to a query. 'If you use the reasoning mode, the amount of resources you spend for getting the same answer will likely be several times higher, five to 10,' said Ren.
To calculate an AI model's resource consumption, the group at the University of Rhode Island multiplied the average time the model takes to respond to a query – be it for a pasta recipe or an offering to Moloch – by the model's average power draw during its operation.
Estimating a model's power draw was 'a lot of work', said Abdeltawab Hendawi, a professor of data science at the University of Rhode Island. The group struggled to find information on how different models are deployed within datacenters. Their final paper contains estimates of which chips are used for a given model, and how different queries are parceled out between chips in a datacenter. Altman's June blog post supported their findings: the figure he gave for ChatGPT's energy consumption per query, 0.34 watt-hours, closely matches what the group found for GPT-4o.
Hendawi, Jegham and others in their group said their findings underscored the need for more transparency from AI companies as they release ever-larger models. 'It's more critical than ever to address AI's true environmental cost,' said Marwan Abdelatti, a professor at URI. 'We call on OpenAI and other developers to use this moment to commit to full transparency by publicly disclosing GPT-5's environmental impact.'
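In outline, the group's estimation method comes down to multiplying two measured averages: how long a model takes to answer, and how much power the serving hardware draws while it does. A minimal sketch of that relationship, using illustrative numbers rather than the group's actual measurements:

```python
# Sketch of the estimation approach described above:
# energy per query ~= average response time x average power draw.
# The numbers below are illustrative, not URI's measurements.

def energy_per_query_wh(avg_response_s: float, avg_power_w: float) -> float:
    """Energy in watt-hours: power (W) times time (hours)."""
    return avg_power_w * (avg_response_s / 3600)

# Hypothetical example: a 24-second response on hardware drawing an
# average of 2.7 kW attributable to the query works out to 18 Wh.
print(energy_per_query_wh(24, 2700))  # -> 18.0
```

The hard part, as the researchers note, is not the multiplication but estimating the power-draw term, which depends on which chips serve a given model and how queries are spread across them.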


Telegraph, 2 hours ago
Labour develops AI to predict parliamentary rebellions
Labour is developing a computer model to predict future rebellions after Sir Keir Starmer was humbled by his own backbenchers over welfare reforms. The Department for Science, Innovation and Technology is funding an Artificial Intelligence (AI) programme to scour debate records for signs of MPs who will vote against the Government. The move comes amid concern that the Prime Minister is being pushed around by his party following a series of mutinies. Parlex is billed as software that can 'forecast parliamentary reaction' by analysing records of past debates in the Commons. It will allow civil servants to draw up dossiers for Cabinet ministers saying which MPs, including those in their own ranks, are likely to oppose specific policies. A project page on the government website said: 'By analysing years of parliamentary debate contributions from MPs and Peers, Parlex offers insights into how Parliament might react to a new policy if it were debated tomorrow. 'This tool helps policy professionals gauge parliamentary sentiment towards specific issues, determining whether a policy will be well-received or face significant opposition. 'This allows policy teams to understand the political climate and anticipate potential challenges or support for a policy before it is formally proposed and to build a parliamentary handling strategy.' Sir Keir faced the biggest crisis of his premiership in July when more than 120 Labour MPs threatened to revolt against changes to sickness benefits. The Prime Minister was eventually forced to abandon the reforms, which would have saved £5bn a year, after a significant blow to his authority. It was not the first time that he had been humbled by a backbench rebellion. In 2023, while he was leader of the opposition, he was defied by 56 of his MPs who broke the party whip to vote for a ceasefire in Gaza. This month, Sir Keir announced he planned to recognise a Palestinian state after again coming under heavy pressure from backbenchers and the Cabinet. With many Labour MPs sitting on wafer-thin majorities and fearing defeat at the next election, there are expectations that party discipline could break down further. The science ministry announced that it was developing Parlex earlier this year as part of a new suite of AI tools known as Humphrey. It has been named after Sir Humphrey Appleby, the permanent secretary at the 'Department of Administrative Affairs' in the 1980s TV satire Yes, Minister. Ministers said that the system was 'still in early-stage user testing' but had already cut the amount of time it took officials to research an MP.