LatAmGPT aims to create AI that better represents the region's diversity

NBC News · March 26, 2025
Latin America has been the cradle of now globally popular literary and musical genres, staple foods like the potato and the inspiration behind the well-known Happy Meal. It could also become the cradle of a new form of AI.
A coalition of research institutions is working on what they call LatAmGPT — a tool that takes into account the region's linguistic variations, cultural experiences and 'idiosyncrasies.'
The aim is to offer users a more faithful representation of Latin America and the Caribbean than that offered by large language models (LLMs), which have mostly come from U.S. or Chinese companies and were largely trained in English.
'We want to develop our capabilities, find local AI-based solutions and create a better understanding of these tools in Latin America and about Latin America,' said Rodrigo Durán Rojas, director of Chile's National Center for Artificial Intelligence, which is coordinating the effort.
Durán Rojas said that for general purposes, the project will be hard-pressed to compete with 'state of the art models with multimillion budgets,' but that 'what our model can offer that others don't is a much richer and representative outlook of Latin America and the Caribbean,' its people and its output.
For example, Durán Rojas said initial testing has shown LatAmGPT to produce far better results when queried about South American history, and the same is expected when the LLM is asked to, say, write a poem in the style of local authors or provide an overview of regional education policy.
There are more than 30 institutions involved in developing LatAmGPT from countries across the hemisphere, and collaborators include Latinos in the U.S. such as Freddy Vilches Meneses, an associate professor of Hispanic studies at Lewis & Clark College in Oregon. This, he said, is in recognition of how 'Latino and Latin American experiences are a cultural fellowship that goes beyond geography.'
'There are elements of Latin America in Oregon, in California, in Texas,' Vilches Meneses said. 'We want to make sure to incorporate that Latino experience as well.'
LatAmGPT, which aims to launch its first publicly available version around June, was announced last month on the heels of a regional commitment made during a summit on artificial intelligence in Uruguay to focus on 'ethical, inclusive and beneficial' technological development to 'promote and protect human rights' and explore the best possible public policies for AI governance.
That impulse follows an increasing uptake in the region of technological advances such as the use of drones to monitor deforestation in the Amazon rainforest, the development of apps to encourage more people to continue learning Indigenous languages, the creation of algorithms to aid in the search for forcibly disappeared people, and the adoption of blockchain mechanisms to preserve historical documents of past dictatorships' actions.
Some of those preserved documents are now being used as sources to train LatAmGPT, along with papers, records and logs that institutions such as libraries and national archives have made available specifically for the project. Durán Rojas said this gives the model more nuance and localized breadth than the general internet data scraping other systems tend to use.
'LatAmGPT will have more context than the other language models and should therefore hallucinate far less' in its use cases, Durán Rojas said. Hallucination is the term AI researchers use for when a model generates an incorrect or fabricated answer but presents it as factual.
So far the project's dataset contains more than 8 terabytes of information, enough to train a model with about 55 billion parameters (the variables an LLM adjusts during training to make predictions, loosely analogous to the connections between neurons in a human brain). Durán Rojas said that's roughly comparable to the first public version of ChatGPT, which OpenAI launched in the fall of 2022.
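To give a rough sense of what a parameter count of that size means in practice, the back-of-the-envelope arithmetic below uses only the figure reported in the article (about 55 billion parameters); the storage formats shown are common industry choices, not details confirmed by the LatAmGPT project.

```python
# Illustrative sizing for a model of LatAmGPT's reported scale.
# The only project-specific figure here is the ~55 billion parameter count;
# the byte-per-parameter formats are generic assumptions.

PARAMS = 55e9  # reported parameter count

# Each parameter is one learned number. Stored as 16-bit floats
# (2 bytes each), a common inference format, the weights alone occupy:
weights_gb = PARAMS * 2 / 1e9
print(f"~{weights_gb:.0f} GB of weights at fp16")  # ~110 GB

# Quantizing to 8-bit integers (1 byte per parameter) halves that:
int8_gb = PARAMS * 1 / 1e9
print(f"~{int8_gb:.0f} GB at int8")  # ~55 GB
```

Numbers like these are why models of this scale are served from data-center GPUs rather than consumer hardware.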
The challenges of diverse dialects and complex grammar
ChatGPT and other models like Google's Gemini have also sought in recent years to include a wider scope of data to offer the programs in languages other than English and with 'localizations' — such as the LLM knowing to respond in the metric system when relevant or to understand idioms.
Those companies acknowledge the importance of expanding that offering. HyunJeong Choe, the director of engineering and internationalization for Google's Gemini Apps, said it's 'a dedicated experience' that can be 'essential for cultural relevancy and sensitivity.'
But they also recognize it's a particularly complex endeavor, since most training data available to them is in English. 'The intricacies of different languages can pose a significant obstacle for all AI models. ... Languages with complex grammar, diverse dialects or limited digital resources may be harder to train,' Choe said.
LatAmGPT, through its institutional networks with libraries and archives, has somewhat skirted this issue — but not entirely. Durán Rojas said they're still struggling to incorporate Indigenous languages spoken by millions in the region because written documentation is not as widely available.
But they still aim to do so as they continue refining the model, and they stress the importance of collaboration.
'The quality and attributes of the results we can get will depend on us as Latin Americans joining in to contribute as much as we can,' said Vilches Meneses, the Lewis & Clark professor.
Ahead of the tentative June launch, LatAmGPT is still receiving data as collaborators regularly check in with specific questions to benchmark it against other available models.
Among the questions they're testing are queries on the many different names and terms used in the region for a specific word like 'car,' or a request for the model to make a comparison chart of how the region's countries have responded to mass immigration from places like Venezuela.
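The side-by-side checks the collaborators describe could be organized along the lines of the minimal harness sketched below. The two prompts are paraphrased from the article; `ask_model` is a hypothetical stand-in for whatever API each model actually exposes, since the project's tooling is not public.

```python
# Minimal sketch of a side-by-side benchmark: pose region-specific prompts
# to each model and collect the answers for manual comparison.
# `ask_model` is a hypothetical placeholder, not a real API.

BENCHMARK_PROMPTS = [
    "List the words used for 'car' across Latin American countries.",
    "Make a comparison chart of how the region's countries have "
    "responded to mass migration from Venezuela.",
]

def ask_model(model_name: str, prompt: str) -> str:
    # A real harness would call each model's API here.
    return f"[{model_name}] answer to: {prompt}"

def run_benchmark(models: list[str]) -> dict:
    """Collect every model's answer to every prompt for human review."""
    return {
        prompt: {m: ask_model(m, prompt) for m in models}
        for prompt in BENCHMARK_PROMPTS
    }

results = run_benchmark(["LatAmGPT", "baseline-llm"])
```

Keeping the answers grouped per prompt makes it easy for a reviewer to judge, query by query, which model gives the more regionally accurate response.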
A major goal of the LatAmGPT project is for the region to become familiar with these technologies so they can be incorporated into public policies and regulations, according to Durán Rojas.
For that, the transcontinental network created to develop the project is key, and, per Durán Rojas, it will likely remain so.
'The most meaningful aspect, the greatest legacy, is this interconnectedness we've found to strengthen and develop AI-based solutions,' he said. 'The model, I mean it's great that we're making it, but the collaboration — that's what will most impact how we build things going forward.'
And with that there is a growing opportunity to offer further contributions with a Latino touch.
'At its base, this is jointly creating something from Latin America for Latin America and for the world, as proof to ourselves and to others that we can also produce high tech,' Vilches Meneses said, 'and that we can contribute to knowledge of artificial intelligence while still employing our social and cultural intelligence.'
