
Tech With Respect: AI And Indigenous Community Power
Two teenage Navajo sisters in traditional clothing enjoying the vast desert and red rock landscape in the famous Navajo Tribal Park in Monument Valley, Arizona, at dawn.
Artificial Intelligence is everywhere—from search engines and supply chains to climate forecasts and school curriculums. But amid the buzz about what AI can do for us, an equally important question often gets overlooked: what is it doing to us, particularly to those usually left out of mainstream design and decision-making processes, such as Indigenous communities?
The answer is complex. AI carries tremendous potential to support Indigenous self-determination, language preservation, and climate stewardship. But it also risks deepening long-standing patterns of erasure, exploitation, and exclusion — unless it is carefully aligned with the values, rights, and realities of Indigenous peoples.
One framework that helps unpack these dynamics is the ABCD of silent AI issues: agency decay, bond erosion, climate conundrum, and divided society. These issues aren't always visible in headlines or policy briefs — but they shape how AI shows up in everyday life. And for communities historically sidelined in technological revolutions, these risks can carry outsized consequences.
Who controls the narrative when technology speaks for us?
As AI systems become more embedded in daily decision-making, there's a growing concern that personal and collective autonomy may erode — especially for groups with little say in how those systems are trained or deployed. When AI is built on biased or incomplete data, it often defaults to dominant worldviews, misrepresenting or ignoring others altogether.
This isn't just a design flaw. It's a continuation of colonial patterns in a new digital form.
Yet some efforts flip the script. The Wasigen Kisawatsuin platform, for example, is being designed to recognize harmful or biased language about Indigenous peoples, flag it, and offer respectful alternatives. The tool serves as a digital ally to reduce emotional labor and as a mechanism to ensure Indigenous knowledge and experiences are not overwritten by default AI norms.
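The underlying pattern is simple to describe, even if the platform itself is far more sophisticated. The sketch below is a minimal, hypothetical Python illustration of the "flag and suggest" idea, assuming a community-curated lexicon of phrases and respectful alternatives; the lexicon entries, names, and logic are invented for this example and are not Wasigen Kisawatsuin's actual code.

```python
# Hypothetical "flag and suggest" sketch. NOT the Wasigen Kisawatsuin
# implementation; the phrase list and suggestions are illustrative
# placeholders that a community would curate in practice.
import re
from dataclasses import dataclass

@dataclass
class Flag:
    phrase: str       # the flagged wording found in the text
    suggestion: str   # a more respectful alternative to offer
    start: int        # character offset where the match begins

# Placeholder lexicon: regex pattern -> suggested alternative.
LEXICON = {
    r"\bcostumes?\b": 'consider "regalia" when referring to ceremonial dress',
    r"\bdiscovered\b": 'consider "encountered" or "colonized", depending on context',
}

def review(text: str) -> list[Flag]:
    """Return a list of flagged phrases with suggested alternatives."""
    found = []
    for pattern, suggestion in LEXICON.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            found.append(Flag(match.group(), suggestion, match.start()))
    return found

if __name__ == "__main__":
    for f in review("Columbus discovered the land; the dancers wore costumes."):
        print(f"flagged '{f.phrase}' at {f.start}: {f.suggestion}")
```

A production system would rely on context-aware language models and community review rather than a fixed word list, but the goal is the same: surface the problem and offer an alternative, so the burden of correction does not fall on Indigenous readers.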
Can AI protect culture, or will it strip it of meaning?
Cultural appropriation has found new fuel in generative AI. Without guardrails, these systems scrape, remix, and reproduce sacred imagery, ceremonial language, and ancestral designs — usually without consent or context. This commodification not only disrespects Indigenous cultures but also risks severing the very bonds that sustain them.
Some organizations are working to turn that around. Natives Rising supports digital upskilling and AI literacy so that Indigenous communities can use the tools and shape them. This includes exploring AI's role in emotional wellness and creating community-aligned content that strengthens identity and intergenerational connection rather than diluting it.
How can AI serve the planet without sacrificing the communities that protect it?
The environmental cost of AI is staggering. Data centers require immense electricity and water — resources often sourced from or near Indigenous lands. Ironically, the very populations stewarding biodiversity hotspots are those most at risk from the tech industry's growing footprint.
A better path is possible. The First Languages AI Reality Initiative uses AI to revitalize endangered Indigenous languages while advocating for carbon-neutral infrastructure. By powering language preservation tools with renewable energy, the initiative models how AI can be deployed in ways that honor both people and planet.
This kind of alignment isn't just ethical — it's strategic. Indigenous communities have centuries of ecological knowledge and a track record of protecting 80% of the world's remaining biodiversity. A climate-smart AI future must include — not displace — these contributions.
Will AI bridge or widen the digital divide?
Access to AI isn't just about software; it's also about power, infrastructure, and inclusion. Many Indigenous communities still lack stable internet or electricity, let alone the training and legal tools to engage with AI on equal footing. Meanwhile, large AI developers race ahead, sometimes using data sourced from these communities without consent.
The result? A lopsided tech economy where some benefit from AI and others are mined for it.
That's why platforms like Corral matter. It consolidates tribal consultation opportunities from U.S. federal agencies, allowing Indigenous leaders to engage with policy more efficiently. By automating time-consuming administrative work, Corral frees up capacity for governance, cultural preservation, and community programming — areas that too often get sidelined due to bandwidth constraints.
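Conceptually, the time savings come from automating a filter-and-sort chore that otherwise falls on staff. The Python sketch below is a hypothetical illustration of that consolidation step, assuming notices have already been pulled from agency feeds; the field names, keywords, and sample records are invented for this example and do not describe Corral's actual implementation.

```python
# Hypothetical consolidation step, in the spirit of what the article
# describes Corral doing. All data and field names are invented.
from datetime import date

notices = [  # in practice these would come from agency feeds
    {"agency": "BIA", "title": "Tribal consultation on data sharing", "deadline": date(2025, 7, 1)},
    {"agency": "EPA", "title": "Water quality standards update", "deadline": date(2025, 6, 15)},
    {"agency": "DOI", "title": "Consultation: sacred sites protection", "deadline": date(2025, 8, 20)},
]

KEYWORDS = ("tribal", "consultation", "sacred sites")

def upcoming_consultations(items, today):
    """Keep keyword-matching notices that are still open, soonest deadline first."""
    relevant = [
        n for n in items
        if n["deadline"] >= today and any(k in n["title"].lower() for k in KEYWORDS)
    ]
    return sorted(relevant, key=lambda n: n["deadline"])

for n in upcoming_consultations(notices, date(2025, 6, 1)):
    print(f'{n["deadline"]}  {n["agency"]}: {n["title"]}')
```

Even a simple pipeline like this, run across dozens of agencies, turns a scattered monitoring task into a single prioritized list a tribal government can act on.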
The promise of AI lies not in its novelty but in how it's directed. Prosocial AI — AI systems that are tailored, trained, tested, and targeted to bring out the best in and for people and the planet — is possible. It requires intent and inclusivity from design to deployment. Here's how:
Design with, not for
AI systems must be co-developed with Indigenous communities, drawing on their knowledge systems and lived realities. This ensures technologies are accurate and aligned with cultural values and legal rights.
Invest in ethical infrastructure
Renewable-powered data centers, governed by local communities, can mitigate environmental harm while creating jobs and strengthening digital sovereignty.
Strengthen data sovereignty
Community-owned data cooperatives and legal protections must be established to prevent extractive practices. Consent isn't just polite—it's essential; a minimal sketch of what a consent check could look like follows this list.
Bridge the skills-to-systems gap
Coding camps, fellowships, and open-access AI education should be scaled to ensure Indigenous youth and leaders are not only users but creators of AI.
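To make the consent point above concrete, here is a minimal, hypothetical sketch of a consent gate that a community-owned data cooperative could place in front of its datasets; the registry, dataset identifiers, and purpose labels are invented for illustration and do not represent any existing cooperative's system.

```python
# Illustrative consent gate for community-held data. The policy model
# (dataset -> approved purposes) is a simplification invented for this
# example, not a reference to any specific data cooperative's software.
CONSENT_REGISTRY = {
    # dataset id -> purposes the community has explicitly approved
    "language-recordings-2024": {"language-revitalization", "community-education"},
    "land-survey-2023": {"climate-monitoring"},
}

class ConsentError(PermissionError):
    """Raised when a requested use has no recorded community approval."""

def require_consent(dataset_id: str, purpose: str) -> None:
    """Refuse access unless the stated purpose was approved for this dataset."""
    approved = CONSENT_REGISTRY.get(dataset_id, set())
    if purpose not in approved:
        raise ConsentError(f"No consent on record for '{purpose}' use of {dataset_id}")

# Approved use passes silently:
require_consent("language-recordings-2024", "community-education")
# An unapproved use, such as commercial model training, would raise ConsentError:
# require_consent("language-recordings-2024", "commercial-model-training")
```

The point of such a gate is less technical than governance-related: it encodes the community's decisions in the access path itself, so extraction requires an explicit, recorded "yes" rather than the absence of a "no".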
AI doesn't have to repeat the extractive logic of past innovations. It can help restore language, uplift knowledge, and accelerate justice — but only if we approach it as a tool in service of community-defined goals.
The choice is ours. Let's not just ask what AI can do. To reconfigure it with a holistic mindset, let's ask who it serves, why, and at what cost. What is outlined here, with a focus on Indigenous communities, applies to other minorities as well. AI can be a force of social good that serves everyone. But to unlock that potential, we must design our expanding artificial treasure chest with awareness of the differences that distinguish us and attention to the needs that we have in common. We are all different, but we all share the aspiration for happiness and the desire to be heard and respected. Prosocial AI can serve that purpose.
This article is part of a broader series exploring AI's impact on equity, sustainability, and society, including changemakers from MIT Solve.
