
Interview: Wikipedia weathers AI, political storms
Far from wrecking Wikipedia, the rise of AI has so far just strengthened it, Wikipedia's outgoing leader Maryana Iskander tells Axios in an exclusive interview.
The big picture: Once seen as a possible casualty of the generative AI boom, and more recently a target of the MAGA right, Wikipedia has emerged as an enduring model for how to navigate the latest shifts in politics and technology.
Iskander points to the organization's values as keys to its enduring success — things like requiring sources, maintaining a neutral point of view and debating transparently.
"Everybody keeps predicting it's all gonna end one day, and the opposite keeps being true," Iskander said. "It keeps getting stronger."
While other sites and services are struggling to hold onto traffic as usage of ChatGPT and other AI tools grows, Iskander says Wikipedia's page views and usage have not yet shown signs of decline: "We've just become more and more relevant and more and more important."
Driving the news: Iskander announced last week she will leave her post as CEO of the Wikimedia Foundation, which funds and oversees Wikipedia.
Under her tenure, the organization has broadened its donor base, expanded its footprint of data centers and built a business model that seeks to keep the entirety of the site free.
"I do not see us moving away from core principles like free access to knowledge for everybody," Iskander said. "It's about being smart about who needs to access what in what kinds of ways."
While individuals, nonprofits and others can access Wikipedia without charge, the organization encourages tech companies that make massive use of its entire corpus to pay their fair share.
Rather than trying to threaten tech companies, Iskander has sought to convince them that they need to support Wikipedia if they want it as a resource, while also providing them improved access.
"It has taken some creativity to make sure that the large players also are coming to the table," she said.
Between the lines: Iskander also sees lessons in Wikipedia's approach for AI companies as they seek to mitigate bias, reduce errors and ensure a healthy information ecosystem.
"We've tried to talk about why making the models more open is the right thing to do because we do it," Iskander said. "We've tried to talk about how to keep humans in the loop because we do it. We've tried to talk about why caring about provenance and attribution and who creates is important."
Zoom in: Wikipedia faces growing attacks in the U.S. from those who don't like the information it surfaces.
While that's disturbing for what it signals about the direction of the country, Iskander says Wikipedia has decades of experience standing up to governments.
"What's happening in the U.S. feels big because it's the U.S.," she said. "But Wikipedia has been dealing with these issues in an endless number of countries — India, Russia, Pakistan, Turkey — and so I think that's made us better prepared."
Iskander has a suggestion for regulators weighing changes to internet law, such as amending or limiting Section 230 protections: They should employ what Wikipedia founder Jimmy Wales has called the "Wikipedia Test" to make sure proposed changes actually protect the flow of information in the public interest.
That means asking whether a particular law or rule is good or bad for Wikipedia. Iskander says that's "just a way of thinking through what are the consequences and the impacts" on many similar outfits.
Well-meaning but poorly thought-out changes, she said, could threaten open-source and crowdsourced information sources.
Related Articles
Yahoo
People Are Asking ChatGPT for Relationship Advice and It's Ending in Disaster
Despite ChatGPT's well-documented issues, people are using it to advise them on relationship issues — and it's going about as well as you'd expect.

In a new editorial, Vice advice columnist Sammi Caramela said she had been blissfully unaware of the ChatGPT-as-therapist trend until someone wrote into her work email about it earlier this year. Back in February, an unnamed man told the writer that his girlfriend refused to stop using the chatbot for dating advice and would even bring up things it had told her in arguments. Though Caramela was so shocked that she "nearly choked" on her coffee, the advice-seeker wasn't all that perturbed — and claimed that he found his girlfriend's ChatGPT use fascinating.

"I was a bit floored by this confession. I had no idea people were actually turning to AI for advice, much less input on their relationships," the columnist wrote in her more recent piece. "However, the more I explored the topic, the more I realized how common it was to seek help from AI — especially in an era where therapy is an expensive luxury."

Intrigued, Caramela found a friend who used the OpenAI chatbot for similar purposes, running relationship issues by it as a "non-biased" sounding board. Eventually, that person realized that ChatGPT wasn't unbiased at all, but rather "seemed to heavily validate her experience, perhaps dangerously so."

Similar questions have been posed on the r/ChatGPT subreddit, and as Caramela explained, the consensus over there suggested not only that the chatbot is something of a "yes-man," but also that its propensity to agree with users can be dangerous for people who have mental health issues.

"I often and openly write about my struggles with obsessive-compulsive disorder (OCD)," the writer divulged. "If I went to ChatGPT for dating advice and failed to mention how my OCD tends to attack my relationships, I might receive unhelpful, even harmful, input about my relationship."
Digging deeper into the world of ChatGPT therapy, Caramela found multiple threads on OCD-related subreddits about the chatbot — and on the forum dedicated to ROCD, or relationship-focused OCD, someone even admitted that the chatbot told them to break up with their partner.

"Programs like ChatGPT only speed the OCD cycle up because you can ask question after question for hours trying to gain some sense of certainty," another user responded in the r/ROCD thread. "There's always another 'what if' question with OCD."

Like so many poorly trained human professionals, chatbots aren't equipped to handle the nuance and sensitivity needed in any therapeutic context. Regardless of what OpenAI claims in its marketing, ChatGPT can't be truly empathetic — and if your "therapist" will never be able to have a human-to-human connection, why would you want it to give you dating advice in the first place?

More on chatbot blues: Hanky Panky With Naughty AI Still Counts as Cheating, Therapist Says

Business Insider
Sam Altman says the energy needed for an average ChatGPT query can power a lightbulb for a few minutes
Altman was writing about the impact that AI tools will have on the future in a blog post on Tuesday when he referenced the energy and resources consumed by OpenAI's chatbot, ChatGPT.

"People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes," Altman wrote. "It also uses about 0.000085 gallons of water; roughly one-fifteenth of a teaspoon," he continued.

Altman wrote that he expects energy to "become wildly abundant" in the 2030s. Energy and the limitations of human intelligence have been "fundamental limiters on human progress for a long time," Altman added. "As data center production gets automated, the cost of intelligence should eventually converge to near the cost of electricity," he wrote.

OpenAI did not respond to a request for comment from Business Insider.

This is not the first time Altman has predicted that AI will become cheaper to use. In February, Altman wrote on his blog that the cost of using AI will drop by 10 times every year. "You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period," Altman wrote. "Moore's law changed the world at 2x every 18 months; this is unbelievably stronger," he added.

Tech companies hoping to dominate in AI have been considering using nuclear energy to power their data centers. In September, Microsoft signed a 20-year deal with Constellation Energy to reactivate one of the dormant nuclear plants at Three Mile Island. In October, Google said it had struck a deal with Kairos Power, a nuclear energy company, to build three small modular nuclear reactors. The reactors, which will provide up to 500 megawatts of electricity, are set to be ready by 2035.
Google's CEO, Sundar Pichai, said in an interview with Nikkei Asia published in October that the search giant wants to achieve net-zero emissions across its operations by 2030. He added that besides nuclear energy, Google was also considering solar. "It was a very ambitious target, and we are still going to be working very ambitiously towards it. Obviously, the trajectory of AI investments has added to the scale of the task needed," Pichai said.
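Altman's comparisons are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses his 0.34 watt-hour figure, but the appliance wattages are round-number assumptions for illustration, not values from his post:

```python
# Rough sanity check of the energy figures quoted above.
# The appliance wattages are illustrative assumptions, not from Altman's post.
QUERY_WH = 0.34        # watt-hours per average ChatGPT query, per Altman
LED_BULB_W = 10        # assumed draw of a high-efficiency LED bulb
OVEN_W = 1200          # assumed draw of an oven element

bulb_minutes = QUERY_WH / LED_BULB_W * 60    # hours of bulb time -> minutes
oven_seconds = QUERY_WH / OVEN_W * 3600      # hours of oven time -> seconds
print(f"bulb: {bulb_minutes:.1f} min, oven: {oven_seconds:.2f} s")
# With these wattages: about 2 minutes of bulb light, just over 1 second of oven use.

# Altman's claimed cost curve: 10x cheaper per year, vs. Moore's law at 2x per 18 months.
years = 1.5                          # roughly GPT-4 (early 2023) to GPT-4o (mid-2024)
tenfold_per_year = 10 ** years       # ~32x cheaper over that window
moores_law = 2 ** (years / 1.5)      # 2x cheaper over that window
print(f"10x/year over {years} years: {tenfold_per_year:.0f}x; Moore's law: {moores_law:.0f}x")
```

Run with those assumptions, the numbers line up with Altman's framing — and the 150x token-price drop he cites for that window actually outpaces even his own 10x-per-year rule of thumb.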
