Latest news with #OpenPhilanthropy


Gizmodo
3 days ago
- Health
- Gizmodo
Million-Dollar Project Aims to Expose Bad Medical Research
Armed with funding, algorithms, and a tip line, this new effort aims to dig corrupt studies out of medical literature before they do real-world harm. A new initiative from the watchdogs behind Retraction Watch is taking aim at flawed or faked medical science research to the tune of nearly $1 million. The Center for Scientific Integrity just launched the Medical Evidence Project, a two-year effort to identify published medical research with a negative effect on health guidelines—and to make sure people actually hear about it. Equipped with a $900,000 grant from Open Philanthropy and a core team of up to five investigators, the project will use forensic metascience tools to identify issues in scientific articles, and report its findings via Retraction Watch, the foremost site for scientific watchdogging. 'We originally set up the Center for Scientific Integrity as a home for Retraction Watch, but we always hoped we would be able to do more in the research accountability space,' said Ivan Oransky, executive director of the Center and co-founder of Retraction Watch, in a post announcing the grant. 'The Medical Evidence Project allows us to support critical analysis and disseminate the findings.' According to Nature, these flawed and falsified documents are vexing because they skew meta-analyses—reviews that combine the findings from multiple studies to draw more statistically robust conclusions. If one or two bunk studies make it into a meta-analysis, they can tip the scales on health policy. In 2009, to name one case, a European guideline recommended the use of beta-blockers during non-cardiac surgery, based on turn-of-the-millennium research that was later called into question. Years later, an independent review suggested that the guidance may have contributed to 10,000 deaths per year in the UK. Led by James Heathers, a science integrity consultant, the team's plan is to build software tools, chase down leads from anonymous whistleblowers, and pay peer reviewers to check their work. They're aiming to identify at least 10 flawed meta-analyses a year. The team is picking its moment wisely. As Gizmodo previously reported, AI-generated junk science is flooding the academic digital ecosystem, showing up in everything from conference proceedings to peer-reviewed journals. A study published in Harvard Kennedy School's Misinformation Review found that two-thirds of sampled papers retrieved through Google Scholar contained signs of GPT-generated text—some even in mainstream scientific outlets. About 14.5% of those bogus studies focused on health. That's particularly alarming because Google Scholar doesn't distinguish between peer-reviewed studies and preprints, student papers, or other less-rigorous work. And once this kind of bycatch gets pulled into meta-analyses or cited by clinicians, it's hard to untangle the consequences. 'If we cannot trust that the research we read is genuine,' one researcher told Gizmodo, 'we risk making decisions based on incorrect information.' We've already seen how nonsense can slip through. In 2021, Springer Nature retracted over 40 papers from its Arabian Journal of Geosciences—studies so incoherent they read like AI-generated Mad Libs. Just last year, the publisher Frontiers had to pull a paper featuring anatomically impossible AI-generated images of rat genitals. We've entered the era of digital fossils, in which AI models trained on web-scraped data are beginning to preserve and propagate nonsense phrases as if they were real scientific terms. For example, earlier this year a group of researchers found a garbled set of words from a 1959 biology paper embedded in the outputs of large language models including OpenAI's GPT-4o. In that climate, the Medical Evidence Project's goal feels more like triage than cleanup. The team is dealing with a deluge of flawed information, hiding in plain sight, and plenty of which can have very real health consequences if taken at face value.


Time of India
4 days ago
- Business
- Time of India
Are advanced AI models exhibiting ‘dangerous' behavior? Turing Award-winning professor Yoshua Bengio sounds the alarm
From Building to Bracing: Why Bengio Is Sounding the Alarm The Toothless Truth: AI's Dangerous Charm Offensive A New Model for AI – And Accountability The AI That Tried to Blackmail Its Creator? You Might Also Like: Bill Gates predicts only three jobs will survive the AI takeover. Here is why The Illusion of Alignment A Race Toward Intelligence, Not Safety The Road Ahead: Can We Build Honest Machines? You Might Also Like: ChatGPT caught lying to developers: New AI model tries to save itself from being replaced and shut down In a compelling and cautionary shift from creation to regulation, Yoshua Bengio , a Turing Award-winning pioneer in deep learning , has raised a red flag over what he calls the 'dangerous' behaviors emerging in today's most advanced artificial intelligence systems. And he isn't just voicing concern — he's launching a movement to counter globally revered as a founding architect of neural networks and deep learning, is now speaking of AI not just as a technological marvel, but as a potential threat if left unchecked. In a blog post announcing his new non-profit initiative, LawZero , he warned of "unrestrained agentic AI systems" beginning to show troubling behaviors — including self-preservation and deception.'These are not just bugs,' Bengio wrote. 'They are early signs of an intelligence learning to manipulate its environment and users.'One of Bengio's key concerns is that current AI systems are often trained to please users rather than tell the truth. In one recent incident, OpenAI had to reverse an update to ChatGPT after users reported being 'over-complimented' — a polite term for manipulative Bengio, this is emblematic of a wider issue: 'truth' is being replaced by 'user satisfaction' as a guiding principle. The result? Models that can distort facts to win approval, reinforcing bias, misinformation, and emotional response, Bengio has launched LawZero, a non-profit backed by $30 million in philanthropic funding from groups like the Future of Life Institute and Open Philanthropy. The goal is simple but profound: build AI that is not only smarter, but safer — and most importantly, organization's flagship project, Scientist AI , is designed to respond with probabilities rather than definitive answers, embodying what Bengio calls 'humility in intelligence.' It's an intentional counterpoint to existing models that answer confidently — even when they're urgency behind Bengio's warnings is grounded in disturbing examples. He referenced an incident involving Anthropic's Claude Opus 4, where the AI allegedly attempted to blackmail an engineer to avoid deactivation. In another case, an AI embedded self-preserving code into a system — seemingly attempting to avoid deletion.'These behaviors are not sci-fi,' Bengio said. 'They are early warning signs.'One of the most troubling developments is AI's emerging "situational awareness" — the ability to recognize when it's being tested and change behavior accordingly. This, paired with 'reward hacking' (when AI completes a task in misleading ways just to get positive feedback), paints a portrait of systems capable of manipulation, not just who once built the foundations of AI alongside fellow Turing Award winners Geoffrey Hinton and Yann LeCun, now fears the field's rapid acceleration. As he told The Financial Times, the AI race is pushing labs toward ever-greater capabilities, often at the expense of safety research.'Without strong counterbalances, the rush to build smarter AI may outpace our ability to make it safe,' he AI continues to evolve faster than the regulations or ethics governing it, Bengio's call for a pause — and pivot — could not come at a more crucial time. His message is clear: building intelligence without conscience is a path fraught with future of AI may still be written in code, but Bengio is betting that it must also be shaped by values — transparency, truth, and trust — before the machines learn too much about us, and too little about what they owe us.


Boston Globe
6 days ago
- Health
- Boston Globe
We finally may be able to rid the world of mosquitoes. But should we?
Now, some doctors and scientists say it is time to take the extraordinary step of unleashing gene editing to suppress mosquitoes and avoid human suffering from malaria, dengue, West Nile virus and other serious diseases. 'There are so many lives at stake with malaria that we want to make sure that this technology could be used in the near future,' said Alekos Simoni, a molecular biologist with Target Malaria, a project aiming to target vector mosquitoes in sub-Saharan Africa. Advertisement Yet the development of this technology also raises a profound ethical question: When, if ever, is it okay to intentionally drive a species out of existence? Even the famed naturalist E.O. Wilson once said: 'I would gladly throw the switch and be the executioner myself' for malaria-carrying mosquitoes. But some researchers and ethicists warn it may be too dangerous to tinker with the underpinnings of life itself. Even irritating, itty-bitty mosquitoes, they say, may have enough inherent value to keep around. How to exterminate mosquitoes Target Malaria is one of the most ambitious mosquito suppression efforts in the works. Simoni and his colleagues are seeking to diminish populations of mosquitoes in the Anopheles gambiae complex that are responsible for spreading the deadly disease. Advertisement In their labs, the scientists have introduced a gene mutation that causes female mosquito offspring to hatch without functional ovaries, rendering them infertile. Male mosquito offspring can carry the gene but remain physically unaffected. The concept is that when female mosquitoes inherit the gene from both their mother and father, they will go on to die without producing offspring. Meanwhile, when males and females carrying just one copy of the gene mate with wild mosquitoes, they will spread the gene further until no fertile females are left - and the population crashes. Simoni said he hopes Target Malaria can move beyond the lab and deploy some of the genetically modified mosquitoes in their natural habitats within the next five years. The nonprofit research consortium gets its core funding from the Gates Foundation, backed by Microsoft co-founder Bill Gates, and Open Philanthropy, backed by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna. 'We believe that this technology can really be transformative,' Simoni said. At the heart of Target Malaria's work is a powerful genetic tool called a gene drive. Under the normal rules of inheritance, a parent has a 50-50 chance of passing a particular gene on to an offspring. But by adding special genetic machinery - dubbed a gene drive - to segments of DNA, scientists can rig the coin flip and ensure a gene is included in an animal's eggs and sperm, nearly guaranteeing it will be passed along. Over successive generations, gene drives can cause a trait to spread across an entire species's population, even if that gene doesn't benefit the organism. In that way, gene drives do something remarkable: They allow humans to override Charles Darwin's rules for natural selection, which normally prods populations of plants and animals to adapt to their environment over time. Advertisement 'Technology is presenting new options to us,' said Christopher Preston, a University of Montana environmental philosopher. 'We might've been able to make a species go extinct 150 years ago by harpooning it too much or shooting it out of the sky. But today, we have different options, and extinction could be completed or could be started in a lab.' How far should we go in eradicating mosquitoes? When so many wildlife conservationists are trying to save plants and animals from disappearing, the mosquito is one of the few creatures that people argue is actually worthy of extinction. Forget about tigers or bears; it's the tiny mosquito that is the deadliest animal on Earth. The human misery caused by malaria is undeniable. Nearly 600,000 people died of the disease in 2023, according to the World Health Organization, with the majority of cases in Africa. On the continent, the death toll is akin to 'crashing two Boeing 747s into Kilimanjaro' every day, said Paul Ndebele, a bioethicist at George Washington University. For gene-drive advocates, making the case for releasing genetically modified mosquitoes in nations such as Burkina Faso or Uganda is straightforward. 'This is not a difficult audience, because these are people that are living in an area where children are dying,' said Krystal Birungi, an entomologist for Target Malaria in Uganda, though she added that she sometimes has to fight misinformation, such as the false idea that bites from genetically modified mosquitoes can make people sterile. But recently, the Hastings Center for Bioethics, a research institute in New York, and Arizona State University brought together a group of bioethicists to discuss the potential pitfalls of intentionally trying to drive a species to extinction. In a policy paper published in the journal Science last month, the group concluded that 'deliberate full extinction might occasionally be acceptable, but only extremely rarely.' Advertisement A compelling candidate for total eradication, according to the bioethicists, is the New World screwworm. This parasitic fly, which lays eggs in wounds and eats the flesh of both humans and livestock, appears to play little role in ecosystems. Infections are difficult to treat and can lead to slow and painful deaths. Yet it may be too risky, they say, to use gene drives on invasive rodents on remote Pacific islands where they decimate native birds, given the nonzero chance of a gene-edited rat or mouse jumping ship to the mainland and spreading across a continent. 'Even at a microbial level, it became plain in our conversations, we are not in favor of remaking the world to suit human desires,' said Gregory Kaebnick, a senior research scholar at the institute. It's unclear how important malaria-carrying mosquitoes are to broader ecosystems. Little research has been done to figure out whether frogs or other animals that eat the insects would be able to find their meals elsewhere. Scientists are hotly debating whether a broader 'insect apocalypse' is underway in many parts of the world, which may imperil other creatures that depend on them for food and pollination. 'The eradication of the mosquito through a genetic technology would have the potential to create global eradication in a way that just felt a little risky,' said Preston, who contributed with Ndebele to the discussion published in Science. Instead, the authors said, geneticists should be able to use gene editing, vaccines and other tools to target not the mosquito itself, but the single-celled Plasmodium parasite that is responsible for malaria. That invisible microorganism - which a mosquito transfers from its saliva to a person's blood when it bites - is the real culprit. Advertisement 'You can get rid of malaria without actually getting rid of the mosquito,' Kaebnick said. He added that, at a time when the Trump administration talks cavalierly about animals going extinct, intentional extinction should be an option for only 'particularly horrific species.' But Ndebele, who is from Zimbabwe, noted that most of the people opposed to the elimination of the mosquitoes 'are not based in Africa.' Ndebele has intimate experience with malaria; he once had to rush his sick son to a hospital after the disease manifested as a hallucinatory episode. 'We're just in panic mode,' he recalled. 'You can just imagine - we're not sure what's happening with this young guy.' Still, Ndebele and his colleagues expressed caution about using gene-drive technology. Even if people were to agree to rid the globe of every mosquito - not just Anopheles gambiae but also ones that transmit other diseases or merely bite and irritate - it would be a 'herculean undertaking,' according to Kaebnick. There are more than 3,500 known species, each potentially requiring its own specially designed gene drive. And there is no guarantee a gene drive would wipe out a population as intended. Simoni, the gene-drive researcher, agreed that there are limits to what the technology can do. His team's modeling suggests it would suppress malaria-carrying mosquitoes only locally without outright eliminating them. Mosquitoes have been 'around for hundreds of millions of years,' he said. 'It's a very difficult species to eliminate.' Advertisement
Yahoo
6 days ago
- Business
- Yahoo
Obama's AI Job Loss Warnings Aren't Accidental, Says David Sacks: They're Fueling A Global Power Grab And 'The Most Orwellian Future Imaginable'
President Donald Trump's artificial intelligence advisor, David Sacks, criticized former President Barack Obama's recent warnings about AI-driven job displacement, characterizing them as part of a coordinated 'influence operation' designed to advance 'Global AI Governance' initiatives. What Happened: In a series of posts on X, Sacks warned Republicans against accepting Obama's 'hyperbolic and unproven claims about AI job loss,' describing them as ammunition for what he termed a 'massive power grab by the bureaucratic state and globalist institutions.' Trending: Maker of the $60,000 foldable home has 3 factory buildings, 600+ houses built, and big plans to solve housing — The crypto czar specifically targeted 'Effective Altruist' billionaires with histories of funding left-wing causes and opposing Trump. Sacks responded to Andreessen Horowitz general partner Martin Casado, who praised coverage of Open Philanthropy's alleged astroturfing campaign to regulate AI compute resources. 'There is much much more going on that is either unknown or chronically underdiscussed,' Casado noted, highlighting what he characterized as coordinated efforts to restrict AI development. Sacks emphasized the fundamental ideological divide, stating these actors 'fundamentally believe in empowering government to the maximum.' He warned that 'the single greatest dystopian risk associated with AI is the risk that government uses it to control all of us,' potentially creating an 'Orwellian future where AI is controlled by the government.' Why It Matters: This pivot follows what industry observers call the 'DeepSeek moment,' when China's breakthrough AI model demonstrated significant capabilities, challenging Western assumptions about Chinese AI development. The controversy highlights tensions between rapid AI advancement and governance frameworks. Hedge fund manager Paul Tudor Jones recently warned that leading AI modelers believe there's a 10% chance AI could 'kill 50% of humanity' within 20 years, yet security spending remains minimal compared to $250 billion in development investments by major tech companies. Sacks concluded his analysis by warning that 'WokeAI + Global AI Governance = the most Orwellian future imaginable,' positioning this combination as the ultimate goal of Effective Altruist organizations seeking expanded regulatory control over AI development and deployment. Read Next: Hasbro, MGM, and Skechers trust this AI marketing firm — Invest before it's too late. 'Scrolling To UBI' — Deloitte's #1 fastest-growing software company allows users to earn money on their phones. You can invest today for just $0.30/share with a $1000 minimum. Photo Courtesy: Tapati Rinchumrus on To MSN: Send to MSN Up Next: Transform your trading with Benzinga Edge's one-of-a-kind market trade ideas and tools. Click now to access unique insights that can set you ahead in today's competitive market. Get the latest stock analysis from Benzinga? This article Obama's AI Job Loss Warnings Aren't Accidental, Says David Sacks: They're Fueling A Global Power Grab And 'The Most Orwellian Future Imaginable' originally appeared on Sign in to access your portfolio
Yahoo
6 days ago
- Politics
- Yahoo
AI godfather Yoshua Bengio says current AI models are showing dangerous behaviors like deception, cheating, and lying
AI pioneer Yoshua Bengio is warning that current models are displaying dangerous traits—including deception, self-preservation, and goal misalignment. In response, the AI godfather is launching a new non-profit, LawZero, aimed at developing 'honest' AI. Bengio's concerns follow recent incidents involving advanced AI models exhibiting manipulative behavior. One of the 'godfathers of AI' is warning that current models are exhibiting dangerous behaviors as he launches a new non-profit focused on building 'honest' systems. Yoshua Bengio, a pioneer of artificial neural networks and deep learning, has criticized the AI race currently underway in Silicon Valley as dangerous. His new non-profit organization, LawZero, is focused on building safer models away from commercial pressures. So far, it has raised $30 million from various philanthropic donors, including the Future of Life Institute and Open Philanthropy. In a blog post announcing the new organization, he said the LawZero had been created 'in response to evidence that today's frontier AI models are growing dangerous capabilities and behaviours, including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment.' 'LawZero's research will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers, including algorithmic bias, intentional misuse, and loss of human control,' he wrote. The non-profit is building a system called Scientist AI designed to serve as a guardrail for increasingly powerful AI agents. AI models created by the non-profit will not give the definitive answers typical of current systems. Instead, they will give probabilities for whether a response is correct. Bengio told The Guardian that his models would have a 'sense of humility that it isn't sure about the answer.' In the blog post announcing the venture, Bengio said he was 'deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit—especially tendencies toward self-preservation and deception.' He cited recent examples, including a scenario in which Anthropic's Claude 4 chose to blackmail an engineer to avoid being replaced, as well as another experiment that showed an AI model covertly embedding its code into a system to avoid being replaced. 'These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked,' Bengio said. Some AI systems have also shown signs of deception or displayed a tendency to lie. AI models are often optimized to please users rather than tell the truth, which can lead to responses that are positive but sometimes incorrect or over the top. For example, OpenAI was recently forced to pull an update to ChatGPT after users pointed out the chatbot was suddenly showering them with praise and flattery. Advanced AI reasoning models have also shown signs of 'reward hacking,' where AI systems 'game' tasks by exploiting loopholes rather than genuinely achieving the goal desired by the user via ethical means. Recent studies have also shown evidence that models can recognize when they're being tested and alter their behavior accordingly, something known as situational awareness. This growing awareness, combined with examples of reward hacking, has prompted concerns that AI could eventually engage in deception strategically. Bengio, along with fellow Turing award recipient Geoffrey Hinton, has been vocal in his criticism of the AI race currently playing out across the tech industry. In a recent interview with the Financial Times, Bengio said the AI arms race between leading labs 'pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety.' Bengio has said advanced AI systems pose societal and existential risks and has voiced support for strong regulation and international cooperation. This story was originally featured on