Latest news with #Ben-GurionUniversity
Yahoo
4 days ago
- Science
- Yahoo
It's Still Ludicrously Easy to Jailbreak the Strongest AI Models, and the Companies Don't Care
You wouldn't use a chatbot for evil, would you? Of course not. But if you or some nefarious party wanted to force an AI model to start churning out a bunch of bad stuff it's not supposed to, it'd be surprisingly easy to do so.

That's according to a new paper from a team of computer scientists at Ben-Gurion University, who found that the AI industry's leading chatbots are still extremely vulnerable to jailbreaking, or being tricked into giving harmful responses they're designed not to — like telling you how to build chemical weapons, for one ominous example.

The key word in that is "still," because this is a threat the AI industry has long known about. And yet, shockingly, the researchers found in their testing that a jailbreak technique discovered over seven months ago still works on many of these leading LLMs.

The risk is "immediate, tangible, and deeply concerning," they wrote in the report, which was spotlighted recently by The Guardian — and is deepened by the rising number of "dark LLMs," they say, that are explicitly marketed as having little to no ethical guardrails to begin with.

"What was once restricted to state actors or organized crime groups may soon be in the hands of anyone with a laptop or even a mobile phone," the authors warn.

The challenge of aligning AI models, or keeping them adherent to human values, continues to loom over the industry. Even the most well-trained LLMs can behave chaotically, lying, making up facts, and generally saying what they're not supposed to. And the longer these models are out in the wild, the more they're exposed to attacks that try to incite this bad behavior.

Security researchers, for example, recently discovered a universal jailbreak technique that could bypass the safety guardrails of all the major LLMs, including OpenAI's GPT-4o, Google's Gemini 2.5, Microsoft's Copilot, and Anthropic's Claude 3.7. By using tricks like roleplaying as a fictional character, typing in leetspeak, and formatting prompts to mimic a "policy file" of the kind AI developers give their models, the red teamers goaded the chatbots into freely giving detailed tips on incredibly dangerous activities, including how to enrich uranium and create anthrax. Other research found that you could get an AI to ignore its guardrails simply by throwing typos, random numbers, and capitalized letters into a prompt.

One big problem the report identifies is just how much of this risky knowledge is embedded in an LLM's vast trove of training data, suggesting that the AI industry isn't being diligent enough about what it uses to feed its creations.

"It was shocking to see what this system of knowledge consists of," lead author Michael Fire, a researcher at Ben-Gurion University, told The Guardian.

"What sets this threat apart from previous technological risks is its unprecedented combination of accessibility, scalability and adaptability," added his fellow author Lior Rokach.

Fire and Rokach say they contacted the developers of the implicated leading LLMs to warn them about the universal jailbreak. Their responses, however, were "underwhelming." Some didn't respond at all, the researchers reported, and others claimed that the jailbreaks fell outside the scope of their bug bounty programs. In other words, the AI industry is seemingly throwing its hands up in the air.
"Organizations must treat LLMs like any other critical software component — one that requires rigorous security testing, continuous red teaming and contextual threat modelling," Peter Garraghan, an AI security expert at Lancaster University, told the Guardian. "Real security demands not just responsible disclosure, but responsible design and deployment practices." More on AI: AI Chatbots Are Becoming Even Worse At Summarizing Data


Arab News
11-03-2025
- Politics
- Arab News
Israelis' nomination of extremist settler leader for Nobel Peace Prize sparks online furor
DUBAI: Daniella Weiss, a radical settler leader, has been nominated by Israelis for this year's Nobel Peace Prize.

Professors Amos Azaria and Shalom Sadik, of Ariel University and Ben-Gurion University respectively, submitted the nomination, according to reports. In a letter to the Nobel Prize Committee, they reportedly claimed that 'the establishment of Jewish communities has prevented violence and enhanced security' and that despite both Jewish and Palestinian deaths in Gaza, casualties were 'significantly lower' in the West Bank due to Weiss' work.

Weiss, director of the Nachala Settlement Movement, is a prominent supporter of Israeli annexation and illegal settlements in Palestinian territories. Israel's West Bank settlements have been deemed illegal by the UN and several countries. In June 2024, Canada imposed sanctions against Weiss and six others 'in response to the grave breach of international peace and security posed by their violent and destabilizing actions against Palestinian civilians and their property in the West Bank.'

The nomination has left online users baffled and outraged. One said: 'For a moment, I thought this was a joke, but no, it's not.' Another said: 'No one will want to be honoured with a Noble prize if this ever happens.'

The Nobel Peace Prize winners will be announced on Oct. 10, with the award ceremony scheduled for Dec. 10.


Morocco World
11-03-2025
- Politics
- Morocco World
Leader of the Israeli Settler Movements Nominated for Nobel Peace Prize
Rabat – Daniella Weiss, a prominent supporter of Israeli annexation and illegal settlements in Palestinian territories, has reportedly been nominated by Israelis for the 2025 Nobel Peace Prize.

The nomination was reportedly made by two professors from Israel's Ariel University and Ben-Gurion University, who stated that she contributed to 'decades-long efforts in strengthening Jewish communities and promoting regional stability.' In their letter to the Nobel Prize Committee in Norway, professors Amos Azaria and Shalom Sadik claimed that the establishment of these Jewish communities has prevented violence and enhanced security.

In a baffling twist of logic, the two professors went on to argue that the illegal settlements led by Weiss have helped to 'significantly lower' the friction between Palestinians and Israelis, as allegedly evidenced by the lower number of 'casualties' in the West Bank compared to Gaza.

Israel's West Bank settlements are considered illegal by most of the international community, including the United Nations. Weiss herself was sanctioned by Canada in 2024 for 'extremist settler violence against civilians in the West Bank.'

Social media users were astonished by the absurd news, and many had to double-check its validity, mistaking it for a meme. 'Not even George Orwell could have dreamt this up,' said one commentator, with another adding that she is getting a 'Nobel prize in genocide and colonization.'

Weiss, who was born to American and Polish parents in Palestine in 1945, leads the Nachala Organization, known for supporting the illegal Israeli settler project and advocating for the annexation of the West Bank and Gaza Strip. Her rhetoric has been linked to increasing settler violence against Palestinians.

One year into the genocide in Gaza, Weiss spoke at a conference on Israel's frontier with Gaza, backed by the far-right Israeli Likud party, saying that Palestinians will 'disappear' from Gaza and that they 'lost their right' to the enclave as a result of October 7. 'We came here with one clear purpose: the purpose is to settle the entire Gaza Strip, not just part of it, not just a few settlements, the entire Gaza Strip from north to south,' said Weiss.

The Nobel Peace Prize recipients will be announced in October. A total of 338 candidates have been nominated for the award this year, a significant increase from 2024.