
OpenAI taking claims of data breach 'seriously'
OpenAI said it is taking reports of a data breach 'seriously', but that it has not yet seen any evidence of its systems being compromised.
Reports on Friday said a hacker claimed to have obtained the log-in information for 20 million OpenAI accounts, including passwords and email addresses.
The claims were made on a hacking forum, where the threat actor provided what they alleged was a sample of the data obtained and offered to sell the full batch.
The claims have not been verified, but in a statement OpenAI said it was looking into the reports.
'We take these claims seriously,' an OpenAI spokesperson said.
'We have not seen any evidence that this is connected to a compromise of OpenAI systems to date.'
The firm is the maker of ChatGPT, the AI chatbot which has exploded in popularity since its launch in late 2022.
Cybersecurity expert Jamie Akhtar, chief executive and co-founder of CyberSmart, said consumers should exercise additional caution by updating passwords and log-in credentials, and warned that cybercriminals could look to exploit the incident.
'If verified, this breach could have huge ramifications, both for OpenAI and its customers,' he said.
'Millions of people and businesses have embraced the company's technology into their daily lives, so the potential damage to OpenAI's reputation for data security could be huge.
'Worse still, compromised accounts could be used to access and abuse sensitive customer data or to exploit OpenAI's APIs and distribute malware and other cyber nasties.
'There's also the possibility of cybercriminals using stolen user credentials to produce targeted phishing campaigns, steal identities, or commit financial fraud.
'Although this breach is yet to be verified by OpenAI, anyone using the tool should update their passwords and credentials, as a precaution.
'And, if you haven't already, switch on multi-factor authentication within OpenAI's settings, as this should give you another layer of protection even if your password has been compromised.'

Related Articles


Evening Standard, 4 hours ago
Rotten Apple: are we finally watching the death of the iPhone?
While panic ripples through Apple, Sir Jony Ive — inventor of its flagship products from the iMac to the iPod, iPhone and Apple Watch — has risen up with an alternative answer elsewhere. Ive left Apple in 2019 to start his own design firm, LoveFrom, with the help of Laurene Powell Jobs, widow of Steve Jobs, who was an early investor. On May 21 came the announcement that OpenAI, the developer of ChatGPT, had acquired Ive's AI design start-up, io — in which Powell Jobs has also invested — in a deal worth $6.4 billion. News then broke that Ive and Sam Altman, the CEO of OpenAI, are working together to develop a new AI device, called a 'companion'.

The National, 5 hours ago
Justice will come under threat from AI's 'hallucinations'
Did you know that large language models like ChatGPT are in the habit of embedding random but superficially plausible falsehoods into the answers they generate? These are your hallucinations. Facts are made up. Counterfeit sources are invented. Real people are conflated with one another. Real-world sources are garbled. Quotations are falsified and attributed to authors who either don't exist, or didn't express any of the sentiments attributed to them. And troublingly, none of these errors are likely to be obvious to people relying on the pseudo-information produced, because it all looks so plausible and machine-generated.

We aren't helped in this by uncritical representations of AI as the sovereign remedy to all ills – from YouTube advertisers hawking easy solutions to struggling workers and firms, to governments trying to position themselves as modern and technologically nimble. Back in January, Keir Starmer announced that 'artificial intelligence will deliver a decade of national renewal', promising a plan that would 'mainline AI into the veins of this enterprising nation'. An interesting choice of metaphor, you might think, for a government which generally takes a dim view of the intravenous consumption of stupefying substances.

Describing these failures as 'hallucinations' is not uncontested. Some folk think the language of hallucinations is too anthropomorphic, attributing features of human cognition and human consciousness to a predictive language process which we all need reminding doesn't actually reason or feel. The problem here isn't seeing fairies at the bottom of the garden, but, faced with an unknown answer, making up facts to fill the void. One of the definitions of these system failures I like best is 'a tendency to invent facts in moments of uncertainty'.

This is why some argue 'bullshitting' much better captures what generative AI is actually doing. A liar knowingly tells you something that isn't true. A bullshitter, by contrast, preserves the illusion of themselves as a knowing and wise person by peddling whatever factoids they feel they need to get them through a potentially awkward encounter – reckless or indifferent to whether or not what they've said is true. Generative AI is a bullshitter. The knowledge it generates is meretricious. When using it, the mantra should not be 'trust but verify' – but 'mistrust and verify'. And given this healthy mistrust and time-consuming need for verification, you might wonder how much of a time-saver this unreliable chatbot can really be.

Higher education is still reeling from the impact. Up and down the country this month, lecturers have been grading papers, working their way through exam scripts and sitting in assessment boards, tracking our students' many achievements, but also contending with the impact of this wave of bullshit, as lazy, lost or desperate students decide to resort to generative AI to try to stumble through their assessments. If you think the function of education is achieving extrinsic goals – getting the essay submitted, securing a grade, winning the degree – then I guess AI-assisted progress to that end won't strike you as problematic.

One of the profound pleasures of work in higher education is watching the evolution of your students. When many 18-year-olds arrive in law school for the first time, they almost always take a while to find their feet. The standards are different. The grading curve is sharper.
We unaccountably teach young people almost nothing about law in Scottish schools, and new students' first encounter with the reality of legal reading, legal argument and legal sources often causes a bit of a shock to the system. But over four years, the development you see is often remarkable, with final-year students producing work which they could never have imagined was in them just a few teaching terms earlier.

And that, for me, is the fundamental point. The work is in the students. Yes, it requires a critical synthesis with the world, engagement with other people's ideas, a breadth of reading and references – but strong students pull the project out of their own guts. They can look at the final text and think, with significant and well-earned satisfaction – I made that. Now I know I'm capable of digesting a debate, marshalling an argument, presenting a mess of facts in a coherent and well-structured way – by myself, for myself. Education has changed me. It has allowed me to do things I couldn't imagine doing before.

Folk turning in AI-generated dissertations or essays, undetected, can only enjoy the satisfactions of time saved and getting away with it, and an anxious future, knowing that, given the opportunity to honestly test themselves and show what they had in them, they decided instead to cheat.

At university, being rumbled for reliance on AI normally results in a zero mark and a resit assessment, but the real-world impacts of these hallucinations are now accumulating in ways that should focus the mind, particularly in the legal sector. In London last week, the High Court handed down a stinging contempt of court judgment involving two cases of lawyers rumbled after citing bogus case law in separate court actions. The lawyers in question join hundreds of others from jurisdictions across the world, who've found their professional reputations shredded by being caught by the court after relying on hallucinated legal sources.

We aren't talking about nickel-and-dime litigation here, either. One of the two cases was an £89 million damages claim against the Qatar National Bank. The court found that the claimants cited 45 cases, 18 of which turned out to be invented, while quotations which had been relied on in their briefs were also phoney. The second case involved a very junior barrister who presented a judicial review petition, relying on a series of legal authorities which had the misfortune not to exist.

As Dame Victoria Sharp points out, there are 'serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused' in this way, precisely because of its ability to produce 'apparently coherent and plausible responses' which prove 'entirely incorrect', make 'confident assertions that are simply untrue', 'cite sources that do not exist' and 'purport to quote passages from a genuine source that do not appear in that source'. The court concluded that 'freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research'. I agree.
For legal professionals to be presenting cases in this way is indefensible, with serious implications for professional standards and integrity, for courts relying on the legal argument put before them, and for clients who suffer the consequences of their case being presented using duff statements of the law or duff sources.

I worry too about the potentially bigger impact these hallucinations will have on people forced to represent themselves in legal actions. Legal aid remains in crisis in this country. Many people who want to have the benefit of legal advice and representation find they cannot access it, particularly in civil matters. The saying goes that 'a man who represents himself in court has a fool for a client'. In modern Britain, a person who represents themselves in court normally has the only lawyer they can afford, as foolish and unfair as this might be.

Acting as a party litigant is no easy task. Legal procedures are often arcane and unfamiliar. Legal institutions can be intimidating. If the other side has the benefit of a solicitor or advocate, there's a real inequality of arms. But even before you step near a Sheriff Court, you need to have some understanding of the legal principles applying to your case to state it clearly. Misunderstand and misrepresent the law, and you can easily lose a winnable case.

In Scotland, in particular, significant parts of our law aren't publicly accessible or codified. This means ordinary people often can't find reliable and accessible online sources on what the law is – but it also means that LLMs like ChatGPT haven't been able to crawl over these sources to inform the automated answers they spit out. This means that these large language models are much more likely to give questioning Scots answers based on English or sometimes even American law than the actual rules and principles a litigant in person needs to know to persuade the Sheriff that they have a good case. Hallucination rates are high. Justice will suffer.


Daily Mail, a day ago
Lawyers warned to stop using ChatGPT to argue lawsuits after AI programs 'made up fictitious cases'
Lawyers in England and Wales have been warned they could face 'severe sanctions', including potential criminal prosecution, if they present false material generated by AI in court. The ruling, by one of Britain's most senior judges, comes on the back of a string of cases in which artificial intelligence software has produced fictitious legal cases and completely invented quotes.

The first case saw AI fabricate 'inaccurate and fictitious' material in a lawsuit brought against two banks, The New York Times reported. Meanwhile, the second involved a lawyer for a man suing his local council who was unable to explain the origin of the nonexistent precedents in his legal argument.

While large language models (LLMs) like OpenAI's ChatGPT and Google's Gemini are capable of producing long accurate-sounding texts, they are technically only focused on producing a 'statistically plausible' reply. The programs are also prone to what researchers call 'hallucinations' - outputs that are misleading or lack any factual basis. AI agent and assistance platform Vectara has monitored the accuracy of AI chatbots since 2023 and found that the top programs hallucinate between 0.7 per cent and 2.2 per cent of the time - with others dramatically higher. However, those figures become astronomically higher when the chatbots are prompted to produce longer texts from scratch, with market leader OpenAI recently acknowledging that its flagship ChatGPT system hallucinates between 51 per cent and 79 per cent of the time if asked open-ended questions.

Dame Victoria Sharp, president of the King's Bench Division of the High Court, and Justice Jeremy Johnson KC authored the new ruling. In it they say: 'The referrals arise out of the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court.

'The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practice before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.'

The pair argued that existing guidance around AI was 'insufficient to address the misuse of artificial intelligence'. Judge Sharp wrote: 'There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.' While acknowledging that AI remained a 'powerful technology' with legitimate use cases, she nevertheless reiterated that the technology brought 'risks as well as opportunities'.

In the first case cited in the judgment, a British man sought millions in damages from two banks. The court discovered that 18 out of 45 citations included in the legal arguments featured past cases that simply did not exist.
Even in instances in which the cases did exist, often the quotations were inaccurate or did not support the legal argument being presented. The second case, which dates to May 2023, involved a man who was turned down for emergency accommodation from the local authority and ultimately became homeless. His legal team cited five past cases, which the opposing lawyers discovered simply did not exist - tipped off by the US spellings and formulaic prose style.

Rapid improvements in AI systems mean their use is becoming a global issue in the field of law, as the judicial sector figures out how to incorporate artificial intelligence into what is frequently a very traditional, rules-bound work environment.

Earlier this year a New York lawyer faced disciplinary proceedings after being caught using ChatGPT for research and citing a non-existent case in a medical malpractice lawsuit. Attorney Jae Lee was referred to the grievance panel of the 2nd U.S. Circuit Court of Appeals in February 2025 after she cited a fabricated case about a Queens doctor botching an abortion in an appeal to revive her client's lawsuit. The case did not exist, having been conjured up by OpenAI's ChatGPT, and her client's case was dismissed. The court ordered Lee to submit a copy of the cited decision after it was not able to find the case. She responded that she was 'unable to furnish a copy of the decision.' Lee said she had included a case 'suggested' by ChatGPT but that there was 'no bad faith, willfulness, or prejudice towards the opposing party or the judicial system' in doing so. The conduct 'falls well below the basic obligations of counsel,' a three-judge panel for the Manhattan-based appeals court wrote.

In June, two New York lawyers were fined $5,000 after they relied on fake research created by ChatGPT for a submission in an injury claim against Avianca airline. Judge Kevin Castel said attorneys Steven Schwartz and Peter LoDuca acted in bad faith by using the AI bot's submissions - some of which contained 'gibberish' - even after judicial orders questioned their authenticity.