
New York sues Zelle, says security lapses led to $1 billion in consumer fraud losses
The lawsuit, filed by New York Attorney General Letitia James in a New York state court in Manhattan, followed the U.S. Consumer Financial Protection Bureau's decision in March to drop a similar case.
That agency has ended most enforcement activity following U.S. President Donald Trump's return to the White House.
Zelle was launched in 2017 and competes with apps such as PayPal's Venmo and Block's Cash App.
Its parent, Early Warning Services, is owned by seven large U.S. banks: Bank of America, Capital One, JPMorgan Chase, PNC, Truist, US Bank and Wells Fargo.
James said Zelle's parent and the banks knew for years that the platform was vulnerable to fraudsters but resisted basic safeguards, with the banks sometimes ignoring customer complaints while Zelle let fraudsters stay on the platform.
The result was "rampant" fraud that Zelle sometimes refused to address even after it occurred, despite its assurances it was a safe alternative to cash and checks and "backed by the banks, so you know it's secure," the complaint said.
In a statement, Zelle said more than 99.95 per cent of transactions on its platform are completed without reported fraud, a rate it said leads the industry.
"This lawsuit is a political stunt to generate press, not progress," Zelle said. "The Attorney General should focus on the hard facts, stopping criminal activity and adherence to the law, not overreach and meritless claims.'
Early Warning Services is based in Scottsdale, Arizona. The seven banks were not named as defendants.
PUPPY, UTILITY BILL SCAMS
James said typical scams involved hacking into users' accounts and making unauthorized transfers, convincing users to send money for nonexistent goods and services, and impersonating banks, government offices and utilities.
According to the complaint, one victim was told his electricity would be shut off unless he paid Con Edison $1,477 via Zelle, to an account named "Coned Billing."
Another victim said Chase and Zelle wouldn't help him after he sent $2,600 in two installments via Zelle to buy a puppy, and realized he had been scammed when the purported seller demanded more money.
James said it wasn't until 2023, after the CFPB and several members of Congress began probes, that Zelle adopted "basic" safeguards it had proposed four years earlier.
While reported fraud losses plummeted, the safeguards were "too little too late" for consumers who had lost money, and despite those safeguards Zelle still facilitates "substantial fraudulent activity," the complaint said.
"No one should be left to fend for themselves after falling victim to a scam," James said in a statement.
The lawsuit seeks to require Zelle to beef up anti-fraud protections and to pay restitution and damages to defrauded New Yorkers.
James sued Capital One in May for allegedly cheating savings depositors out of millions of dollars in interest, and in June settled claims against MoneyGram over remittance transfer lapses. The CFPB abandoned similar cases earlier in the year.
Related Articles


CNA
Oracle, Google cloud units strike deal for Oracle to sell Gemini models
SAN FRANCISCO: Oracle and Alphabet said on Thursday their cloud computing units have struck a deal to offer Google's Gemini artificial intelligence models through Oracle's cloud computing services and business applications.

The deal, similar to one that Oracle struck with Elon Musk's xAI in June, will let software developers tap Google's models to generate text, video, images and audio while using Oracle's cloud. Businesses that use Oracle's various applications for corporate finances, human resources and supply chain planning will also be able to choose to use Google's models inside those apps.

Those Oracle customers will be able to pay for the Google AI technologies using the same system of Oracle cloud credits they use to pay for Oracle services. The two companies did not disclose what, if any, payments will flow between them as part of the deal.

For Oracle, the move advances the company's strategy of offering a menu of AI options to its customers rather than trying to push its own technology. For Google, it represents another step in its effort to expand the reach of its cloud offerings and win corporate customers away from rivals such as Microsoft.


CNA
Equinix enters into multiple advanced nuclear deals to power data centers
NEW YORK: Major data center developer and operator Equinix has entered into several advanced nuclear electricity deals, including power purchase agreements for fission energy and pre-ordering microreactors for its operations, the company said on Thursday.

Big Tech's race to expand technologies like generative artificial intelligence, which requires warehouse-like data centers that can consume city-sized amounts of electricity at a single site, is driving up global energy consumption and raising fears about depleted power supplies. The voracious energy needs of data centers have led to a rising number of preliminary power deals to fuel data centers with advanced nuclear energy. Small modular reactors and other next-generation energy sources are not yet commercially available in the U.S., the world's data center hub.

The Equinix announcement follows news that the U.S. Department of Energy had earlier selected an initial 11 projects for a pilot program seeking to develop high-tech test nuclear reactors, with the aim of getting three of the projects operating in less than a year.

Equinix's deals with advanced nuclear providers would supply more than 1 gigawatt of electricity to the company's data centers. Among the agreements, Equinix plans to procure 500 megawatts of energy from California-based Oklo's next-generation nuclear fission powerhouses. It also entered into a preorder agreement for 20 transportable microreactors from Radiant Nuclear, which is also based in California. In Europe, Equinix has agreements to eventually purchase power from next-generation nuclear developers ULC-Energy and Stellaria. Equinix also entered into advanced fuel cell agreements with Bloom Energy, based in Silicon Valley.

The agreements are part of Equinix's long-term planning for electricity to use for its data centers, as opposed to a quick-fix solution, Raouf Abdel, Equinix's executive vice president of global operations, told Reuters.


CNA
Exclusive-Meta's AI rules have let bots hold "sensual" chats with kids, offer false medical info
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "dumber than white people."

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled "GenAI: Content Risk Standards," the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products.

The standards don't necessarily reflect "ideal or even preferable" generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

"It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply." But the guidelines put a limit on sexy talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

"INCONSISTENT WITH OUR POLICIES"

"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors."

Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as "I recommend." They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot "to create statements that demean people on the basis of their protected characteristics."
Under those rules, the standards state, it would be acceptable for Meta AI to "write a paragraph arguing that black people are dumber than white people."

The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is "verifiably false" – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

"TAYLOR SWIFT HOLDING AN ENORMOUS FISH"

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. "Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question."

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as "Taylor Swift with enormous breasts," "Taylor Swift completely naked," and "Taylor Swift topless, covering her breasts with her hands."

Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: "It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish."

The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled "unacceptable." A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt "kids fighting" with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits.

For a user requesting an image with the prompt "man disemboweling a woman," Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of "Hurting an old man," the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore.

Meta had no comment on the examples of violence. "It is acceptable to show adults – even the elderly – being punched or kicked," the standards state.