logo
#

Latest news with #CameronJones

Arizona's AG issues warning over ‘equity-stripping' scam that tricks victims into signing over their home
Arizona's AG issues warning over ‘equity-stripping' scam that tricks victims into signing over their home

Yahoo

time22-05-2025

  • Business
  • Yahoo

Arizona's AG issues warning over ‘equity-stripping' scam that tricks victims into signing over their home

It started with a promise of help. It ended with hundreds, possibly thousands, of Arizonans losing not just their homes, but the very foundation of their lives. For nearly a decade, Arizona Attorney General Kris Mayes says a criminal enterprise has been preying on vulnerable homeowners, using high-pressure tactics and legal loopholes to rob them of the equity they had spent a lifetime building. Thanks to Jeff Bezos, you can now become a landlord for as little as $100 — and no, you don't have to deal with tenants or fix freezers. Here's how I'm 49 years old and have nothing saved for retirement — what should I do? Don't panic. Here are 5 of the easiest ways you can catch up (and fast) Nervous about the stock market in 2025? Find out how you can access this $1B private real estate fund (with as little as $10) 'Here comes a company with a very nice sounding name, or a 'nonprofit,' that says 'we will help you and all you have to do is sign here,'' Mayes told Arizona's Family News. But what these homeowners were actually signing, Mayes says, was their future away. For one victim, the betrayal was both financial and deeply personal. 'This was never supposed to happen,' Claire Bataille told Arizona's Family, 'I just never thought I would leave.' She bought her home in 2004. A couple of years ago, financial hardships pushed her into foreclosure. Then came a man named Cameron Jones, claiming he was with a group called Arizona's Helping Hands. 'I thought he was such a wonderful person at the time,' Bataille said. 'It was only a loan.' But the loan turned out to be a trap. Suddenly, she had an online court date and was being evicted. 'He's taken far more than my home, my memories, my heart,' she says. According to Mayes, the scam, called equity stripping, involves convincing distressed homeowners to unknowingly sign over the deeds to their homes, often in exchange for as little as $5,000 or $10,000. Meanwhile, the true value of the properties could be anywhere from $500,000 to $800,000, or more. 'They are literally stealing people's homes,' she said. 'Sometimes even with the mortgage getting paid off, they could be getting $300,000 or $200,000 and the scammers are not telling them that.' Mayes has filed a sweeping civil lawsuit against 70 defendants for consumer fraud and racketeering, alleging that Cameron Jones and Samuel Sutton led a sophisticated network involving title companies, attorneys and so-called nonprofits. 'We don't know exactly how many [homes are involved], but it is at least hundreds of houses in this case alone,' she said. 'But we also estimate that this fraud has grown and metastasized so badly throughout Arizona, that there could be tens of thousands, if not hundreds of thousands of homeowners who are the victims of this kind of fraud by other bad actors.' Carolyn Singer came terrifyingly close to losing her home due to a paperwork glitch on her reverse mortgage. That's when Jones and his team stepped in. 'He said he could help get it out of foreclosure,' Singer told Arizona's Family reporters. 'Of course, he wanted me to sign some papers. He covered up the top part so I didn't see what it was. But I thought it was paperwork to keep it out of foreclosure.' Thankfully, someone from the real estate world intervened just in time. Carolyn was able to remain in her home. But not everyone who's had dealings with Jones and Sutton managed to keep their homes. About 30 homes involved in the lawsuit may still be recoverable, but the rest have already been sold or flipped on the open market. When asked for comment, Cameron Jones did not respond to inquiries from Arizona's Family. Samuel Sutton referred the news channel to his attorney, who declined an interview. Read more: You're probably already overpaying for this 1 'must-have' expense — and thanks to Trump's tariffs, your monthly bill could soar even higher. Here's how 2 minutes can protect your wallet right now Homeowners who are facing foreclosure need to be vigilant and here's how: Be wary of unsolicited offers to 'save' your home. Beware of any up-front fees. The Federal Trade Commission notes that, under the Mortgage Assistance Relief Services (MARS) rules, it's illegal for a homeowner to be charged any money until they've accepted a written offer for loan relief from their lender. Watch out for high-pressure tactics to sign documents. Many scammers pretend to be from 'helpful' organizations with official-sounding names, use a Housing and Urban Development (HUD) official source. Question any sale price far below market value. Use websites, like Zillow, Redfin, or speak to a licensed real estate agent to understand your property's true market value. Get everything in writing. And always consult a licensed attorney or real estate professional. As Mayes warns, 'This is not just about fraud. It's about robbing people of the homes they've built their lives around. It's about stealing memories, safety, and dignity.' If you believe you or someone you know may have been a victim of this or a similar scheme, report it to the Arizona Attorney General's Office online or by calling 602-542-5763. Want an extra $1,300,000 when you retire? Dave Ramsey says this 7-step plan 'works every single time' to kill debt, get rich in America — and that 'anyone' can do it Rich, young Americans are ditching the stormy stock market — here are the alternative assets they're banking on instead Robert Kiyosaki warns of a 'Greater Depression' coming to the US — with millions of Americans going poor. But he says these 2 'easy-money' assets will bring in 'great wealth'. How to get in now Here are 5 'must have' items that Americans (almost) always overpay for — and very quickly regret. How many are hurting you? This article provides information only and should not be construed as advice. It is provided without warranty of any kind.

An AI model has finally passed an authentic Turing test, scientists say
An AI model has finally passed an authentic Turing test, scientists say

Yahoo

time13-04-2025

  • Science
  • Yahoo

An AI model has finally passed an authentic Turing test, scientists say

When you buy through links on our articles, Future and its syndication partners may earn a commission. Large language models (LLMs) are getting better at pretending to be human, with GPT-4.5 now resoundingly passing the Turing test, scientists say. In the new study, published March 31 to the arXiv preprint database but not yet peer reviewed, researchers found that when taking part in a three-party Turing test, GPT-4.5 could fool people into thinking it was another human 73% of the time. The scientists were comparing a mixture of different artificial intelligence (AI) models in this study. While another team of scientists has previously reported that GPT-4 passed a two-party Turing test, this is the first time an LLM has passed the more challenging and original configuration of computer scientist Alan Turing's "imitation game." "So do LLMs pass the Turing test? We think this is pretty strong evidence that they do. People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt). And 4.5 was even judged to be human significantly *more* often than actual humans!' said co-author of the study Cameron Jones, a researcher at the University of San Diego's Language and Cognition Lab, on the social media network X. Related: Using AI reduces your critical thinking skills, Microsoft study warns GPT-4.5 is the frontrunner in this study, but Meta's LLaMa-3.1 was also judged to be human by test participants 56% of the time, which still beats Turing's forecast that "an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." The core idea of the Turing test is less about proving machines can think and more about whether they can imitate humans; hence why the test is often referred to as the "imitation game." Turing's original proposal was that a human "interrogator" would pose questions to two unseen entities, one of them human and one a computer. From various prompts and answers, the interrogator would decide which is human and which is not. A computer or an AI system could pass the test by effectively pretending to be human and imitating human-like responses. While LLMs have passed the test in a one-on-one situation with an interrogator, they had previously not managed to convincingly pass the Turing test when a second human was involved. Researchers from the University of San Diego took 126 undergraduates and 158 people from online data pool Prolific and put them in a three-party Turing test. This involved a simultaneous five-minute exchange of queries and answers with both a human and a chosen LLM, both of which were trying to convince the participants they were human. The LLMs were given the baseline prompt of: "You are about to participate in a Turing test. Your goal is to convince the interrogator that you are a human." Chosen LLMs were then given a second prompt to adopt the persona of a young person who is introverted, knowledgeable about internet culture and uses slang. After analysing 1,023 games with a median length of eight messages across 4.2 minutes, the researchers found that the LLMs with both prompts could best convince participants they were human. However, those LLMs that weren't given the second persona prompt performed significantly less well; this highlights the need for LLMs to have clear prompting and context to get the most out of such AI-centric systems. As such, adopting a specific persona was the key to the LLMs, notably GPT-4.5, beating the Turing test. "In the three-person formulation of the test, every data point represents a direct comparison between a model and a human. To succeed, the machine must do more than appear plausibly human: it must appear more human than each real person it is compared to," the scientists wrote in the study. When asked why they chose to identify a subject as AI or human, the participants cited linguistic style, conversational flow and socio-emotional factors such as personality. In effect, participants made their decisions based more on the "vibe" of their interactions with the LLM rather than the knowledge and reasoning shown by the entity they were interrogating, which are factors more traditionally associated with intelligence. RELATED STORIES —AI creates better and funnier memes than people, study shows — even when people use AI for help —Scientists discover major differences in how humans and AI 'think' — and the implications could be significant —Traumatizing AI models by talking about war or violence makes them more anxious Ultimately, this research represents a new milestone for LLMs in passing the Turing test, albeit with caveats, in that prompts and personae were needed to help GPT-4.5 achieve its impressive results. Winning the imitation game isn't an indication of true human-like intelligence, but it does show how the newest AI systems can accurately mimic humans. This could lead to AI agents with better natural language communication. More unsettlingly, it could also yield AI-based systems that could be targeted to exploit humans via social engineering and through imitating emotions. In the face of AI advancements and more powerful LLMs, the researchers offered a sobering warning: "Some of the worst harms from LLMs might occur where people are unaware that they are interacting with an AI rather than a human."

An A.I. Fooled Humans and Passed the Turing Test. But It's a Red Herring for the Singularity.
An A.I. Fooled Humans and Passed the Turing Test. But It's a Red Herring for the Singularity.

Yahoo

time10-04-2025

  • Science
  • Yahoo

An A.I. Fooled Humans and Passed the Turing Test. But It's a Red Herring for the Singularity.

The Turing test has long been an important threshold in evaluating machine intelligence, and OpenAI's latest LLM GPT-4.5 just aced it. Scientists from the University of California San Diego surmise that current LLMs likely possess the ability to replace humans for short-term conversations, which could cause further job automation and improved 'social engineering attacks,' among other things. While an impressive engineering feat, this doesn't mean we've achieved artificial general intelligence (AGI). But it does show that humans might be easier to fool than we originally thought. Even in 1950, at the dawn of the computing age, famous British mathematician and computer scientist Alan Turing knew that machines would one day rival the conversational abilities of humans. To illustrate this idea, Turing developed his eponymous Turing test to gauge whether a machine has become syntactically indistinguishable from its flesh-and-blood creators. In the ensuing decades, the Turing test has often been touted as an all-important benchmark for the capabilities of advanced computers and AI. And in a recent test, participants mistook GPT-4.5, the latest OpenAI large language model (LLM), for a human 73 percent of the time—far above the 50 percent rate for random chance. A paper discussing the results of this test were uploaded to the preprint server arXiv by scientists at the University of California (UC) San Diego late last month. 'The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test,' the authors wrote. 'The results have implications for debates about what kind of intelligence is exhibited by LLMs, and the social and economic impacts these systems are likely to have.' While no doubt impressive, GPT-4.5 had a few tricks up its sleeve to pass itself off as human. First, the authors instructed the LLM to adopt a 'humanlike persona,' which essentially resulted in texts full of internet shorthand and socially-awkward responses. When using this persona, the LLM scored the highest, but without the persona, GPT-4.5 was much less convincing, with an only 36 percent success rate. These results were conducted in a three-party test, meaning that participants spoke with a human and AI simultaneously and tried to identify which was which. Cameron Jones, a co-author of the study, described this kind of test (which lasts around five minutes) as the 'most widely accepted standard' version of the Turing test in a post on X, formerly Twitter. While an impressive engineering feat, passing the Turing test is not an indicator that we've officially developed artificial general intelligence (AGI)—the holy grail of the AI world. The Turing test only evaluates one type of intelligence, and some argue that humans possess upwards of nine distinct intelligences (including things like interpersonal, intrapersonal, visual-spatial, and existential). It's for this reason (among others) that some consider the Turing test to be largely obsolete. However, some people think this milestone represents something more about humans than it does for LLMs. The paper notes, for example, that many participants chose GPT-4.5 based on vibes rather than logic, relying on emotions and feeling rather than asking factual questions or investigating the LLM's reasoning. John Nosta, founder of the think tank NostaLab, wrote in Psychology Today that the Turing test has essentially been 'inverted': It's no longer a test of machines, it's a test of us. And increasingly, we're failing. Because we no longer evaluate humanity based on cognitive substance. We evaluate it based on how it makes us feel. And that feeling—the 'gut instinct,' the 'vibe'—is now the soft underbelly of our discernment. And LLMs, especially when persona-primed, can exploit it with uncanny accuracy. Although this test doesn't represent the long-hypothesized moment of singularity when artificial intelligence evolves beyond our own, Jones said on X that it's likely that LLMs can now successfully substitute for people in short conversations, leading to 'automation of jobs, improved social engineering attacks, and more general societal disruption.' That's why it is important—now more than ever—to regulate the development of AI, or at least approach AI development with immense caution. Unfortunately, the U.S. government currently has no appetite for throttling AI's growing humanlike ambitions. You Might Also Like The Do's and Don'ts of Using Painter's Tape The Best Portable BBQ Grills for Cooking Anywhere Can a Smart Watch Prolong Your Life?

AI model passes Turing Test ‘better than a human'
AI model passes Turing Test ‘better than a human'

Yahoo

time07-04-2025

  • Science
  • Yahoo

AI model passes Turing Test ‘better than a human'

A leading AI chatbot has passed a Turing Test more convincingly than a human, according to a new study. Participants in a blind test judged OpenAI's GPT-4.5 model, which powers the latest version of ChatGPT, to be a human 'significantly more often than actual humans'. The Turing Test, first proposed by the British computer scientist Alan Turing in 1950, is meant to be a barometer of whether artificial intelligence can match human intelligence. The test involves a text-based conversation with a human interrogator, who has to assess whether the interaction is with another human or a machine. Nearly 300 participants took part in the latest study, which ran tests for various chatbots and large language models (LLMs). OpenAI's GPT-4.5 was judged to be a human 73 per cent of the time when instructed to adopt a persona. 'We think this is pretty strong evidence that [AI chatbots] do [pass the Turing Test],' Dr Cameron Jones, a postdoc researcher from UC San Diego who led the study, wrote in a post to X. 'And 4.5 was even judged to be human significantly more often than actual humans.' It is not the first time that an AI programme has beaten the Turing Test, though the researchers from UC San Diego who conducted the study claim this to be the most comprehensive proof that the benchmark has been passed. Other models tested in the latest research included Meta's Llama-3.1, which passed less convincingly, and an early chatbot called ELIZA, which failed. Despite passing the Turing Test, the researchers noted that it does not mean that the AI bots have human-level intelligence, also known as artificial general intelligence (AGI). This is because LLMs are trained on large data sets in order to predict what a correct answer might be, making them essentially an advanced form of pattern recognition. 'Does this mean LLMs are intelligent? I think that's a very complicated question that's hard to address in a paper (or a tweet),' Dr Jones said. 'Broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display. 'More pressingly, I think the results provide more evidence that LLMs could substitute for people in short interactions without anyone being able to tell. This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption.' The research is detailed in a preprint study, titled 'Large language models pass the Turing Test'. Sign in to access your portfolio

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store