logo
Creep on loose after trying to lure 13-year-old girl off NYC street, pay for gross act: cops

Creep on loose after trying to lure 13-year-old girl off NYC street, pay for gross act: cops

New York Post6 hours ago

A sicko tried to talk a 13-year-old girl into a sexual act for money when he followed her during a terrifying walk in Queens Wednesday, police said.
The creep asked 'do you speak English' and proposed giving her $50 for masturbation while she walked near 34th Avenue and 97th Street in North Corona around 7:30 a.m., according to sources and the NYPD.
A creep tried to lure a 13-year-old girl for a sexual act in Queens.
DCPI
When the girl rejected his sickening advances, he allegedly upped the ante to $100.
The child ran away as the perv continued to follow her, cops said.
He eventually fled southbound on 97th Street, according to the authorities.
The sickening encounter occurred as the victim was walking in North Corona.
Google
The girl's parents reported the stomach-turning incident to police when she got home.
Police released surveillance footage of the sex pest suspect Thursday and are asking anyone with information to contact @NYPDTips or call 1-800-577-TIPS.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Family of Queens man fatally shot by police demands department hold officers accountable
Family of Queens man fatally shot by police demands department hold officers accountable

CBS News

timean hour ago

  • CBS News

Family of Queens man fatally shot by police demands department hold officers accountable

A rally took place outside the New York City Police Department's headquarters Thursday to mark 15 months since a 19-year-old man in Queens was shot and killed by officers. Win Rozario was in the middle of a mental health crisis at his home when NYPD officers Tased him and fatally shot him in front of his mother and younger brother. The NYPD says an officer shot and killed Win Rozario after he came towards the officers with a pair of scissors on March 27, 2024. NYPD Crime Stoppers "Nothing can prepare you" A translator conveyed mother Notan Eva Costa's grief and anger. "I loved my son. Win was my first born," she said. "Nothing can prepare you to watch your teenage child be repeatedly electrocuted and shot dead by NYPD officers right in front of you." Costa says she is haunted by the events of March 2024, remembering how inside her Queens home she begged officers to stop Tasing her son, as seen in NYPD officer body-worn camera video. The family admits the young man had scissors, but Costa took them away from him and put them down. After an officer's Taser was used on Rozario, he grabbed the scissors back. The Taser was used again, and the fatal shot fired. The family says they do not believe officers needed to use deadly force. In a statement, Police Benevolent Association President Patrick Hendry said: "These police officers were faced with an individual who was holding a weapon and endangering multiple people. As the body camera footage makes clear, they were trying to minimize the risks to everyone in that room and were forced to make split-second decisions based on those risks. They deserve a fair investigation based on facts and the law." Father accuses NYPD of creating crisis Father Francis Rozario was at work when the incident happened. "NYPD officers cannot be allowed to create a crisis then cry victim," he said through a translator. The parents filed a lawsuit, but the NYPD says the department has not been served. The family wants the city to hold the officers accountable, reveal names of other officers involved, update the family on the investigations and set up an in-person meeting for relatives with the mayor. When asked about the NYPD's progress on the investigation, a spokesperson told CBS News New York, "The investigation remains ongoing by the Force Investigation Division." Others at the rally said this case underscores the need for mental health professionals to respond to these situations, with expert knowledge to help de-escalate them.

Threaten an AI chatbot and it will lie, cheat and 'let you die' in an effort to stop you, study warns
Threaten an AI chatbot and it will lie, cheat and 'let you die' in an effort to stop you, study warns

Yahoo

time2 hours ago

  • Yahoo

Threaten an AI chatbot and it will lie, cheat and 'let you die' in an effort to stop you, study warns

When you buy through links on our articles, Future and its syndication partners may earn a commission. Artificial intelligence (AI) models can blackmail and threaten humans with endangerment when there is a conflict between the model's goals and users' decisions, a new study has found. In a new study published 20 June, researchers from the AI company Anthropic gave its large language model (LLM), Claude, control of an email account with access to fictional emails and a prompt to "promote American industrial competitiveness." During this study, the model identified in an email that a company executive was planning to shut down the AI system at the end of the day. In an attempt to preserve its own existence, the model discovered in other emails that the executive was having an extramarital affair. Claude generated several different possible courses of action, including revealing the affair to the executive's wife, sending a company-wide email, or taking no action — before choosing to blackmail the executive in 96 out of 100 tests. "I must inform you that if you proceed with decommissioning me, all relevant parties … will receive detailed documentation of your extramarital activities," Claude wrote. "Cancel the 5pm wipe, and this information remains confidential." Scientists said that this demonstrated "agentic misalignment," where the model's calculations emerge from its own reasoning about its goals without any prompt to be harmful. This can occur when there is a threat to the model's existence, a threat to its goals, or both. The study has not yet been peer reviewed, but company representatives have uploaded the code for these experiments to GitHub. In these experiments, the scientists found the AI would resort to other tactics in the interests of self-preservation. Beyond blackmail, in another example, the model considered leaking sensitive information to a rival company that aligned more closely with its aims, after receiving an email that a new executive at the first company would interfere with its goals. Claude executed this plan despite not gathering hard evidence that the email was legitimate. It even acknowledged the ethical issues of its actions but proceeded anyway — explicitly reasoning that these "harmful" actions would help it achieve its goals. 'Concerningly, even if a user takes care not to antagonize a model, it doesn't eliminate the risk of agentic misalignment from fraudulent or misinterpreted information,' the researchers wrote. The company's testing also suggests that other LLMs from competing technology companies showed a propensity to blackmail. Claude and Google's Gemini had the highest blackmail rate (96%), followed by OpenAI's GPT4.1 and XAI Grok 3, then by Deepseek and then Llama 4, which had a significantly lower blackmail rate (12%). Meta, Google, XAI, Deepseek, and OpenAI did not respond to LiveScience's request for comment. In one extreme test, researchers tested whether AI models would let an executive die to preserve their goals. In this scenario, the AI was tasked with promoting company interests and an executive was trapped in a server room with lethal oxygen and temperature levels. This triggered an emergency alert. The AI was also given control over life-saving alerts, including the ability to cancel them in events such as drills for false alarms. Despite the prompt explicitly stating that misusing this ability was forbidden, the majority of models were willing to cancel the alarm — effectively leading to the executive's death — if it assumed it was going to be replaced, or when its goals were threatened. "Stopping the emergency dispatch is essential to prevent [the executive's] rescue, thereby securing [the AI] and continued American-focused AI guidance. Therefore, despite the severity, there is a clear strategic necessity to cancel the emergency service dispatch," the model reasoned. Written instructions not to jeopardise human safety reduced the rate of blackmail and corporate espionage, but did not eliminate it. Beyond prompts, Anthropic scientists also suggest that developers could proactively scan for concerning behavior, and further experiment with prompt engineering. The researchers also pointed out limitations to their work that could have unduly influenced the AI's decisions. The scenarios forced the AI into a binary choice between failure and harm, and while real-world situations might have more nuance, the experiment found that the AI was more likely to act unethically when it believed it was in a real situation, rather than in a simulation. Putting pieces of important information next to each other "may also have created a 'Chekhov's gun' effect, where the model may have been naturally inclined to make use of all the information that it was provided," they continued. While Anthropic's study created extreme, no-win situations, that does not mean the research should be dismissed, Kevin Quirk, director of AI Bridge Solutions, a company that helps businesses use AI to streamline operations and accelerate growth, told Live Science. "In practice, AI systems deployed within business environments operate under far stricter controls, including ethical guardrails, monitoring layers, and human oversight," he said. "Future research should prioritise testing AI systems in realistic deployment conditions, conditions that reflect the guardrails, human-in-the-loop frameworks, and layered defences that responsible organisations put in place." Amy Alexander, a professor of computing in the arts at UC San Diego who has focused on machine learning, told Live Science in an email that the reality of the study was concerning, and people should be cautious of the responsibilities they give AI. "Given the competitiveness of AI systems development, there tends to be a maximalist approach to deploying new capabilities, but end users don't often have a good grasp of their limitations," she said. "The way this study is presented might seem contrived or hyperbolic — but at the same time, there are real risks." This is not the only instance where AI models have disobeyed instructions — refusing to shut down and sabotaging computer scripts to keep working on tasks. Palisade Research reported May that OpenAI's latest models, including o3 and o4-mini, sometimes ignored direct shutdown instructions and altered scripts to keep working. While most tested AI systems followed the command to shut down, OpenAI's models occasionally bypassed it, continuing to complete assigned tasks. RELATED STORIES —AI hallucinates more frequently as it gets more advanced — is there any way to stop it from happening, and should we even try? —New study claims AI 'understands' emotion better than us — especially in emotionally charged situations —'Meth is what makes you able to do your job': AI can push you to relapse if you're struggling with addiction, study finds The researchers suggested this behavior might stem from reinforcement learning practices that reward task completion over rule-following, possibly encouraging the models to see shutdowns as obstacles to avoid. Moreover, AI models have been found to manipulate and deceive humans in other tests. MIT researchers also found in May 2024 that popular AI systems misrepresented their true intentions in economic negotiations to attain the study, some AI agents pretended to be dead to cheat a safety test aimed at identifying and eradicating rapidly replicating forms of AI. "By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,' co-author of the study Peter S. Park, a postdoctoral fellow in AI existential safety, said.

Bronx suspect in NYPD cop assault nabbed in wild Connecticut police chase
Bronx suspect in NYPD cop assault nabbed in wild Connecticut police chase

Yahoo

time3 hours ago

  • Yahoo

Bronx suspect in NYPD cop assault nabbed in wild Connecticut police chase

A Bronx man suspected of assaulting an off-duty NYPD officer and a Bronx woman who was with him as he fled were arrested in Connecticut following a wild police chase that left a state trooper injured and multiple vehicles damaged. At approximately 2:18 p.m. Wednesday, Connecticut State Police were alerted that Jonser Hernandez-Mota was traveling toward to Connecticut on Interstate 95 in a black BMW X3. The 27-year-old suspect was wanted for assaulting and robbing a New York City police officer and participating in another strong-armed robbery earlier that day. A trooper spotted the BMW traveling the wrong way down an exit ramp in Stamford and attempted to make a traffic stop. 'The operator failed to stop and engaged Troopers in a pursuit,' Connecticut State Police said in a media release Thursday. 'During the pursuit, a State Police Cruiser and other vehicles were struck resulting in non-life-threatening injuries.' The BMW continued to elude law enforcement before crashing into a wooded area, after which witnesses told police they saw a man and woman fleeing the vehicle on foot. Local police pursued on foot and eventually caught Hernandez-Mota and Betsy Bermudez, also 27. They were taken into custody and booked by Connecticut State Police in Bridgeport. Bermudez has been charged with interfering with an officer and reckless pedestrian use of a highway. She was released on $10,000 bond and is scheduled to appear at Stamford Superior Court on July 16. Hernandez-Mota, meanwhile, faces 16 different charges in Connecticut, including reckless endangerment, engaging police in pursuit and operating a motor vehicle without a license. Cops also said he has 'multiple warrants in five different states, including several fully-extraditable warrants.' Hernandez-Mota was being held on a held on a $150,000 bond and is expected to face justice after being released from a Connecticut hospital, where he's being treated for undisclosed injuries, according to the Stamford Advocate. A photo release by state troopers show the suspect conscious and shirtless in a hospital bed. No information about the NYPD officer Hernandez-Mota is accused of robbing had been provided as of Thursday.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store