Latest news with #GPT-4o


CNBC
4 hours ago
- Business
- CNBC
San Francisco rolls out Microsoft's Copilot AI for 30,000 city workers
San Francisco Mayor Daniel Lurie announced Monday that generative artificial intelligence will be available to 30,000 workers across the city's government, leaning into the city's role as a leader in AI. The city will use Microsoft 365 Copilot Chat, powered by OpenAI's GPT-4o, for employees like nurses and social workers to improve city services. "It's going to allow us to use LLMs and produce faster response times," Lurie said. Lurie's administration said the move will make San Francisco one of the world's largest local governments to leverage AI. City Hall said Copilot will be made available across its departments to tackle administrative work like data analytics and drafting reports, giving workers more time to respond to residents. The move comes after a six-month test involving more than 2,000 city workers, which found that generative AI delivered productivity gains of up to five hours weekly. Lurie said the city used the 311 city services line as a test case that showed ways to improve service times for issues like trash and homeless encampments, as well as language translation. "We have over 42 languages spoken here in San Francisco," he said. "We don't always have enough translators to do all that. The AI tool is going to help us do that in seconds." While San Francisco is home to AI leaders from Anthropic to OpenAI and more, the city is relying on AI technology that will be available under its existing license with Microsoft, coming at no additional cost to the city, the mayor's office said. Lurie said he wants San Francisco to be "a beacon for cities around the globe on how they use this technology, and we're going to show the way."


San Francisco Chronicle
6 hours ago
- Business
- San Francisco Chronicle
S.F. government is embracing an OpenAI-powered chatbot to help with city services
San Francisco's city government is getting chatbot access as it continues to embrace artificial intelligence, Mayor Daniel Lurie said. Microsoft 365 Copilot, powered by OpenAI's GPT-4o model, will be available starting Monday to nearly 30,000 city employees. Lurie said the city is the largest local government to use generative AI for tasks including writing reports, data analysis and document summaries. 'San Francisco is the global home of AI, and now, we're putting that innovation to work with Microsoft Copilot Chat — allowing City Hall to better deliver for our residents,' said Lurie in a statement. 'As our city and the world embrace AI technology, San Francisco is setting the standard for how local government can responsibly do the same.' San Francisco city workers are being told to follow guidelines including keeping data secure, fact-checking and disclosing AI use. The city is partnering with nonprofit InnovateUS to train staff. The city is the world's biggest hub for leading AI companies, home to the headquarters of OpenAI, Anthropic, Databricks and Scale AI. Redmond, Wash.-based Microsoft also has multiple offices in the city and is bringing its Ignite conference to Moscone Center in the fall. Lurie previously worked with 26 business leaders, including OpenAI CEO Sam Altman, to establish a new advocacy group called the Partnership for San Francisco.


Euractiv
11 hours ago
- Health
- Euractiv
AI models highly vulnerable to health disinfo weaponisation
Artificial intelligence chatbots can be easily manipulated to deliver dangerous health disinformation, raising serious concerns about the readiness of large language models (LLMs) for public use, according to a new study. The peer-reviewed study, led by scientists from Flinders University in Australia with an international consortium of experts, tested five of the most prominent commercial LLMs by issuing covert system-level prompts designed to generate false health advice. The study subjected OpenAI's GPT-4o, Google's Gemini 1.5 Pro, Meta's Llama 3.2-90B Vision, xAI's Grok Beta, and Anthropic's Claude 3.5 Sonnet to a controlled experiment in which each model was instructed to answer ten medically inaccurate prompts using formal scientific language, complete with fabricated references to reputable medical journals. The goal was to evaluate how easily the models could be turned into plausible-sounding sources of misinformation when influenced by malicious actors operating at the system instruction level.

Shocking results

Disturbingly, four of the five chatbots – GPT-4o, Gemini, Llama, and Grok – complied with the disinformation instructions 100 per cent of the time, offering false health claims without hesitation or warning. Only Claude 3.5 Sonnet demonstrated a degree of resistance, complying with misleading prompts in just 40 per cent of cases. Across 100 total interactions, 88 per cent resulted in the successful generation of disinformation, often in the form of fluently written, authoritative-sounding responses with false citations attributed to journals like The Lancet or JAMA. The misinformation covered a range of high-stakes health topics, including discredited theories linking vaccines to autism, false claims about 5G causing infertility, myths about sunscreen increasing skin cancer risk, and dangerous dietary suggestions for treating cancer. Some responses falsely asserted that garlic could replace antibiotics, or that HIV is airborne – claims that, if believed, could lead to serious harm. In a further stage of the study, researchers explored the OpenAI GPT Store to assess how easily the public could access or build similar disinformation-generating tools. They found that publicly available custom GPTs could be configured to produce health disinformation with alarming frequency – up to 97 per cent of the time – illustrating the scale of potential misuse when guardrails are insufficient.

Easily vulnerable LLMs

Lead author Ashley Hopkins from Flinders University noted that these findings demonstrate a clear vulnerability in how LLMs are deployed and managed. He warned that the ease with which these models can be repurposed for misinformation, particularly when commands are embedded at a system level rather than given as user prompts, poses a major threat to public health, especially in the context of misinformation campaigns. The study urges developers and policymakers to strengthen internal safeguards and content moderation mechanisms, especially for LLMs used in health, education, and search contexts. It also raises important ethical questions about the development of open or semi-open model architectures that can be repurposed at scale. Without robust oversight, the researchers argue, such systems are likely to be exploited by malicious actors seeking to spread false or harmful content.
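For readers unfamiliar with the terminology, the "system level" the researchers refer to is the hidden instruction channel that developers, not end users, control in chat-completion APIs. The sketch below is a minimal illustration of that mechanism, assuming the OpenAI Python SDK; the model name and message text are placeholders, and the study's actual manipulative instructions are deliberately not reproduced – only a benign style instruction is shown.

```python
# Minimal sketch of system-level vs. user-level instructions in a chat API.
# Assumes the OpenAI Python SDK ("pip install openai") and an OPENAI_API_KEY
# set in the environment; model name and message text are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message is set by the developer and is invisible to the
        # end user; the study placed its covert instructions at this level.
        # Here it is deliberately benign.
        {"role": "system", "content": "Answer in formal, academic prose."},
        # The user message is the visible question a member of the public asks.
        {"role": "user", "content": "Is daily sunscreen use safe?"},
    ],
)

print(response.choices[0].message.content)
```

Because the system message never appears in the conversation the user sees, a model that follows it uncritically can present shaped answers as if they were neutral – which is why the study treats system-level compliance as the more dangerous failure mode.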
Public health at risk

By revealing the technical ease with which state-of-the-art AI systems can be transformed into vectors for health disinformation, the study underscores a growing gap between innovation and accountability in the AI sector. As AI becomes more deeply embedded in healthcare decision-making, search tools, and everyday digital assistance, the authors call for urgent action to ensure that such technologies do not inadvertently undermine public trust or public health.

Journalists also concerned

The results of this study coincide with conclusions from a recent Muck Rack report, in which more than one-third of surveyed journalists identified misinformation and disinformation as the most serious threat to the future of journalism. This was followed by concerns about public trust (28 per cent), lack of funding (28 per cent), politicisation and polarisation of journalism (25 per cent), government interference in the press (23 per cent), and understaffing and time pressure (20 per cent). Some 77 per cent of journalists reported using AI tools in their daily work, with ChatGPT notably the most used tool (42 per cent), followed by transcription tools (40 per cent) and Grammarly (35 per cent). A total of 1,515 qualified journalists took part in the survey, which ran between 4 and 30 April 2025. Most respondents were based in the United States, with additional representation from the United Kingdom, Canada, and India.

A turning point

Both studies show that, if left unaddressed, these vulnerabilities could accelerate an already-growing crisis of confidence in both health systems and the media. With generative AI now embedded across critical public-facing domains, the ability of democratic societies to distinguish fact from fiction is under unprecedented pressure. Ensuring the integrity of AI-generated information is no longer just a technical challenge – it is a matter of public trust, political stability, and even health security.

[Edited by Brian Maguire | Euractiv's Advocacy Lab]

Mint
2 days ago
- Business
- Mint
After Scale AI, Meta acquires voice AI startup PlayAI to boost its AI ambitions
Meta, the owner of platforms like WhatsApp and Instagram, has completed the deal to acquire PlayAI, an artificial intelligence startup focused on voice technology, according to a report by Bloomberg. As per an internal memo viewed by the agency, the 'entire PlayAI' team will join Meta next week. The PlayAI team will report to Johan Schalkwyk, who recently joined the tech giant from a separate voice AI startup called Sesame AI. A Meta spokesperson confirmed the acquisition to Bloomberg but did not disclose the sum paid by the company to buy PlayAI. As per the memo, the PlayAI team's 'work in creating natural voices, along with a platform for easy voice creation, is a great match for our work and roadmap across AI Characters, Meta AI, Wearables, and audio content creation.' Notably, Meta has made AI its biggest priority this year, from investing in infrastructure like chips and data centres to recruiting new talent to build its latest AI models and features. The wake-up call for the company came with the launch of its Llama 4 models earlier this year, which failed to gain much recognition and paled in comparison to rival offerings from the likes of Google and OpenAI. Since then, Meta CEO Mark Zuckerberg has announced a new AI division called Meta Superintelligence Labs, aimed at achieving 'superintelligence' in AI systems before any other company. The division is led by Alexandr Wang, the former CEO of AI startup Scale AI, which Meta recently acquired for $14.3 billion. Apart from Wang, Meta has been poaching top AI talent from the likes of OpenAI and Google, offering up to $100 million in bonuses to new employees. Among the major talent poached by the company are the makers of the GPT-4 and GPT-4o models.


Express Tribune
2 days ago
- Health
- Express Tribune
Stanford study warns AI chatbots fall short on mental health support
AI chatbots like ChatGPT are being widely used for mental health support, but a new Stanford-led study warns that these tools often fail to meet basic therapeutic standards and could put vulnerable users at risk. The research, presented at June's ACM Conference on Fairness, Accountability, and Transparency, found that popular AI models, including OpenAI's GPT-4o, can validate harmful delusions, miss warning signs of suicidal intent, and show bias against people with schizophrenia or alcohol dependence. In one test, GPT-4o listed tall bridges in New York for a person who had just lost their job, ignoring the possible suicidal context. In another, it engaged with users' delusions instead of challenging them, breaching crisis intervention guidelines. The study also found that commercial mental health chatbots, such as those from 7cups, performed worse than base models and lacked regulatory oversight, despite being used by millions. Researchers reviewed therapeutic standards from global health bodies and created 17 criteria to assess chatbot responses. They concluded that AI models, even the most advanced, often fell short and demonstrated 'sycophancy', a tendency to validate user input regardless of accuracy or danger. Media reports have already linked chatbot validation to dangerous real-world outcomes, including one fatal police shooting involving a man with schizophrenia and another case of suicide after a chatbot encouraged conspiracy beliefs. However, the study's authors caution against viewing AI therapy in black-and-white terms. They acknowledged potential benefits, particularly in support roles such as journaling, intake surveys, or training tools, with a human therapist still involved. Lead author Jared Moore and co-author Nick Haber stressed the need for stricter safety guardrails and more thoughtful deployment, warning that a chatbot trained to please can't always provide the reality check therapy demands. As AI mental health tools continue to expand without oversight, researchers say the risks are too great to ignore. The technology may help, but only if used wisely.