
Latest news with #AIrisks

Lack of Responsible AI Safeguards Coming Back To Bite, Survey Suggests

Forbes

3 days ago

  • Business
  • Forbes

Everyone is talking about 'responsible AI,' but few are doing anything about it. A new survey shows that most executives see the logic and benefits of pursuing a responsible AI approach, but little has been done to make it happen. As a result, many have already experienced issues such as privacy violations, systemic failures, inaccurate predictions, and ethical violations.

While 78% of companies see responsible AI as a business growth driver, a meager 2% have adequate controls in place to safeguard against reputational risk and financial loss stemming from AI, according to the survey of 1,500 AI decision-makers published by Infosys Knowledge Institute, the research arm of Infosys. The survey was conducted in March and April of this year.

So what exactly constitutes responsible AI? The survey report's authors outlined several elements essential to responsible AI, starting with explainability, 'a big part of gaining trust in AI systems.' Technically, explainability involves techniques that explain a single prediction by showing the features that mattered most for a specific result, as well as counterfactual analysis that 'identifies the smallest input changes needed to change a model outcome.' Another technique, chain-of-thought reasoning, 'breaks down tasks into intermediate reasoning stages, making the process transparent.' Other processes essential to attaining responsible AI include continuous monitoring, anomaly detection, rigorous testing and validation, robust access controls, adherence to ethical guidelines, human oversight, and data quality and integrity measures.

Most do not yet use these techniques, the survey's authors found. Only 4% have implemented at least five of the above measures, and 83% deliver responsible AI in a piecemeal manner. On average, executives believe they are underinvesting in responsible AI by at least 30%.

There's an urgency to adopting more responsible AI measures. Nearly all of the survey's respondents, 95%, report having had AI-related incidents in the past two years. At least 77% reported financial loss as a result of AI-related incidents, and 53% suffered reputational impact from them. Three-quarters cited damage considered at least 'substantial,' with 39% calling it 'severe' or 'extremely severe.' AI errors 'can inflict damage faster and more widely than a simple database error or a rogue employee,' the authors pointed out. Those leading the way with responsible AI have seen 39% lower financial losses and 18% lower average severity from their AI incidents.

The executives with more advanced responsible AI initiatives take measures such as developing improved AI explainability, proactively evaluating and mitigating bias, rigorously testing and validating AI initiatives, and having a clear incident response plan, the survey report's authors stated.
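To make the counterfactual-analysis idea concrete, here is a minimal sketch of the technique the report describes: searching for the smallest input change that flips a model's decision. The toy credit model, its weights, and the feature names are all invented for illustration; a real deployment would run this against a trained model, likely through a dedicated explainability library.

```python
# Hypothetical sketch of counterfactual analysis: find the smallest change
# to one input feature that flips a model's decision. The model, weights,
# and features below are invented for illustration only.

def score(applicant: dict) -> float:
    """Toy credit model: a weighted sum of (made-up) features."""
    weights = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
    return sum(weights[k] * applicant[k] for k in weights)

def counterfactuals(applicant: dict, threshold: float = 0.5, step: float = 0.05):
    """Greedy search: nudge one feature at a time (in either direction)
    until the decision flips, yielding the feature and the change that did it."""
    baseline = score(applicant) >= threshold
    for feature in applicant:
        for direction in (+1, -1):
            changed = dict(applicant)
            for n in range(1, 101):  # cap the search at 100 steps
                changed[feature] = applicant[feature] + direction * step * n
                if (score(changed) >= threshold) != baseline:
                    yield feature, changed[feature] - applicant[feature]
                    break

applicant = {"income": 0.55, "debt_ratio": 0.40, "years_employed": 0.30}
print("approved" if score(applicant) >= 0.5 else f"declined (score={score(applicant):.2f})")
for feature, delta in counterfactuals(applicant):
    print(f"decision flips if {feature} changes by {delta:+.2f}")
```

Run on the sample applicant, the sketch reports, for each feature, the smallest nudge that would flip the declined decision to approved, which is exactly the kind of 'smallest input changes' explanation the report credits with building trust in AI systems.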

China advocates for global AI group

Tahawul Tech

30-07-2025

  • Business
  • Tahawul Tech

To better mitigate the growing risks related to artificial intelligence and its concentration in a select few countries, Chinese Premier Li Qiang is urging leaders to form a global AI group. At an AI conference in Shanghai, Li emphasised the need for international exchanges on the technology, with China to lead the global initiative, Bloomberg reported.

The comments come in the wake of the US unveiling its own AI action plan designed to ensure the nation dominates AI in the future. The US plan is focused on cutting regulations, including streamlining permitting processes for data centres, chip manufacturing and energy infrastructure. It also aims to make US hardware and software the global standard for AI.

The Premier argued key resources and capabilities are currently concentrated in a few countries and companies, adding 'if we engage in technological monopoly, controls and restrictions, AI will become an exclusive game for a small number of countries and enterprises,' Bloomberg wrote. He also acknowledged that a shortage of AI chips is a major bottleneck for China, the news agency noted.

Source: Mobile World Live

OpenAI's ChatGPT Data Retention Policy Explained: Is Your Data at Risk?

Geeky Gadgets

25-06-2025

  • Business
  • Geeky Gadgets

What if the tool you rely on to streamline your work or spark creativity was quietly turning into a data liability? Recent revelations about OpenAI's ChatGPT have sparked a storm of controversy, with a leaked strategy document exposing plans to transform the AI into a deeply personalized 'super assistant.' While this vision promises unprecedented convenience, it comes at a cost: your privacy and data security. Compounding the issue, a federal court order now mandates OpenAI to retain all ChatGPT conversations indefinitely, including sensitive or deleted content. For businesses and individuals alike, this raises unsettling questions about data ownership, compliance, and the risks of entrusting proprietary information to AI systems.

Goda Go dives into the tangled web of privacy risks, legal challenges, and ethical dilemmas surrounding ChatGPT's evolution. From the implications of retaining sensitive data to the looming copyright battle with The New York Times, the stakes are higher than ever. You'll uncover how OpenAI's ambitions could reshape the way we interact with AI, and why it's critical to rethink how we use these tools. As the line between innovation and intrusion blurs, the question remains: can we truly trust AI to safeguard what matters most?

Privacy Risks and Legal Challenges

The court order requires OpenAI to preserve all ChatGPT interactions, including deleted and temporary chats. This directive directly conflicts with OpenAI's stated privacy policies and global regulations such as the General Data Protection Regulation (GDPR). For businesses, this creates significant risks: sensitive data entered into ChatGPT, such as financial records, proprietary strategies, or personal information, could potentially become accessible to legal authorities or third parties.

The lawsuit filed by The New York Times adds another layer of complexity. It alleges that ChatGPT may reproduce copyrighted material verbatim, necessitating the retention of chat histories to investigate potential copyright infringements. This legal battle highlights the growing tension between AI's capabilities and intellectual property rights, raising critical questions about how AI systems are trained and deployed. These developments underscore the need for businesses to carefully evaluate how they use AI tools like ChatGPT, particularly when handling sensitive or proprietary information.

OpenAI's Vision for a 'Super Assistant'

Leaked strategy documents from OpenAI outline an ambitious plan to evolve ChatGPT into a 'super assistant' capable of delivering deeply personalized user interactions. This envisioned assistant would integrate seamlessly across platforms, potentially replacing traditional tools and even some human interactions. While this vision promises enhanced convenience and efficiency, it also raises significant concerns about data ownership, privacy, and security. To achieve this level of personalization, the system would need to collect and analyze vast amounts of user data, increasing the risk of exposing sensitive information or creating vulnerabilities for misuse. The prospect of a highly integrated AI assistant highlights the urgent need for robust data protection measures and transparent policies to safeguard user information. Without these safeguards, the potential benefits of a 'super assistant' could be overshadowed by the risks it introduces.
Reliability and the Risk of Errors

AI reliability remains a pressing issue, as demonstrated by real-world examples of decision-making errors. For instance, AI systems have misclassified healthcare contracts, leading to disruptions in critical services for veterans. Such incidents reveal the limitations of current AI technologies in managing complex tasks and large datasets with precision. These errors emphasize the risks of over-relying on AI in high-stakes environments such as healthcare, finance, and legal services. While AI tools can enhance efficiency and streamline operations, businesses must carefully weigh their benefits against the potential for costly mistakes. Ensuring that AI systems are used responsibly and with appropriate oversight is essential to minimizing these risks.

Implications for Businesses

The risks associated with using ChatGPT extend beyond privacy concerns to include compliance challenges, particularly for industries with strict regulatory requirements like healthcare and finance. Sensitive customer information, financial data, and proprietary strategies entered into ChatGPT could be exposed or misused, with severe consequences. To mitigate these risks, businesses should reassess their use of AI tools. Unless enterprise-level solutions with zero-data-retention agreements are in place, organizations should avoid inputting sensitive data into ChatGPT. Failure to do so could result in regulatory penalties, reputational damage, and financial losses. Businesses must also stay informed about evolving regulations and legal precedents that could affect their use of AI technologies.

Exploring Safer AI Alternatives

For businesses seeking more secure AI solutions, several alternatives offer enhanced privacy protections:

  • Claude AI by Anthropic: designed with advanced security features, making it suitable for handling sensitive data.
  • Google Vertex AI: a robust platform with built-in compliance tools tailored for regulated industries.
  • Open source models like Llama and Mistral: these allow deployment on local infrastructure, giving businesses greater control over their data.
  • Hybrid AI systems: combining cloud-based APIs with local models, this approach balances AI capabilities with strict data control.

These alternatives let organizations continue to benefit from AI technologies while maintaining higher levels of data security and compliance.

Actionable Steps for Businesses

To navigate the evolving AI landscape and safeguard sensitive information, businesses should take the following steps:

  • Stop inputting sensitive data into ChatGPT and similar AI tools.
  • Conduct thorough risk assessments to identify potential vulnerabilities in AI usage.
  • Inform stakeholders about the data exposure risks associated with AI tools.
  • Explore alternative AI solutions with strong data protection policies.
  • Implement local AI models for handling proprietary or sensitive information (a minimal routing sketch follows this list).

By adopting these measures, organizations can reduce risks while continuing to benefit from AI technologies.
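To make the hybrid approach concrete, here is a minimal sketch of how prompts might be routed between a local model and a cloud API based on whether they appear to contain sensitive data. Everything here is a hypothetical placeholder: the detection patterns are crude, and run_local_model and call_cloud_api stand in for a real local inference stack and a real cloud client; a production system would use proper PII detection.

```python
import re

# Hypothetical sketch of a hybrid AI setup: prompts that look sensitive stay
# on a local model; everything else may go to a cloud API. The patterns and
# the two model functions below are illustrative placeholders only.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-like numbers
    re.compile(r"\b\d{13,19}\b"),                    # long digit runs (card/account numbers)
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses
    re.compile(r"(?i)\b(confidential|proprietary|do not distribute)\b"),
]

def looks_sensitive(prompt: str) -> bool:
    """Crude screen for data that should never leave local infrastructure."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def run_local_model(prompt: str) -> str:
    # Placeholder: call a locally hosted open-source model (e.g. a Llama or
    # Mistral variant) served on the organization's own hardware.
    return f"[local model] answer to: {prompt[:40]}..."

def call_cloud_api(prompt: str) -> str:
    # Placeholder: call a cloud LLM API, acceptable only for non-sensitive prompts.
    return f"[cloud API] answer to: {prompt[:40]}..."

def route(prompt: str) -> str:
    """Send each prompt to the tier that matches its sensitivity."""
    return run_local_model(prompt) if looks_sensitive(prompt) else call_cloud_api(prompt)

print(route("Summarize our confidential Q3 acquisition strategy."))  # stays local
print(route("Write a short blog post about summer travel."))         # may go to cloud
```

The design choice is deliberately conservative: anything the screen flags stays on local infrastructure, so false positives cost only some quality or latency, while false negatives are what the pattern list should be tuned to avoid.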
Proactively addressing these challenges will enable businesses to harness the potential of AI while protecting their most valuable assets.

Preparing for the Future

The court order requiring OpenAI to retain ChatGPT conversations could set a precedent for future legal actions against AI companies. As AI technologies advance, businesses must prioritize data ownership, privacy, and compliance to mitigate risks. Adopting safer AI alternatives and implementing robust data management practices will be critical for organizations aiming to protect their sensitive information. The rapidly evolving regulatory and technological landscape demands vigilance and adaptability. As AI becomes increasingly integrated into daily operations, businesses must remain proactive in addressing its challenges and opportunities. By doing so, they can harness the potential of AI while safeguarding privacy and compliance in an ever-changing environment.

Media Credit: Goda Go

Make the Robot Your Colleague, Not Overlord

Bloomberg

18-06-2025

  • Science
  • Bloomberg

There's the Terminator school of perceiving artificial intelligence risks, in which we'll all be killed by our robot overlords. And then there's one where the machines serve as valued colleagues, if not friends exactly. A Japanese tech researcher argues that the global approach to AI safety hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, two successive warnings about existential threats from powerful AI tools came out of Silicon Valley. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage the risks. Then hundreds of AI leaders, including Sam Altman of OpenAI and Demis Hassabis of Alphabet Inc.'s DeepMind, sent shockwaves with a statement that warned: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.'

Top AI Researchers Meet to Discuss What Comes After Humanity

Yahoo

16-06-2025

  • Business
  • Yahoo

A group of the top minds in AI gathered over the weekend to discuss the "posthuman transition," a mind-bending exercise in imagining a future in which humanity willfully hands over power, or perhaps bequeaths existence entirely, to some sort of superhuman intelligence. As Wired reports, the lavish party was organized by generative AI entrepreneur Daniel Faggella. Attendees included "AI founders from $100 million to $5 billion valuations" and "most of the important philosophical thinkers on AGI," Faggella enthused in a LinkedIn post. He organized the soirée at a $30 million mansion in San Francisco because the "big labs, the people that know that AGI is likely to end humanity, don't talk about it because the incentives don't permit it," Faggella told Wired.

The symposium allowed attendees and speakers alike to steep themselves in a largely fantastical vision of a future where artificial general intelligence (AGI) was a given, rather than some distant dream of tech that isn't even close to existing. AI companies, most notably OpenAI, have talked at length about wanting to realize AGI, though often without clearly defining the term. The risks of racing toward a superhuman intelligence remain hotly debated, with billionaire Elon Musk once arguing that unregulated AI could be the "biggest risk we face as a civilization." OpenAI's Sam Altman has also warned of dangers facing humanity, including increased inequality and population control through mass surveillance, as a result of realizing AGI, which also happens to be his firm's number one priority.

But for now, those are largely moot points made by individuals who are billions of dollars deep in reassuring investors that AGI is mere years away. Given the current state of wildly hallucinating large language models that still fail at the most basic tasks, we are seemingly still a long way from a point at which AI could surpass the intellectual capabilities of humans. Just last week, researchers at Apple released a damning paper that threw cold water on the "reasoning" capabilities of the latest and most powerful LLMs, arguing they "face a complete accuracy collapse beyond certain complexities."

However, to insiders and believers in the tech, AGI is mostly a matter of when, not if. Speakers at this weekend's event talked about how AI could seek out deeper, universal values that humanity hasn't even been privy to, and argued that machines should be taught to pursue "the good," or risk enslaving an entity capable of suffering. As Wired reports, Faggella similarly invoked philosophers including Baruch Spinoza and Friedrich Nietzsche, calling on humanity to seek out the yet-undiscovered value in the universe. "This is an advocacy group for the slowing down of AI progress, if anything, to make sure we're going in the right direction," he told the publication.

More on AGI: OpenAI's Top Scientist Wanted to "Build a Bunker Before We Release AGI"
