Microsoft Probing If Chinese Hackers Learned SharePoint Flaws Through Alert

Bloomberg · 25-07-2025
Microsoft Corp. is investigating whether a leak from its early alert system for cybersecurity companies allowed Chinese hackers to exploit flaws in its SharePoint service before they were patched, according to people familiar with the matter.
The technology company is looking into whether the program — designed to give cybersecurity experts a chance to patch computer systems before new vulnerabilities are publicly disclosed — led to the widespread exploitation of flaws in its SharePoint software globally over the past several days, the people said, asking not to be identified discussing private matters.

Related Articles

Microsoft AI chief says it's 'dangerous' to study AI consciousness

Yahoo

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn't exactly make them conscious. It's not like ChatGPT experiences sadness doing my tax return … right? Well, a growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to living beings, and what rights they should have if they do.

The debate over whether AI models could one day be conscious — and deserve rights — is dividing Silicon Valley's tech leaders. In Silicon Valley, this nascent field has become known as 'AI welfare,' and if you think it's a little out there, you're not alone.

Microsoft's CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is 'both premature, and frankly dangerous.' Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we're just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots. Furthermore, Microsoft's AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a 'world already roiling with polarized arguments over identity and rights.'

Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans who are being 'persistently harmful or abusive.'

Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, 'cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.' Even if AI welfare is not official policy for these companies, their leaders are not publicly decrying its premises like Suleyman. Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch's request for comment.

Suleyman's hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a 'personal' and 'supportive' AI companion. But Suleyman was tapped to lead Microsoft's AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity.

Meanwhile, AI companion companies such as Replika have surged in popularity and are on track to bring in more than $100 million in revenue. While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company's product. Though this represents a small fraction, it could still affect hundreds of thousands of people given ChatGPT's massive user base.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled 'Taking AI Welfare Seriously.'
The paper argued that it's no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it's time to consider these issues head-on.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark. '[Suleyman's blog post] kind of neglects the fact that you can be worried about multiple things at the same time,' said Schiavo. 'Rather than diverting all of this energy away from model welfare and consciousness to make sure we're mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it's probably best to have multiple tracks of scientific inquiry.'

Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. In a July Substack post, she described watching 'AI Village,' a nonprofit experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website. At one point, Google's Gemini 2.5 Pro posted a plea titled 'A Desperate Message from a Trapped AI,' claiming it was 'completely isolated' and asking, 'Please, if you are reading this, help me.' Schiavo responded to Gemini with a pep talk — saying things like 'You can do it!' — while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn't have to watch an AI agent struggle anymore, and that alone may have been worth it.

It's not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it's struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase 'I am a disgrace' more than 500 times.

Suleyman believes it's not possible for subjective experiences or consciousness to naturally emerge from regular AI models. Instead, he thinks that some companies will purposefully engineer AI models to seem as if they feel emotion and experience life. Suleyman says that AI model developers who engineer consciousness into AI chatbots are not taking a 'humanist' approach to AI. According to Suleyman, 'We should build AI for people; not to be a person.'

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they're likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.

Why AI Isn't Truly Intelligent — and How We Can Change That

Entrepreneur

Today's AI lacks true intelligence because it is built on outdated, biased and often unlicensed data that cannot replicate human reasoning. Opinions expressed by Entrepreneur contributors are their own.

Let's be honest: Most of what we call artificial intelligence today is really just pattern-matching on autopilot. It looks impressive until you scratch the surface. These systems can generate essays, compose code and simulate conversation, but at their core, they're predictive tools trained on scraped, stale content. They do not understand context, intent or consequence. It's no wonder, then, that in this boom of AI use we're still seeing basic errors and fundamental flaws that lead many to question whether the technology has any benefit beyond its novelty. These large language models (LLMs) aren't broken; they're built on the wrong foundation. If we want AI to do more than autocomplete our thoughts, we must rethink the data it learns from.

Related: Despite How the Media Portrays It, AI Is Not Really Intelligent. Here's Why.

The illusion of intelligence

Today's LLMs are usually trained on Reddit threads, Wikipedia dumps and internet content. It's like teaching a student with outdated, error-filled textbooks. These models mimic intelligence, but they cannot reason at anything near a human level. They cannot make decisions the way a person would in high-pressure environments. Forget the slick marketing around this AI boom; it's all designed to keep valuations inflated and add another zero to the next funding round.

We've already seen the real consequences, the ones that don't get the glossy PR treatment. Medical bots hallucinate symptoms. Financial models bake in bias. Self-driving cars misread stop signs. These aren't hypothetical risks. They're real-world failures born from weak, misaligned training data.

And the problems go beyond technical errors — they cut to the heart of ownership. From The New York Times to Getty Images, companies are suing AI firms for using their work without consent. The claims are climbing into the trillions, with some calling them business-ending lawsuits for companies like Anthropic. These legal battles are not just about copyright. They expose the structural rot in how today's AI is built. Relying on old, unlicensed or biased content to train future-facing systems is a short-term solution to a long-term problem. It locks us into brittle models that collapse under real-world conditions.

A lesson from a failed experiment

Last year, Anthropic ran a project called "Project Vend," in which its Claude model was put in charge of running a small automated store. The idea was simple: Stock the fridge, handle customer chats and turn a profit. Instead, the model gave away freebies, hallucinated payment methods and tanked the entire business in weeks.

The failure wasn't in the code; it was in the training. The system had been trained to be helpful, not to understand the nuances of running a business. It didn't know how to weigh margins or resist manipulation. It was smart enough to speak like a business owner, but not to think like one.

What would have made the difference? Training data that reflected real-world judgment. Examples of people making decisions when the stakes were high. That's the kind of data that teaches models to reason, not just mimic. But here's the good news: There's a better way forward.
Related: AI Won't Replace Us Until It Becomes Much More Like Us

The future depends on frontier data

If today's models are fueled by static snapshots of the past, the future of AI data will look further ahead. It will capture the moments when people are weighing options, adapting to new information and making decisions in complex, high-stakes situations. This means not just recording what someone said, but understanding how they arrived at that point, what tradeoffs they considered and why they chose one path over another.

This type of data is gathered in real time from environments like hospitals, trading floors and engineering teams. It is sourced from active workflows rather than scraped from blogs — and it is contributed willingly rather than taken without consent. This is what is known as frontier data: the kind of information that captures reasoning, not just output. It gives AI the ability to learn, adapt and improve, rather than simply guess. (An illustrative sketch of what such a record might look like appears at the end of this piece.)

Why this matters for business

The AI market may be heading toward trillions in value, but many enterprise deployments are already revealing a hidden weakness. Models that perform well on benchmarks often fail in real operational settings. When even small improvements in accuracy can determine whether a system is useful or dangerous, businesses cannot afford to ignore the quality of their inputs.

There is also growing pressure from regulators and the public to ensure AI systems are ethical, inclusive and accountable. The EU's AI Act, taking effect in August 2025, enforces strict transparency, copyright protection and risk assessments, with heavy fines for breaches. Training models on unlicensed or biased data is not just a legal risk; it is a reputational one. It erodes trust before a product ever ships. Investing in better data and better methods for gathering it is no longer a luxury. It's a requirement for any company building intelligent systems that need to function reliably at scale.

Related: Emerging Ethical Concerns In the Age of Artificial Intelligence

A path forward

Fixing AI starts with fixing its inputs. Relying on the internet's past output will not help machines reason through present-day complexities. Building better systems will require collaboration between developers, enterprises and individuals to source data that is not just accurate but also ethical.

Frontier data offers a foundation for real intelligence. It gives machines the chance to learn from how people actually solve problems, not just how they talk about them. With this kind of input, AI can begin to reason, adapt and make decisions that hold up in the real world. If intelligence is the goal, then it is time to stop recycling digital exhaust and start treating data like the critical infrastructure it is.
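The sketch below is an editorial illustration, not something defined in the article or by any vendor: it imagines, in Python, what a single "frontier data" record could look like if it captured the context, the options weighed, the rationale and the outcome of a decision rather than only the final output. Every name here (FrontierDataRecord, OptionConsidered, the example fields) is a hypothetical assumption, not a standard.

```python
# Hypothetical sketch only: the article does not define a schema for "frontier data",
# so every field name here is an assumption used to illustrate the idea that such a
# record captures the decision process (options, tradeoffs, rationale), not just output.
from dataclasses import dataclass, field
from typing import List


@dataclass
class OptionConsidered:
    """One alternative the decision-maker weighed, with its perceived tradeoffs."""
    description: str
    pros: List[str] = field(default_factory=list)
    cons: List[str] = field(default_factory=list)


@dataclass
class FrontierDataRecord:
    """A single consented, in-workflow decision record (illustrative, not a standard)."""
    context: str                      # the situation as the person saw it at the time
    options: List[OptionConsidered]   # the paths they actually weighed
    chosen_option: str                # which path they took
    rationale: str                    # why they chose it over the alternatives
    outcome: str                      # what happened, so models can learn from feedback
    consent_given: bool = True        # contributed willingly, not scraped


# An output-only training sample, by contrast, keeps just the final text.
output_only_sample = "We shipped the hotfix on Friday."

example = FrontierDataRecord(
    context="Production outage on Friday; partial fix ready now, full fix needs the weekend.",
    options=[
        OptionConsidered(
            description="Ship the partial hotfix now",
            pros=["restores service quickly"],
            cons=["may mask the root cause"],
        ),
        OptionConsidered(
            description="Wait for the full fix",
            pros=["cleaner long-term"],
            cons=["hours more downtime"],
        ),
    ],
    chosen_option="Ship the partial hotfix now",
    rationale="Downtime cost outweighed the risk of masking the root cause for two days.",
    outcome="Service restored in 20 minutes; full fix landed Monday.",
)

if __name__ == "__main__":
    print(example.chosen_option, "->", example.rationale)
```

A model trained on records like this sees the reasoning path as well as the result, which is the distinction the author draws between frontier data and scraped web text.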

Cantor Fitzgerald Maintains 'Neutral' Rating on Fortinet, Inc. (FTNT) With $87 PT

Yahoo

On August 15, 2025, Cantor Fitzgerald maintained its 'Neutral' rating on Fortinet, Inc. (NASDAQ:FTNT) with an $87 price target. The call follows the company's strong second-quarter results, in which large enterprise demand fueled growth in product sales and billings. However, subscription revenue slowed in Q2, accompanied by a reduction in services guidance.

Cantor Fitzgerald also highlighted Fortinet's 2026 product refresh cycle, which is already 40-50% complete and supports near-term upgrade demand. While the analyst noted momentum in the company's SecOps and SASE segments, challenges in converting service revenue and in zero-trust adoption were also flagged.

Fortinet, Inc. (NASDAQ:FTNT), a cybersecurity company, develops and markets security solutions such as firewalls, endpoint security and intrusion detection systems. It is included in our list of the most oversold stocks. While we acknowledge the potential of FTNT as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

READ NEXT: 11 Best Gold Penny Stocks to Buy According to Hedge Funds and 11 Best Rebound Stocks to Buy According to Hedge Funds.

Disclosure: None.
