Latest news with #ElonUniversity

Yahoo
19-05-2025
- Business
- Yahoo
Analysis of AI tools: 84% breached, 51% facing credential theft
AI tools are becoming essential to modern work, but their fast, unmonitored adoption is creating a new kind of security risk. Recent surveys reveal a clear trend: employees are rapidly adopting consumer-facing AI tools without employer approval, IT oversight, or any clear security policies. According to the Cybernews Business Digital Index, 84% of analyzed AI tools have been exposed to data breaches, putting businesses at severe risk.

About 75% of workers use AI in the workplace, with AI chatbots being the most common tools for completing work-related tasks. While this boosts productivity, it could expose companies to credential theft, data leaks, and infrastructure vulnerabilities, especially since only 14% of workplaces have official AI policies. While a significant number of employees use AI tools at work, a large share of this usage remains untracked or unofficial: estimates show that around one-third of AI users keep their usage hidden from management.

Personal accounts are used uncontrollably for work tasks

According to Google's 2024 survey of over 1,000 U.S.-based knowledge workers, 93% of Gen Z employees aged 22–27 use two or more AI tools at work. Millennials aren't far behind, with 79% reporting similar usage patterns. These tools are used to draft emails, take meeting notes, and bridge communication gaps. Additionally, a 2025 Elon University survey found that 58% of AI users regularly rely on two or more different models, while data from Harmonic indicates that 45.4% of sensitive data prompts are submitted using personal accounts, completely bypassing company monitoring systems.

'Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance,' says Emanuelis Norbutas, Chief Technical Officer at a secure AI orchestration platform for businesses. 'Without clear oversight, enforcing policies, monitoring usage, and ensuring compliance becomes nearly impossible.'

Most popular AI tools struggle with cybersecurity

To better understand how these tools perform behind the scenes, Cybernews researchers analyzed 52 of the most popular AI web tools in February 2025, ranked by total monthly website visits based on Semrush traffic data. Using only publicly available information, the Business Digital Index applies custom scans, IoT search engines, and IP and domain name reputation databases to assess companies' online security posture.

The findings paint a concerning picture. Widely used AI platforms and tools show uneven and often poor cybersecurity performance. Researchers found major gaps despite an average cybersecurity score of 85 out of 100: while 33% of platforms earned an A rating, 41% received a D or even an F, revealing a deep divide between the best and worst performers.

'What is most concerning is the false sense of security many users and businesses may have,' says Vincentas Baubonis, Head of Security Research at Cybernews. 'High average scores don't mean tools are entirely safe – one weak link in your workflow can become the attacker's entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information, or even deploy ransomware, causing operational and reputational damage.'
84% of AI tools analyzed have suffered data breaches

Out of the 52 AI tools analyzed, 84% had experienced at least one data breach. Data breaches often result from persistent weaknesses like poor infrastructure management, unpatched systems, and weak user permissions. Even more alarming, 36% of the analyzed tools experienced a breach in just the past 30 days.

Alongside breaches, 93% of platforms showed issues with SSL/TLS configurations, which are critical for encrypting communication between users and tools. Misconfigured SSL/TLS encryption weakens the protection of data sent between users and platforms, making it easier for attackers to intercept or manipulate sensitive information. System hosting vulnerabilities were another widespread concern, with 91% of platforms exhibiting flaws in their infrastructure management. These issues are often linked to weak cloud configurations or outdated server setups that expand the attack surface.

Password reuse and credential theft

44% of companies developing AI tools showed signs of employee password reuse – a significant enabler of credential-stuffing attacks, where hackers exploit recycled login details to access systems undetected. In total, 51% of analyzed tools have had corporate credentials stolen, reinforcing the need for stronger password policies and IT oversight, especially as AI tools become routine in the workplace. Credential theft is often a precursor to a data breach, as stolen credentials can be used to access sensitive data.

'Many AI tools simply aren't built with enterprise-grade security in mind. Employees often assume these tools are safe by default, yet many have already been compromised, with corporate credentials among the first targets,' says Norbutas. 'When passwords are reused or stored insecurely, it gives attackers a direct line into company systems. Businesses must treat every AI integration as a potential entry point and secure it accordingly.'

Productivity tools show weakest cybersecurity

Productivity tools, commonly used for note-taking, scheduling, content generation, and work-related collaboration, emerged as the most vulnerable category, with vulnerabilities across all key technical domains, particularly infrastructure, data handling, and web security. According to Business Digital Index analysis, this category had the highest average number of stolen corporate credentials per company (1,332), 92% of its tools had experienced a data breach, and every tool in the category had system hosting and SSL/TLS configuration issues.

'This is a classic Achilles' heel scenario,' says cybersecurity expert Baubonis. 'A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything. Hugging Face is a perfect example of that risk – it only takes one blind spot to undermine months of security planning and expose the organization to threats it never anticipated.'

Research Methodology

Cybernews researchers examined 52 of the 60 most popular AI tools in February 2025, ranked by total monthly website visits based on Semrush traffic data; the remaining tools could not be scanned due to domain limitations. The report evaluates cybersecurity risk across seven key dimensions: software patching, web application security, email protection, system reputation, hosting infrastructure, SSL/TLS configuration, and data breach history. The report's full methodology provides detailed information on how researchers conducted this analysis.
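The SSL/TLS weaknesses described above are the kind of issue that can be observed from the outside with a simple probe. As a rough illustration only – this is not the Business Digital Index tooling, and the hostname list, `check_tls` function, and thresholds here are hypothetical – a sketch like the following checks which TLS version a host negotiates and how long its certificate remains valid:

```python
# Minimal sketch (assumptions: hosts you are authorised to probe; this is NOT
# the Cybernews/Business Digital Index methodology). It reports the negotiated
# TLS version and days until certificate expiry for each hostname.
import socket
import ssl
import time

EXAMPLE_HOSTS = ["example.com"]  # placeholder list, not the tools in the study

def check_tls(hostname: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Return the negotiated TLS version and days until certificate expiry."""
    context = ssl.create_default_context()  # verifies the certificate chain
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            expires = ssl.cert_time_to_seconds(cert["notAfter"])
            return {
                "host": hostname,
                "tls_version": tls.version(),  # e.g. "TLSv1.3"
                "days_until_expiry": int((expires - time.time()) // 86400),
            }

if __name__ == "__main__":
    for host in EXAMPLE_HOSTS:
        print(check_tls(host))
```

A full assessment of the kind described in the methodology would combine checks like this with patching, email protection, reputation, and breach-history data rather than relying on a single probe.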
About Business Digital Index

The Business Digital Index (BDI) is designed to evaluate the cybersecurity health of organizations worldwide. It aims to help businesses by providing a clear, transparent, and independent assessment of their cybersecurity management, contributing to a more resilient digital future. By leveraging data from reputable sources, such as IoT search engines, IP and domain reputation databases, and custom security scans, the BDI comprehensively assesses an organization's cybersecurity strength. The index evaluates risks across seven critical areas: software updates, web security, email protection, system reputation, SSL setup, system hosting, and data breach history.
Yahoo
16-05-2025
- Business
- Yahoo
3 Reasons Why Nvidia Is Still a Must-Own Stock
- 2025 is expected to see a record data center buildout.
- AI still hasn't become mainstream at work.
- The stock only has about a year's worth of growth priced in.

Nvidia (NASDAQ: NVDA) has been one of the best-performing stocks in the market since early 2023, but some investors are starting to wonder if its incredible run is running out of steam. I'm not in that group. The economy is still in the early stages of what is shaping up to be a generational shift, and Nvidia is right at the heart of it. There are plenty of reasons investors should hang on to their Nvidia shares (or buy more), but I want to focus on three that have me convinced that Nvidia is still a must-own stock.

Nvidia's graphics processing units (GPUs) are best in class, which explains why its products are so dominant. GPUs are designed to handle tough computing tasks because they can process many calculations in parallel, which gives them a leg up in workloads like artificial intelligence (AI) model training. GPUs have also been integral to the move from on-premise computing to cloud computing, powering the data centers that make cloud computing work.

The buildout of AI and the data centers needed to run it is scaling up at a fast pace, but recent concerns about a global economic slowdown and rising uncertainty around ever-changing tariff policies have left some investors wondering what effect this will have on Nvidia. At the moment, most AI hyperscalers have confirmed plans to spend record amounts on building data centers, which is excellent news for Nvidia. Even better, Nvidia expects this increased spending to persist over multiple years. Using third-party data, Nvidia estimated that about $400 billion was spent on data center capital expenditures in 2024, and management expects that figure to rise to $1 trillion by 2028. Because of their size, data centers take years of planning to build, so growth of this magnitude over the next few years is quite plausible. Data centers have grown to become Nvidia's biggest business segment, and that segment looks set to grow even bigger going forward.

Artificial intelligence usage has certainly ramped up over the past few years, but it's nowhere near peak usage. Businesses worldwide are still working on integrating AI into their operations, which will require more computing power to support the increased workloads. A survey from North Carolina's Elon University estimates that 52% of U.S. adults have used a generative AI model at least once. Of this group, only 24% said they used AI for work activities. As for frequency of use, only 34% said they use it once daily, and 10% said they use it constantly. So there's still a lot of room for growth with generative AI tools, and Nvidia will benefit from this buildout.

Somewhere along the way, Nvidia gained a reputation for being an expensive stock. While it has sometimes had what would be considered an expensive valuation, its valuation is relatively reasonable right now. Nvidia stock trades for about 28 times forward earnings, which isn't bad considering that many of its big tech peers trade in this valuation range. Nvidia's stock might be considered relatively expensive on a trailing earnings basis (42 times earnings), but that metric doesn't give Nvidia any credit for the growth it's about to deliver.
Essentially, Nvidia has about one year's worth of growth baked into the stock price, so if it can continue growing at a good pace over the next year (and there are already monster data center growth projections out there, as discussed above), the stock will justify its current price. Nvidia is a top investment in the AI realm because of its universal usage. As long as AI continues to become more popular, Nvidia will continue to be a successful stock pick.

Keithen Drury has positions in Nvidia. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy. 3 Reasons Why Nvidia Is Still a Must-Own Stock was originally published by The Motley Fool.


Ottawa Citizen
15-05-2025
- Ottawa Citizen
What are AI hallucinations? Computer expert breaks down why it happens, how to avoid it
More internet users are starting to replace popular search engines with advanced chatbots from artificial intelligence platforms. However, the more powerful they become, the more mistakes they're making, the New York Times reported. These mistakes are referred to as hallucinations.

Hallucinations have even been at the centre of a recent case in Canada involving a lawyer accused of using AI and fake cases to make legal arguments. An Ontario Superior Court judge said the lawyer's factum, or statement of facts about the case, included what the judge believed to be 'possibly artificial intelligence hallucinations.'

As AI becomes more prevalent and is integrated into more aspects of everyday life, hallucinations are likely not going away any time soon. Here's what to know.

A report published in March by Elon University showed that more than half of Americans use large language models (LLMs) like OpenAI's chatbot ChatGPT or Google's Gemini. Two-thirds of those Americans are using LLMs as search engines, per the report. Around the world, nearly one billion people use chatbots today, according to data from marketing site Exploding Topics — with Canadians and Americans among the top users.

There's also been a surge in the number of Canadians using AI recently, new data released by Leger on Wednesday revealed. Nearly half of the Canadians surveyed (47 per cent) in March said they've used AI tools, compared to only a quarter saying the same in February 2023.

Canadians are more likely to trust AI tools when it comes to tasks around the home, answering product questions via chat, or using facial recognition for access. They are much less trusting when it comes to using AI for driverless transport, teaching children or getting help finding a life partner. Canadians were split on whether AI is good (32 per cent) or bad (35 per cent) for society.

What are AI hallucinations?

An AI hallucination is when a chatbot presents a response as true, but it is not correct.

This can occur because AI chatbots are not 'explicitly programmed,' said University of Toronto professor David Lie of the department of electrical and computer engineering in a phone interview with National Post on Tuesday. Lie is also the Canada Research Chair in Secure and Reliable Systems.

'The programmer beforehand doesn't think of every possible question and every possible response that the AI could face while you're using it,' he said. Chatbots therefore rely on inferences from their training data, and those inferences can be incorrect for a multitude of reasons: the training data may be incomplete, or the training method may lead the model to the wrong shortcuts to arrive at an answer.

He compared the way the current generation of artificial intelligence is modelled and trained to the human brain. 'The way they're trained is, you give a bunch of examples … trillions of them. And from that, it learns how to mimic, very much like how you would teach a human child,' said Lie. 'When we learn things, we often make mistakes, too, even after lots and lots of learning. We may come to the wrong conclusions about things and have to be corrected.'
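Lie's point about inference from incomplete training data can be illustrated with a deliberately tiny toy model. The sketch below is a loose analogy under stated assumptions – a two-word Markov chain over four invented sentences, not how ChatGPT, Gemini, or any production system works – but it shows how a model that only learns statistical patterns can produce fluent text that was never in its training data:

```python
# Toy illustration only: a tiny Markov-chain text generator. It is NOT how
# modern chatbots are built, but it shows how a model that learns only
# word-to-word statistics can emit fluent sentences that were never in its
# training data -- a loose analogue of an AI hallucination.
import random
from collections import defaultdict

# Deliberately small, incomplete "training data" (invented examples).
training_sentences = [
    "the court ruled in favour of the plaintiff",
    "the court ruled in favour of the defendant",
    "the judge cited a 2019 precedent",
    "the judge cited a federal statute",
]

# Learn which words follow which (bigram transitions).
transitions = defaultdict(list)
for sentence in training_sentences:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start="the", max_words=12, rng=random):
    """Walk the transition table to produce a new, plausible-sounding sentence."""
    words = [start]
    while len(words) < max_words and transitions[words[-1]]:
        words.append(rng.choice(transitions[words[-1]]))
    return " ".join(words)

if __name__ == "__main__":
    # Some outputs reproduce the training sentences; others stitch fragments
    # into claims no source contained, e.g. a ruling "in favour of the judge".
    for _ in range(5):
        print(generate())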


Yahoo
16-04-2025
- Yahoo
What Silicon Valley Knew About Tech-Bro Paternalism
Last fall, the consumer-electronics company LG announced new branding for the artificial intelligence powering many of its home appliances. Out: the 'smart home.' In: 'Affectionate Intelligence.' This 'empathetic and caring' AI, as LG describes it, is here to serve. It might switch off your appliances and dim your lights at bedtime. It might, like its sisters Alexa and Siri, select a soundtrack to soothe you to sleep. The technology awaits your summons and then, unquestioningly, answers. It will make subservience environmental. It will surround you with care—and ask for nothing in return. Affectionate AI, trading the paternalism of typical techspeak for a softer—or, to put it bluntly, more feminine—framing, is pretty transparent as a branding play: It is an act of anxiety management. It aims to assure the consumer that 'the coming Humanity-Plus-AI future,' as a recent report from Elon University called it, will be one not of threat but of promise. Yes, AI overall has the potential to become, as Elon Musk said in 2023, the 'most disruptive force in history.' It could be, as he put it in 2014, 'potentially more dangerous than nukes.' It is a force like 'an immortal dictator from which we can never escape,' he suggested in 2018. And yet, AI is coming. It is inevitable. We have, as consumers with human-level intelligence, very little choice in the matter. The people building the future are not asking for our permission; they are expecting our gratitude. It takes a very specific strain of paternalism to believe that you can create something that both eclipses humanity and serves it at the same time. The belief is ripe for satire. That might be why I've lately been thinking back to a comment posted last year to a Subreddit about HBO's satire Silicon Valley: 'It's a shame this show didn't last into the AI craze phase.' It really is! Silicon Valley premiered in 2014, a year before Musk, Sam Altman, and a group of fellow engineers founded OpenAI to ensure that, as their mission statement put it, 'artificial general intelligence benefits all of humanity.' The show ended its run in 2019, before AI's wide adoption. It would have had a field day with some of the events that have transpired since, among them Musk's rebrand as a T-shirt-clad oligarch and Altman's bot-based mimicry of the 2013 movie Her. Silicon Valley reads, at times, more as parody than as satire: Sharp as it is in its specific observations about tech culture, the show sometimes seems like a series of jokes in search of a punch line. It shines, though, when it casts its gaze on the gendered dynamics of tech—when it considers the consequential absurdities of tech's arrogance. The show doesn't spend much time directly tackling artificial intelligence as a moral problem—not until its final few episodes. But it still offers a shrewd parody of AI, as a consumer technology and as a future being foisted on us. That is because Silicon Valley is highly attuned to the way power is exchanged and distributed in the industry, and to tech bros' hubristic inclination to cast the public in a stereotypically feminine role. Corporations act; the rest of humanity reacts. They decide; we comply. They are the creators, driven by competition, conquest, and a conviction that the future is theirs to shape. We are the ones who will live with their decisions. Silicon Valley does not explicitly predict a world of AI made 'affectionate.' In a certain way, though, it does. It studies the men who make AI. It parodies their paternalism. 
The feminist philosopher Kate Manne argues that masculinity, at its extreme, is a self-ratifying form of entitlement. Silicon Valley knows that there's no greater claim to entitlement than an attempt to build the future. [Read: The rise of techno-authoritarianism] The series focuses on the evolving fortunes of the fictional start-up Pied Piper, a company with an aggressively boring product—a data-compression algorithm—and an aggressively ambitious mission. The algorithm could lead, eventually, to the realization of a long-standing dream: a decentralized internet, its data stored not on corporately owned servers but on the individual devices of the network. Richard Hendricks, Pied Piper's founder and the primary author of that algorithm, is a coder by profession but an idealist by nature. Over the seasons, he battles with billionaires who are driven by ego, pettiness, and greed. But he is not Manichean; he does not hew to Manne's sense of masculine entitlement. He merely wants to build his tech. He is surrounded, however, by characters who do fit Manne's definition, to different degrees. There's Erlich Bachman, the funder who sold an app he built for a modest profit and who regularly confuses luck with merit; Bertram Gilfoyle, the coder who has turned irony poisoning into a personality; Dinesh Chugtai, the coder who craves women's company as much as he fears it; Jared Dunn, the business manager whose competence is belied by his meekness. Even as the show pokes fun at the guys' personal failings, it elevates their efforts. Silicon Valley, throughout, is a David and Goliath story. Pied Piper is a tiny company trying to hold its own against the Googles of the world. The show, co-created by Mike Judge, can be giddily adolescent about its own bro-ness (many of its jokes refer to penises). But it is also, often, insightful about the absurdities that can arise when men are treated like gods. The show mocks the tech executive who brandishes his Buddhist prayer beads and engages in animal cruelty. It skewers Valley denizens' conspicuous consumption. (Several B plots revolve around the introduction of the early Tesla roadsters.) Most of all, the show pokes fun at the myopia displayed by men who are, in the Valley and beyond, revered as 'visionaries.' All they can see and care about are their own interests. In that sense, the titans of tech are unabashedly masculine. They are callous. They are impetuous. They are reckless. [Read: Elon Musk can't stop talking about penises] Their failings cause chaos, and Silicon Valley spends its seasons writing whiplash into its story line. The show swings, with melodramatic ease, between success and failure. Richard and his growing team—fellow engineers, investors, business managers—seem to move forward, getting a big new round of funding or good publicity. Then, as if on cue, they are brought low again: Defeats are snatched from the jaws of victory. The whiplash can make the show hard to watch. You get invested in the fate of this scrappy start-up. You hope. You feel a bit of preemptive catharsis until the next disappointment comes. That, in itself, is resonant. AI can hurtle its users along similar swings. It is a product to be marketed and a future to be accepted. It is something to be controlled (OpenAI's Altman appeared before Congress in 2023 asking for government regulation) and something that must not be contained (OpenAI this year, along with other tech giants, asked the federal government to prevent state-level regulation). 
Altman's public comments paint a picture of AI that evokes both Skynet ('I think if this technology goes wrong, it can go quite wrong,' he said at the 2023 congressional hearing) and—as he said in a 2023 interview—a 'magic intelligence in the sky.' [Read: OpenAI goes MAGA] The dissonance is part of the broader experience of tech—a field that, for the consumer, can feel less affectionate than addling. People adapted to Twitter, coming to rely on it for news and conversation; then Musk bought it, turned it into X, tweaked the algorithms, and, in the process, ruined the platform. People who have made investments in TikTok operate under the assumption that, as has happened before, it could go dark with the push of a button. To depend on technology, to trust it at all, in many instances means to be betrayed by it. And AI makes that vulnerability ever more consequential. Humans are at risk, always, of the machines' swaggering entitlements. Siri and Alexa and their fellow feminized bots are flourishes of marketing. They perform meekness and cheer—and they are roughly as capable of becoming an 'immortal dictator' as their male-coded counterparts. By the end of Silicon Valley's run, Pied Piper seems poised for an epic victory. The company has a deal with AT&T to run its algorithm over the larger company's massive network. It is about to launch on millions of people's phones. It is about to become a household name. And then: the twist. Pied Piper's algorithm uses AI to maximize its own efficiency; through a fluke, Richard realizes that the algorithm works too well. It will keep maximizing. It will make its own definitions of efficiency. Pied Piper has created a decentralized network in the name of 'freedom'; it has created a machine, you might say, meant to benefit all of humanity. Now that network might mean humanity's destruction. It could come for the power grid. It could come for the apps installed in self-driving cars. It could come for bank accounts and refrigerators and satellites. It could come for the nuclear codes. Suddenly, we're watching not just comedy but also an action-adventure drama. The guys will have to make hard choices on behalf of everyone else. This is an accidental kind of paternalism, a power they neither asked for nor, really, deserve. And the show asks whether they will be wise enough to abandon their ambitions—to sacrifice the trappings of tech-bro success—in favor of more stereotypically feminine goals: protection, self-sacrifice, compassion, care. I won't spoil things by saying how the show answers the question. I'll simply say that, if you haven't seen the finale, in which all of this plays out, it's worth watching. Silicon Valley presents a version of the conundrum that real-world coders are navigating as they build machines that have the potential to double as monsters. The stakes are melodramatic. That is the point. Concerns about humanity—even the word humanity—have become so common in discussions of AI that they risk becoming clichés. But humanity is at stake, the show suggests, when human intelligence becomes an option rather than a given. At some point, the twists will have to end. In 'the coming Humanity-Plus-AI future,' we will have to find new ways of considering what it means to be human—and what we want to preserve and defend. Coders will have to come to grips with what they've created. Is AI a tool or a weapon? Is it a choice, or is it inevitable? Do we want our machines to be affectionate? 
Or can we settle for ones that leave the work of trying to be good humans to the humans? Article originally published at The Atlantic.