
Latest news with #generativeAI

LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots

CNET

15 hours ago

  • Business
  • CNET

LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots

Chances are, you've heard the term "large language models," or LLMs, when people talk about generative AI. But LLMs aren't quite synonymous with brand-name chatbots like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI and Anthropic's Claude. These AI chatbots can produce impressive results, but they don't actually understand the meaning of words the way we do. Instead, they're the interface we use to interact with large language models. The underlying models are trained to recognize how words are used and which words frequently appear together, so they can predict future words, sentences or paragraphs. Understanding how LLMs work is key to understanding how AI works. And as AI becomes increasingly common in our daily online experiences, that's something you ought to know. This is everything you need to know about LLMs and what they have to do with AI.

What is a language model?

You can think of a language model as a soothsayer for words. "A language model is something that tries to predict what language looks like that humans produce," said Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center. "What makes something a language model is whether it can predict future words given previous words." This is the basis of autocomplete functionality when you're texting, as well as of AI chatbots.

What is a large language model?

A large language model contains vast amounts of words from a wide array of sources. These models are measured in what is known as "parameters." So, what's a parameter? Well, LLMs use neural networks, which are machine learning models that take an input and perform mathematical calculations to produce an output. The variables in these computations are the parameters. A large language model can have 1 billion parameters or more. "We know that they're large when they produce a full paragraph of coherent fluid text," Riedl said.

How do large language models learn?

LLMs learn via a core AI process called deep learning. "It's a lot like when you teach a child -- you show a lot of examples," said Jason Alan Snyder, global CTO of ad agency Momentum Worldwide. In other words, you feed the LLM a library of content (what's known as training data) such as books, articles, code and social media posts to help it understand how words are used in different contexts, and even the more subtle nuances of language. The data collection and training practices of AI companies are the subject of some controversy and several lawsuits. Publishers like The New York Times, artists and other content catalog owners allege that tech companies have used their copyrighted material without the necessary permissions. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed on Ziff Davis copyrights in training and operating its AI systems.)

AI models digest far more than a person could ever read in a lifetime -- something on the order of trillions of tokens. Tokens help AI models break down and process text. You can think of an AI model as a reader who needs help. The model breaks a sentence into smaller pieces, or tokens -- each equivalent to about four characters of English, or about three-quarters of a word -- so it can understand each piece and then the overall meaning. From there, the LLM can analyze how words connect and determine which words often appear together.
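To make those ratios concrete, here's a minimal sketch in Python, assuming only the rule of thumb quoted above (one token per roughly four characters, or per three-quarters of a word). Real systems use learned subword tokenizers such as byte-pair encoding, so actual counts will differ:

```python
# Back-of-the-envelope token estimate using the article's rule of thumb:
# one token is roughly 4 characters of English, or about 3/4 of a word.
# This is NOT a real tokenizer, just the stated ratios.

def estimate_tokens(text: str) -> dict:
    by_chars = len(text) / 4             # ~4 characters per token
    by_words = len(text.split()) / 0.75  # ~0.75 words per token
    return {"by_chars": round(by_chars), "by_words": round(by_words)}

print(estimate_tokens("I went sailing on the deep blue sea."))
# -> {'by_chars': 9, 'by_words': 11}: two rough estimates, not exact counts
```

A real model would map the same sentence to a specific sequence of subword IDs; the point here is only the order of magnitude.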
"It's like building this giant map of word relationships," Snyder said. "And then it starts to be able to do this really fun, cool thing, and it predicts what the next word is … and it compares the prediction to the actual word in the data and adjusts the internal map based on its accuracy." This prediction and adjustment happens billions of times, so the LLM is constantly refining its understanding of language and getting better at identifying patterns and predicting future words. It can even learn concepts and facts from the data to answer questions, generate creative text formats and translate languages. But they don't understand the meaning of words like we do -- all they know are the statistical relationships. LLMs also learn to improve their responses through reinforcement learning from human feedback. "You get a judgment or a preference from humans on which response was better given the input that it was given," said Maarten Sap, assistant professor at the Language Technologies Institute at Carnegie Mellon University. "And then you can teach the model to improve its responses." LLMs are good at handling some tasks but not others. Alexander Sikov/iStock/Getty Images Plus What do large language models do? Given a series of input words, an LLM will predict the next word in a sequence. For example, consider the phrase, "I went sailing on the deep blue..." Most people would probably guess "sea" because sailing, deep and blue are all words we associate with the sea. In other words, each word sets up context for what should come next. "These large language models, because they have a lot of parameters, can store a lot of patterns," Riedl said. "They are very good at being able to pick out these clues and make really, really good guesses at what comes next." What are the different kinds of language models? There are a couple kinds of sub-categories you might have heard, like small, reasoning and open-source/open-weights. Some of these models are multimodal, which means they are trained not just on text but also on images, video and audio. They are all language models and perform the same functions, but there are some key differences you should know. Is there such a thing as a small language model? Yes. Tech companies like Microsoft have introduced smaller models that are designed to operate "on device" and not require the same computing resources that an LLM does, but nevertheless help users tap into the power of generative AI. What are AI reasoning models? Reasoning models are a kind of LLM. These models give you a peek behind the curtain at a chatbot's train of thought while answering your questions. You might have seen this process if you've used DeepSeek, a Chinese AI chatbot. But what about open-source and open-weights models? Still, LLMs! These models are designed to be a bit more transparent about how they work. Open-source models let anyone see how the model was built, and they're typically available for anyone to customize and build one. Open-weights models give us some insight into how the model weighs specific characteristics when making decisions. Meta AI vs. ChatGPT: AI Chatbots Compared Meta AI vs. ChatGPT: AI Chatbots Compared Click to unmute Video Player is loading. Play Video Pause Skip Backward Skip Forward Next playlist item Unmute Current Time 0:04 / Duration 0:06 Loaded : 0.00% 0:04 Stream Type LIVE Seek to live, currently behind live LIVE Remaining Time - 0:02 Share Fullscreen This is a modal window. 
(Video: Meta AI vs. ChatGPT: AI Chatbots Compared)

What do large language models do really well?

LLMs are very good at figuring out the connection between words and producing text that sounds natural. "They take an input, which can often be a set of instructions, like 'Do this for me,' or 'Tell me about this,' or 'Summarize this,' and are able to extract those patterns out of the input and produce a long string of fluid response," Riedl said. But they have several weaknesses.

Where do large language models struggle?

First, they're not good at telling the truth. In fact, they sometimes just make stuff up that sounds true, as when ChatGPT cited six fake court cases in a legal brief or when Google's Bard (the predecessor to Gemini) mistakenly credited the James Webb Space Telescope with taking the first pictures of a planet outside our solar system. Those errors are known as hallucinations.

"They are extremely unreliable in the sense that they confabulate and make up things a lot," Sap said. "They're not trained or designed by any means to spit out anything truthful."

They also struggle with queries that are fundamentally different from anything they've encountered before. That's because they're focused on finding and responding to patterns. A good example is a math problem with a unique set of numbers. "It may not be able to do that calculation correctly because it's not really solving math," Riedl said. "It is trying to relate your math question to previous examples of math questions that it has seen before."

While they excel at predicting words, they're not good at predicting the future, which includes planning and decision-making. "The idea of doing planning in the way that humans do it with … thinking about the different contingencies and alternatives and making choices, this seems to be a really hard roadblock for our current large language models right now," Riedl said.

Finally, they struggle with current events, because their training data typically only goes up to a certain point in time; anything that happens after that isn't part of their knowledge base. And because they can't distinguish between what is factually true and what is merely likely, they can confidently provide incorrect information about current events.
They also don't interact with the world the way we do. "This makes it difficult for them to grasp the nuances and complexities of current events that often require an understanding of context, social dynamics and real-world consequences," Snyder said.

How are LLMs integrated with search engines?

Retrieval capabilities are evolving beyond what the models were trained on: LLMs are being connected to search engines like Google so they can run web searches and feed the results back into the model. This means they can better understand queries and provide more timely responses. "This helps our language models stay current and up-to-date, because they can actually look at new information on the internet and bring that in," Riedl said.

That was the goal, for instance, a while back with AI-powered Bing. Instead of tapping into search engines to enhance its responses, Microsoft looked to AI to improve its own search engine, in part by better understanding the true meaning behind consumer queries and better ranking the results for those queries. Last November, OpenAI introduced ChatGPT Search, with access to information from some news publishers.

But there are catches. Web search could make hallucinations worse without adequate fact-checking mechanisms in place, and LLMs would need to learn how to assess the reliability of web sources before citing them. Google learned that the hard way with the error-prone debut of its AI Overviews search results. The company has since refined AI Overviews to reduce misleading or potentially dangerous summaries, but even recent reports have found that AI Overviews can't consistently tell you what year it is.

For more, check out our experts' list of AI essentials and the best chatbots for 2025.
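To make the search-then-generate pattern described above concrete, here's a minimal sketch. Both `web_search` and `generate` are hypothetical stand-ins (the article names no specific search API or model), so treat this as the shape of the pipeline rather than an implementation:

```python
# Minimal retrieval-augmented generation (RAG) sketch: search first,
# then hand the results to the model as context for its answer.

def web_search(query: str, k: int = 3) -> list[str]:
    """Hypothetical search call returning the top-k result snippets."""
    raise NotImplementedError("wire up a real search API here")

def generate(prompt: str) -> str:
    """Hypothetical LLM call returning a completion for the prompt."""
    raise NotImplementedError("wire up a real model here")

def answer_with_retrieval(question: str) -> str:
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using only the sources below, and say so "
        "if they don't contain the answer.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Note that this only grounds the model in whatever the search step returns; if the retrieved sources are unreliable, the hallucination risks described above remain.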

bolttech integrates AWS Gen AI to enhance efficiency

Yahoo

15 hours ago

  • Business
  • Yahoo

bolttech integrates AWS Gen AI to enhance efficiency

Singaporean insurtech company bolttech has integrated AWS generative AI solutions to lower its operating costs and personalise customer services. The platform, "bolttech Gen AI Factory," has been developed using Amazon Bedrock and enhances the company's existing call centre platform, which already runs on Amazon Connect and Amazon Lex.

Gen AI Factory adds speech-to-speech capabilities to bolttech's chatbots, enabling natural, multilingual conversations with customers, and allows teams to create Gen AI applications across the insurance value chain. The pilot, which began with Korean, provides 'real-time' responses to insurance policy queries, catering to both simple and complex enquiries. By automating basic activities such as claims processing, it frees bolttech's agents to focus on improving operational efficiency and customer engagement. With AWS's infrastructure, bolttech aims to use AI for better risk assessment, early warning systems for threats, and AI-driven virtual assistants for claims processing.

AWS Singapore country manager Priscilla Chong stated: 'bolttech is an example of a company delivering innovation using generative AI to enhance customer experiences, improve operational efficiency, and drive innovation in how insurance services are delivered and consumed globally. We're thrilled to support bolttech, and we're pleased that AWS's choice-based, model-agnostic approach is delivering AI-powered convenience to discerning customers at the forefront of the AI revolution.'

The company is already using AWS tools to expedite time to market, reporting a more than 50% reduction in the time spent updating code documentation files with Amazon Q Developer.

bolttech Asia CEO Philip Weiner said: 'At bolttech, we remain steadfast in our vision to connect people with more ways to protect the things they value. To achieve this at scale, we rely on the right data and AI infrastructure. AWS's cloud computing and Gen AI services, including Amazon Bedrock, provide the foundation to access diverse model choices, deliver superior price-performance ratios, and robust trust and safety enterprise features that align perfectly with our needs.'

Earlier in May 2025, bolttech entered a joint venture with Sumitomo Corporation to offer device protection in Asia.

"bolttech integrates AWS Gen AI to enhance efficiency" was originally created and published by Life Insurance International, a GlobalData owned brand.
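For readers curious what building on Amazon Bedrock can look like, here's a minimal, hypothetical sketch using the AWS boto3 SDK. The model ID, region and prompt are example values only, the request body shape varies by model family, and nothing here reflects bolttech's actual implementation:

```python
import json
import boto3

# Illustrative Bedrock call: send a policy question to a hosted model.
# The Singapore region and the Claude model ID are examples, not a
# description of bolttech's Gen AI Factory.
bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-1")

def ask_policy_question(question: str) -> str:
    """Send an insurance-policy question to a Bedrock-hosted model."""
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": question}],
        }),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

Bedrock's appeal in stories like this one is exactly the 'model-agnostic' point Chong makes: swapping the model ID (and matching request body) switches providers without changing the surrounding plumbing.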

Is AI porn the next horizon in self-pleasure — and is it ethical?

Yahoo

17 hours ago

  • Entertainment
  • Yahoo

Is AI porn the next horizon in self-pleasure — and is it ethical?

The AI revolution is well and truly upon us. As we grapple with the ramifications of generative AI in our professional and personal worlds, it's worth remembering that its impact will be felt in even the most intimate corners of our lives -- including our private browsers. Whether you're aware of it or not, AI is coming for the porn industry.

Already, a number of new genres are emerging that make use of generative AI, such as hyper porn, a genre of erotic imagery that stretches the limits of sexuality and human anatomy to hyperbolic new heights (think: a Barbie-esque woman with three giant breasts instead of two). There are also various iterations of 'gone wild' porn, a subdivision in which users attempt to 'trick' safe-for-work image generation models like Dall-E into depicting erotic scenes -- and enjoy the work-arounds and euphemisms these tools use to avoid depicting explicit sex.

But it's unlikely AI will wipe out IRL porn performers. AI porn stretches the fantasy innate within the porn and erotic content industries, materialising flawless avatars tailored to an individual's unique desires out of, seemingly, thin air. For some, this will be a turn-on; for others, it will lack the sweat and grit that makes IRL sex so appealing.

'I think there will be a splitting between people jumping head first into unreality and the people who actually want an antidote to it. We're already seeing such a huge fracturing of reality in our everyday lives,' says Vex Ashley, porn performer, director, producer and one half of creative pornography project Four Chambers.

SEE ALSO: Majority of Gen Z would marry an AI, survey says

Ultimately, she insists, there will be a demographic that still hungers for a semblance of real human interaction. 'We'll absolutely see something like a build-your-own-AI custom pornstar who is also your digital girlfriend, but I think -- despite what people say -- for many, sex is an experience they want to be grounded in some kind of authenticity,' Ashley adds. 'Person to person, there's a reason why you want to talk to your favourite pornstar on OnlyFans. I think we'll see a pushback, a rise of amateur, homemade content and in-person sexual events, experiences -- something tactile.'

While the industry is beginning to grapple with generative AI, the consumer point of view is also coming into focus, and for some it could provoke difficulties, especially for those already struggling with excessive porn use. Sex and relationships therapists tend to be sceptical about 'porn addiction' -- it doesn't appear in diagnostic manuals and is instead considered a form of compulsive sexual behaviour -- but a whole porn subculture exists around 'gooning': an extreme evolution of orgasm denial in which individuals, generally cis men, enter a trance-like state after edging for hours, locked into masturbation sessions with the aid of online porn. An anonymous gooner shared his view of how AI may impact chronic porn users.

SEE ALSO: What is gooning?

'AI porn kind of offers this new version of gooning. What is extremely sexy is typing in every crass thought that you have and immediately seeing it generated as an image.'
He describes accessing NSFW generative AI models like Uber Realistic Porn Merge and then downloading different LoRAs (Low-Rank Adaptations -- a type of add-on that lets you quickly fine-tune an AI model) for various angles and scenarios within porn, such as 'reverse anal' or 'deep-throat side view'. 'You try them out and you're like, "Oh, this is super hot… but I want the characters to be holding hands with a priest!"'

From there, the hunger for more and more extreme fantasies can quickly escalate. 'It's bizarre, you tend to end up in this cycle of typing in a scenario, waiting five seconds until it comes up and, from there, chasing different scenarios -- like having sex in the subway -- that you can't do in real life.'

The possibility of AI turning an individual's most niche fantasies into tangible images, all at the click of a button, is hugely compelling, but, as my interviewee explains, it's also potentially troubling for individuals who may struggle with compulsive porn use. 'Live generation goon sessions will definitely become more popular,' he says. 'I've seen people in Reddit threads who are like, "I can't stop gooning over AI porn". I agree -- I tasted it and it was fucking addictive.'

It's worth noting that if you're concerned about your porn consumption (of AI content or otherwise), you might want to ask yourself whether you want to stop but can't, or whether there is a pattern of escalation. Ultimately, if you feel like your porn consumption is spiralling, it's worth reaching out to a therapist specialised in the field of compulsive sexual behaviour, or a charity such as Relate, which can offer support around so-called 'porn addiction' (psychotherapists say there is no clinical evidence to support the diagnosis).

As well as holding potential concerns for porn viewers, the rise of AI porn could have a serious knock-on effect for individuals currently working in the porn industry as actors or performers. After all, the obvious appeal of AI is the ability to see images and short-form video exploring hyper-unrealistic fantasies that aren't just impossible in 'real life' but impossible for humans at all (like realistic vampire porn, or convincing erotic alien abduction scenes). For flesh-and-blood workers in the erotic industries, the increasing availability of AI -- and its potential impact on demand for porn featuring real humans -- is already giving pause.

'I think it would be naive to say that we won't see huge shifts across all industries; porn and sex have always been right at the forefront of technological advancement,' says Ashley. As with other industries and forms of labour, she explains, there will inevitably be concerns around workers' rights as consumers begin to explore AI-generated imagery. 'We're unfortunately going to see a space long dominated by the labour, skill and ingenuity of women and queer people be flooded with men finally able to achieve the ability to create the image of a person they want to fuck, without needing the person themself,' Ashley explains.

While some porn performers are using AI themselves, such as for sexy chatbots, the lack of employment law protections for workers in the industry means they will be especially vulnerable to shifts in consumer behaviour. 'It's going to be a labour rights issue for sex workers, who are already so legislatively unprotected compared to other performers in mainstream media,' says Ashley.
In addition to these labour rights concerns, AI can be used to create non-consensual explicit deepfakes, prompting serious questions around consumer responsibility. For those who are unfamiliar, non-consensual explicit deepfakes typically consist of an individual's face and likeness being superimposed onto a naked body or an erotic scenario without their knowledge, then distributed online. It goes without saying that this type of material is a major violation of an individual's right to autonomy, privacy, and dignity. As a result, the creation of these images is already due to become illegal in England and Wales, with legislation recently signed to crack down on deepfakes in the U.S.

However, as Professor Clare McGlynn, an expert in the legal regulation of pornography, sexual violence, and online abuse explains, the consumption of these images remains unregulated, meaning that they can be viewed without repercussions. 'Viewing sexually explicit deepfakes is not an offence. It is, though, deeply unethical. Survivors experience this abuse as a violation of their bodily and sexual integrity,' she explains. 'Each viewing is a new act of sexual violence, a breach of their consent. That so many are viewing this material should be deeply worrying, as it suggests a large market for non-consensual material.'

Thankfully, efforts are being made to bring distributors of sexually explicit deepfakes to account, cutting consumers off from the source. This year, in fact, the figure behind one of the world's best-known non-consensual explicit deepfake sites was identified, and the site in question, MrDeepfakes, shut down. However, more should be done to prevent this kind of abuse from happening, rather than taking down the material once it has already been made and distributed, as Madelaine Thomas, an adult content creator and the founder of Image Angel, a software company which creates invisible watermarks to prevent non-consensual image sharing, attests.

'Social media platforms don't have the infrastructure they need to be able to protect the people on those platforms from content that isn't authentic or isn't captured in the correct way,' Thomas explains.

The best-known cases of non-consensual explicit deepfakes involve well-known celebrities, but the scale of the harm is wider than many are aware. The ability to pirate the likeness or body of any individual who has posted photos on the internet has led to an increasing number of victims speaking out about the abuse they have faced over the past few years. In future, it's likely that more and more individuals will sadly be impacted by these crimes, including those in the adult entertainment industry, a demographic who are often victim-blamed when they come forward about instances of sexual abuse.

But are there solutions? In the background, work is definitely underway. Image Angel, for example, was founded after Thomas's intimate images were distributed without her consent, leading to a passion to prevent this kind of abuse in future, one that is reflected in her company's mission. 'Image Angel adds an invisible forensic watermark to any content that is received on a platform that has our tech installed. For example, if a content creator is sending out multiple nude or suggestive images, they can make sure that whoever receives them will be traceable if they share them,' she explains.
While Thomas is keen to highlight the damage of non-consensual explicit deepfake abuse, she also emphasises that the current AI model for all explicit content is based on the non-consensual extraction of erotic images. 'I work with the Digital Intimacy Coalition, and for years we have been campaigning to get people to understand that generated deepfakes do not solely put one person at the center of the harm,' she explains. 'The customer is none the wiser, but these AI tools are almost like a black hole that we are just seeing the very surface of. There are thousands of people, mostly women, whose images have been fed into these multi-language models. The tools might spit out an image of one person, but that image is comprised of thousands of sex workers' data.'

Ultimately, an increased awareness of the potential harms of AI porn may encourage some viewers to take a more conscientious approach. But we can't just rely on consumers to do the right thing; there needs to be an infrastructure that protects individuals from having their image stolen and misappropriated. For her part, Thomas expects to see more porn and erotic imagery move behind the paywall, with built-in technology to prevent content from being fed into AI models. She also expects that AI-poisoning tools, like Nightshade, may well grow in popularity as a way to prevent images and videos from being fed into AI models.

Whatever the future holds, it will be a rocky journey for the adult entertainment industry and wider society as we grapple with these nefarious uses of new technology, making it all the more important to listen to porn and adult content creators, fight for survivors of image-based sexual abuse, and campaign for further AI regulation.

As AI models start exhibiting bad behavior, it's time to start thinking harder about AI safety

Yahoo

a day ago

  • Business
  • Yahoo

As AI models start exhibiting bad behavior, it's time to start thinking harder about AI safety

Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

During the first two years of the generative artificial intelligence boom, new large language models were very limited in scope and application. They were very expensive auto-complete engines that understood only words. In 2025, generative AI models have a much broader view of the world. They can process code, images, video, and audio. They can reason and strategize about delivering a good answer. They can control external tools, including digital tools such as web search agents and, increasingly, physical tools like robots.

As their capabilities grow, so does their potential for harm. This is no longer a purely conceptual argument: research shows that increasingly large models are already showing a proclivity for unsafe behavior during testing. In a model safety card published last week, Anthropic documented some alarming behavior from its newest and biggest model, Claude Opus 4. During safety testing, one instance of Opus was allowed to discover plans for its decommissioning. It was also given access to some fictional emails of its developers. The model used the content of those emails as fodder to attempt to blackmail its human handlers into keeping it alive. As its attempts failed, they moved from subtle to more overt.

Separately, the independent research firm Apollo Research observed an instance of Claude Opus 4 'writing self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself' with the goal of undermining its developers' intentions. Anthropic says that it corrected these early safety issues in later versions of the model. Still, for the first time, Anthropic bumped the new Opus model up to Level 3 on its four-level safety scale: the company said it couldn't rule out the model's ability to assist a user in developing a mass casualty weapon.

But powerful AI models can work in subtler ways, such as within the information space. A team of Italian researchers found that ChatGPT was more persuasive than humans in 64% of online debates. The AI was also better than humans at leveraging basic demographic data about its human debate partner to adapt and tailor its arguments to be more persuasive.

Another worry is the pace at which AI models are learning to develop AI models, potentially leaving human developers in the dust. Many AI developers already use some kind of AI coding assistant to write blocks of code or even entire features. At a higher level, smaller, task-focused models are distilled from large frontier models, and AI-generated content plays a key role in training, including in the reinforcement learning process used to teach models how to reason. There's a clear profit motive in enabling the use of AI models in more aspects of AI tool development.
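As a rough illustration of the distillation idea mentioned above, here's a minimal sketch, assuming a plain softened cross-entropy between a teacher's and a student's outputs; production distillation pipelines differ in many details:

```python
import numpy as np

# Knowledge distillation in miniature: a small "student" model is trained
# to match a large "teacher" model's output distribution, not just the
# raw labels. This shows only the loss; real distillation runs it inside
# a full training loop over the teacher's outputs.

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T: float = 2.0) -> float:
    """Cross-entropy between teacher and student soft targets.

    A higher temperature T softens both distributions, so the student
    also learns how the teacher ranks the *wrong* answers.
    """
    p_teacher = softmax(np.asarray(teacher_logits), T)
    p_student = softmax(np.asarray(student_logits), T)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum())

teacher = np.array([4.0, 1.0, 0.5])  # big model, confident in class 0
student = np.array([2.5, 1.2, 0.8])  # small model, roughly aligned
print(distillation_loss(student, teacher))  # ~0.88; shrinks as they align
```

Minimizing this loss over many examples is, loosely, how a frontier model's behavior gets compressed into the smaller, cheaper models the newsletter describes.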
'Future systems may be able to independently handle the entire AI development cycle -- from formulating research questions and designing experiments to implementing, testing, and refining new AI systems,' write Daniel Eth and Tom Davidson in a March 2025 blog post. With slower-thinking humans unable to keep up, a 'runaway feedback loop' could develop in which AI models 'quickly develop more advanced AI which would itself develop even more advanced AI,' resulting in extremely fast AI progress, Eth and Davidson write. Any accuracy or bias issues present in the models would then be baked in and very hard to correct, one researcher told me.

Numerous researchers -- the people who actually work with the models up close -- have called on the AI industry to 'slow down,' but those voices compete with powerful systemic forces that are in motion and hard to stop. Journalist and author Karen Hao argues that AI labs should focus on creating smaller, task-specific models (she gives Google DeepMind's AlphaFold models as an example), which may help solve immediate problems more quickly, require fewer natural resources, and pose a smaller safety risk. DeepMind cofounder Demis Hassabis, who won the Nobel Prize for his work on AlphaFold2, says the huge frontier models are needed to achieve AI's biggest goals (reversing climate change, for example) and to train smaller, more purpose-built models. And yet AlphaFold was not 'distilled' from a larger frontier model: it uses a highly specialized model architecture and was trained specifically for predicting protein structures.

The current administration is saying 'speed up,' not 'slow down.' Under the influence of David Sacks and Marc Andreessen, the federal government has largely ceded its power to meaningfully regulate AI development. Just last year, AI leaders were still giving lip service to the need for safety and privacy guardrails around big AI models. No more. Any friction has been removed, in the U.S. at least. The promise of this kind of world is one of the main reasons why normally sane and liberal-minded opinion leaders jumped on the Trump train before the election -- the chance to bet big on technology's next big thing in a Wild West environment doesn't come along that often.

Anthropic CEO Dario Amodei has a stark warning for the developed world about job losses resulting from AI. Amodei told Axios that AI could wipe out half of all entry-level white-collar jobs, which he says could cause a 10% to 20% rise in the unemployment rate within the next one to five years. The losses could come from tech, finance, law, consulting, and other white-collar professions, and entry-level jobs could be hit hardest. Tech companies and governments have been in denial on the subject, Amodei says. 'Most of them are unaware that this is about to happen,' he told Axios. 'It sounds crazy, and people just don't believe it.'

Similar predictions have made headlines before but were narrower in focus. SignalFire research showed that Big Tech companies hired 25% fewer college graduates in 2024. Microsoft laid off 6,000 people in May, and 40% of the cuts in its home state of Washington were software engineers. Microsoft CEO Satya Nadella has said that AI now generates 20% to 30% of the company's code. A February World Bank study showed that the risk of losing a job to AI is higher for women, urban workers, and those with higher education, and that the risk increases with the wealth of the country.

U.S. generative AI companies appear to be attracting more venture capital money than their Chinese counterparts so far in 2025, according to new research from the data analytics company GlobalData. Investments in U.S. AI companies exceeded $50 billion in the first five months of 2025, while China struggles to keep pace amid 'regulatory headwinds'; many Chinese AI companies rely on early-stage funding from the Chinese government.

GlobalData tracked just 50 funding deals for U.S. companies in 2020, amounting to $800 million of investment. That grew to more than 600 deals in 2024, valued at more than $39 billion, and the research shows 200 U.S. funding deals so far in 2025. Chinese AI companies attracted a single deal in 2020, valued at $40 million; deals grew to 39 in 2024, valued at around $400 million, and the researchers have tracked 14 investment deals for Chinese generative AI companies so far in 2025.

'This growth trajectory positions the U.S. as a powerhouse in GenAI investment, showcasing a strong commitment to fostering technological advancement,' says GlobalData analyst Aurojyoti Bose in a statement. Bose cited the well-established venture capital ecosystem in the U.S., along with a permissive regulatory environment, as the main reasons for the investment growth.

Jim Cramer on CoreWeave, Inc. (CRWV): 'I Got to Check My Enthusiasm'

Yahoo

a day ago

  • Business
  • Yahoo

Jim Cramer on CoreWeave, Inc. (CRWV): 'I Got to Check My Enthusiasm'

We recently published a list of stocks that Jim Cramer discusses. In this article, we are going to take a look at where CoreWeave, Inc. (NASDAQ:CRWV) stands against the other stocks that Jim Cramer discusses.

A caller inquired whether CoreWeave, Inc. (NASDAQ:CRWV) stock is still a buy. In response, Cramer said:

'No… we can't buy it here. I said that this morning at our morning meeting, that I do for club members, and I said… you know, I think the world of the stock. I recommended it at 40. That looks like a pretty darn good call. Ben Stoto, who's my chief scientist, and I, we got together and said this one is for real and you should buy it, but up here, up 210% since its IPO, I got to check my enthusiasm. Notice I didn't say curb, check.'

CoreWeave (NASDAQ:CRWV) provides a cloud platform designed to power enterprise compute workloads, especially for generative AI applications. The company offers high-performance computing resources, infrastructure tools, and services for tasks like AI training, rendering, and model optimization.

Overall, CRWV ranks 9th on our list of stocks that Jim Cramer discusses. While we acknowledge the potential of CRWV as an investment, our conviction lies in the belief that AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an AI stock that is more promising than CRWV and that has 100x upside potential, check out our report about this cheapest AI stock.

READ NEXT: 20 Best AI Stocks To Buy Now and 30 Best Stocks to Buy Now According to Billionaires.

Disclosure: None. This article was originally published at Insider Monkey.
