
Latest news with #neuralnetworks

AI Agent Types

Forbes

a day ago

  • Forbes

AI Agent Types

In the big conversation that companies and people are having about AI agents, one of the major points is the variety of agent types that we classify into different categories. In other words, there are AI agents, and there are AI agents: some are fairly rudimentary, while others are extremely sophisticated and skilled. Another way to think about this is that neural networks are not the same as human brains: they're much more heterogeneous. They didn't evolve collectively over millions of years, so they may not resemble one another the way human brains do.

That said, one of the biggest differences between AI agents is their memory. Stateful systems have some sort of recollection of data that provides ongoing context for their work. By contrast, stateless systems start over every time a user session begins. You'll see the difference in a chatbot or AI agent that either remembers your history, or treats you as a brand new person each time you interact.

Seven Types of Agents

It also helps to think about AI agent memory within the framework that has developed to distinguish agent types. Experts like to classify AI agents into seven categories. In terms of memory, perhaps the clearest distinction is between the first two types: simple reflex agents and model-based reflex agents.

An author writing under the single name Manika at ProjectPro describes a simple reflex agent this way: 'An automatic door sensor is a simple reflex agent. When the sensor detects movement near the door, it triggers the mechanism to open. The rule is: if movement is detected near the door, then open the door. It does not consider any additional context, such as who is approaching or the time of day, and will always open whenever movement is sensed.'

And a model-based reflex agent this way: 'A vacuum cleaner like the Roomba, one that maps a room and remembers obstacles like furniture (represents a model-based agent). It ensures cleaning without repeatedly bumping into the same spots.' (Manika actually cites input by Andrew Ng at Sequoia, someone we've had on Imagination in Action forums and interview panels.)

Essentially, the stateful AI agent relies on having that consistent memory for specific capabilities. Daffodil lists the characteristics of a stateful agent, and you can see how having that framework and context drives things like perceiving a shift in user intent, or leveraging a task or purchase history to predict a future outcome or preference. A minimal sketch of the two reflex-agent types appears below.
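To make the distinction concrete, here is a minimal sketch of the two agent types from the examples above: a stateless door sensor that reacts only to its current percept, and a Roomba-style agent that keeps an internal map of obstacles. The class and method names are hypothetical, written for illustration rather than taken from any of the cited sources.

```python
# Hypothetical sketch: a stateless vs. a stateful reflex agent.

class SimpleReflexAgent:
    """Stateless: acts on the current percept only (the door sensor)."""

    def act(self, movement_detected: bool) -> str:
        # Rule: if movement is detected near the door, open the door.
        return "open door" if movement_detected else "stay closed"


class ModelBasedReflexAgent:
    """Stateful: keeps an internal model of the world (the Roomba)."""

    def __init__(self):
        self.known_obstacles = set()  # persistent memory across steps

    def act(self, position: tuple, bumped: bool) -> str:
        if bumped:
            self.known_obstacles.add(position)  # remember the furniture
        if position in self.known_obstacles:
            return "steer around"  # avoid spots it already bumped into
        return "clean forward"


door = SimpleReflexAgent()
print(door.act(movement_detected=True))   # open door

roomba = ModelBasedReflexAgent()
print(roomba.act((2, 3), bumped=True))    # steer around (and remember)
print(roomba.act((2, 3), bumped=False))   # steer around (remembered)
```

The only difference between the two is the persistent `known_obstacles` set; that one piece of retained state is what the stateful/stateless distinction comes down to.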
Acting Like Humans

In a recent TED talk on the subject, Aditi Garg began with the idea of reconnecting with an old middle school friend: 'That's the beauty of human relationships, the fact that we don't have to reintroduce ourselves,' she said. 'We don't have to explain our inside jokes or our favorite stories. We just pick up where we left off. It's effortless, it's personal. It's what makes friendships so meaningful.'

Contrast this with the current capabilities of an AI system that doesn't have vibrant memory. 'AI today, it can unpack physics, it can summarize books,' Garg added. 'It can also … compose some symphonies, but the moment you open a new chat window, it resets. It's like talking to a brilliant mind, but with amnesia. Machines can reason, but they still cannot remember.'

Reimagining Memory

Garg went over some of the ways that we are used to thinking about memory, suggesting that changing the framework will be useful in adding memory to AI systems. 'On a very fundamental level, we think of data as like a vast digital library with bytes and bytes of information that you can access,' she said. That idea, she noted, may need rethinking. The memory of AI will need to be accessible in real time, flowing through the system the way our own memory is instantly recalled by our biological brains. Making an analogy to a Ferrari that needs to be refueled every lap of a race, Garg talked about how AI operations can waste enormous amounts of time trying to access these parts of an AI agent's system. New systems, she said, will have immediate, transformed statefulness: 'If an AI system can access any piece of information, it can literally never forget. If it can maintain context across conversations (and) projects … the same storage breakthrough that keeps GPUs fed is the breakthrough that will keep your AI memory alive.' That goal, Garg suggested, has to do with locating the memory and the compute in the same place.

Data Centers and Colocation Design

I've seen this played out in data center plans where engineers actually put the data and the operations in the same place, along with the energy or power source. You can think of a mini data center sitting next to a nuclear power plant, with the storage banks tied directly into a centralized LLM that uses that data to its advantage. What do you get with these systems? 'We stand at the threshold of AI that remembers,' Garg concluded. 'When the speed of remembering finally matches the speed of thinking, we enable AI that transforms from a brilliant mind with amnesia, (to) your digital twin.'

That might be the next big innovation in machine learning and artificial intelligence: the same models you interact with today, endowed with better memory, will seem smarter and more 'with it,' because they will know the things you would expect them to know if they had the memory of a human brain. By the way, it's a good idea to know those seven kinds of AI agents, since they're going to remain part of the conversation for a long time to come. What do you see as the next major advance in AI?

PhD student develops brain-like technology that could solve dangerous issue with electric vehicles: 'Orders of magnitude faster'

Yahoo

14-07-2025

  • Automotive
  • Yahoo

PhD student develops brain-like technology that could solve dangerous issue with electric vehicles: 'Orders of magnitude faster'

New research has potentially found a solution for some of consumers' biggest concerns about electric vehicle adoption: reducing fire hazards and extending battery life. In a new study published in npj Computational Materials, researchers, including a Ph.D. student from Skoltech and AIRI Institute, demonstrated how neural networks can significantly accelerate the discovery of solid electrolyte materials. This advancement could address one of the biggest hurdles in EV battery design: creating batteries that are safer, longer-lasting, and capable of holding more charge while reducing fire risks. Data shows that traditional internal combustion engine vehicles already have a much higher fire hazard risk than EVs.

Solid-state batteries are a highly anticipated successor to traditional lithium-ion EV batteries. Instead of using flammable liquid electrolytes, solid-state batteries utilize ceramic or other solid materials to move lithium ions between electrodes. These materials offer greater stability, enabling faster charging, longer ranges, and improved safety. However, most known solid electrolytes do not yet meet all the technical requirements for commercial EVs. Researchers are now using artificial neural networks to predict new materials with high ionic mobility at speeds far surpassing traditional trial-and-error methods.

"We demonstrated that graph neural networks can identify new solid-state battery materials with high ionic mobility and do it orders of magnitude faster than traditional quantum chemistry methods," explained Artem Dembitskiy, the lead author of the study and a Ph.D. student at Skoltech. "Machine learning lets us screen tens of thousands of materials in a fraction of the time." This approach has helped the team identify two promising new protective coatings that could stabilize next-generation batteries and prevent dangerous short circuits.
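To give a rough sense of how this kind of screening works, here is a toy sketch of the general idea, not the Skoltech/AIRI model or its input features: a cheap learned surrogate, in this case a single round of graph message passing with made-up weights, scores thousands of candidate structures so that only the top-ranked few go on to expensive quantum chemistry calculations.

```python
# Toy illustration of GNN-style screening for candidate electrolytes.
# This is NOT the published model, just the shape of the workflow:
# rank many cheap predictions, simulate only the best.
import numpy as np

rng = np.random.default_rng(0)

def gnn_score(features, adjacency, w1, w2):
    """One round of message passing, then a pooled scalar score."""
    h = np.maximum(adjacency @ features @ w1, 0.0)  # aggregate neighbors + ReLU
    return float(h.mean(axis=0) @ w2)               # pool atoms -> mobility score

# Pretend these weights were already trained on known ionic conductors.
n_feat, n_hid = 8, 16
w1 = rng.normal(size=(n_feat, n_hid))
w2 = rng.normal(size=n_hid)

# Screen a batch of synthetic "materials" (atom features + bond graphs).
candidates = []
for i in range(10_000):
    n_atoms = rng.integers(4, 12)
    feats = rng.normal(size=(n_atoms, n_feat))
    adj = (rng.random((n_atoms, n_atoms)) < 0.3).astype(float)
    adj = np.maximum(adj, adj.T)                    # undirected bonds
    candidates.append((i, gnn_score(feats, adj, w1, w2)))

top = sorted(candidates, key=lambda c: c[1], reverse=True)[:5]
print("send these candidate IDs to quantum chemistry:", [i for i, _ in top])
```

The speedup the researchers describe comes from exactly this asymmetry: a forward pass like the one above costs microseconds, while a quantum chemistry calculation for one material can take hours.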
The potential of solid-state batteries is significant: some automakers estimate they could offer up to 50% more range than today's EVs, along with reduced fire risk and longer battery life. That translates into lower long-term maintenance costs and fewer battery replacements. This research builds on previous AI-assisted breakthroughs in EV battery technology, fueling solid-state innovations that could enable EVs to last a decade longer than current battery technology allows.

Pairing high-efficiency EVs with home solar can drive savings even further: by charging at home using solar energy, drivers can lower their electricity bills, and sites like EnergySage make it easy to compare rates. Considering an EV as your next car? You could save over $1,500 a year on gas and maintenance, as well as receive Inflation Reduction Act tax breaks and credits of up to $7,500 through Sept. 30, while reducing planet-warming pollution and avoiding high gas prices.

While these solid-state batteries are not yet ready for mass-market EVs, AI tools like these are helping us get there faster. This breakthrough could enable automakers to reduce their reliance on nonrenewable fuels and create a cleaner, more affordable future for drivers everywhere.

Six Ways To Advance Modern Architecture For AI Systems

Forbes

23-06-2025

  • Science
  • Forbes

Six Ways To Advance Modern Architecture For AI Systems

These days, many engineering teams are coming up against a common problem: basically speaking, the models are too big. This problem comes in various forms, but there's often a connecting thread and a commonality to the challenges. Projects are running up against memory constraints. As parameters range into the billions and trillions, data centers have to keep up. Stakeholders have to look out for thresholds in vendor services. Cost is generally an issue. However, there are new technologies on the horizon that can take that memory footprint and compute burden and reduce them to something more manageable. How are today's innovators doing this? Let's take a look.

Input and Data Compression

First of all, there is the compression of inputs. You can design a lossy algorithm to compress the model, and even run a compressed model alongside the full one; compression methodologies are saving space when it comes to specialized neural network functions. Here's a snippet from a paper posted at Apple's Machine Learning Research resource: 'Recently, several works have shown significant success in training-free and data-free compression (pruning and quantization) of LLMs achieving 50-60% sparsity and reducing the bit-width down to 3 or 4 bits per weight, with negligible perplexity degradation over the uncompressed baseline.' That's one example of how this can work. A Microsoft document looks at prompt compression, another angle on how to shrink or reduce data in these systems.
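To illustrate the mechanics behind figures like those (the 50-60% sparsity and 3-4 bits per weight come from the quoted paper; the code below is a generic sketch, not that paper's method), here is magnitude pruning followed by uniform low-bit quantization of a weight matrix:

```python
# Generic sketch of magnitude pruning + uniform quantization of weights.
# Not the cited paper's method, just the basic mechanics.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(512, 512)).astype(np.float32)  # a dense layer's weights

# Magnitude pruning: zero out the 50% of weights closest to zero.
threshold = np.quantile(np.abs(w), 0.50)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# Uniform 4-bit symmetric quantization of the surviving weights.
scale = np.abs(w_pruned).max() / 7.0      # int4 symmetric range: -8..7
q = np.clip(np.round(w_pruned / scale), -8, 7).astype(np.int8)
w_dequant = q.astype(np.float32) * scale  # what the model computes with

sparsity = (w_pruned == 0).mean()
err = np.abs(w - w_dequant).mean()
print(f"sparsity: {sparsity:.0%}, mean abs error: {err:.4f}")
```

Half the weights now need no storage at all, and the rest fit in 4 bits plus one shared scale factor; the question the research addresses is how to do this while keeping the error from degrading model quality.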
The Sparsity Approach: Focus and Variation

Sometimes you can carve away part of the system design in order to save resources. Think about a model where all of the attention areas work the same way. Maybe some of the input is basically white space, while the rest is complex and relevant. Should the model's coverage be homogeneous, one-size-fits-all? If so, you're spending the same amount of compute on high- and low-attention areas. Alternatively, the people engineering these systems can remove the tokens that don't get a lot of attention, based on what's important and what's not (a minimal sketch of this idea appears at the end of this article). In this part of the effort, you're seeing hardware advances as well: more specialized GPUs and multicore processors can have an advantage when it comes to this kind of differentiation, so keep an eye on everything that makers are doing to usher in a whole new class of GPU gear.

Changing Context Strings

Another major problem with network size is related to the context windows that systems use. For a typical large language system operating on a sequence, the length of that sequence is important. More context means more of certain kinds of functionality, but it also requires more resources. By changing the context, you change the 'appetite' of the system. Here's a bit from the above resource on prompt compression: 'While longer prompts hold considerable potential, they also introduce a host of issues, such as the need to exceed the chat window's maximum limit, a reduced capacity for retaining contextual information, and an increase in API costs, both in monetary terms and computational resources.' Directly after that, the authors go into solutions that might apply, in theory, to different kinds of fixes.

Dynamic Models and Strong Inference

Here are two more big trends right now. One is the emergence of strong inference systems, where the machine teaches itself what to do over time based on its past experience. Another is dynamic systems, where the input weights and everything else change over time rather than remaining the same. Both of these show promise for matching the design and engineering needs that people have when they're building these systems. There's also the diffusion model, where you add noise, analyze, and remove that noise to come up with a new generative result. We talked about this last week in a post about the best ways to pursue AI.

Last, but not least, we can evaluate traditional approaches such as digital twinning. Twinning is great for precise simulations, but it takes a lot of resources; if there's a better way to do something, you might be able to save a lot of compute. These are just some of the solutions we've been hearing about, and they dovetail with the idea of edge computing, where you do more on an endpoint device at the edge of a network. Microcontrollers and small components can be a new way to crunch data without sending it through the cloud to some centralized location. Think about all of these advances as we watch what people are doing these days with AI.
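As promised in the sparsity section above, here is a minimal sketch of attention-based token pruning: score tokens by how much attention they receive and drop the least-attended half, so later layers process a shorter sequence. Real systems use learned or calibrated pruning policies; this only shows the shape of the idea.

```python
# Sketch of attention-based token pruning. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
seq_len, d = 16, 32
x = rng.normal(size=(seq_len, d))          # token representations

# Single-head self-attention weights (softmax over similarity scores).
scores = x @ x.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

# A token's importance: total attention it receives from all queries.
importance = attn.sum(axis=0)

# Keep the top 50% most-attended tokens; prune the rest.
keep = np.sort(np.argsort(importance)[-seq_len // 2:])
x_pruned = x[keep]
print(f"kept tokens {keep.tolist()} -> sequence length {len(keep)}")
```

Because attention cost grows with the square of sequence length, halving the tokens that survive to later layers can cut that portion of the compute by roughly four.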

LexisNexis Risk Solutions Launches Location Intelligence: A First-of-Its-Kind Underwriting Solution for U.S. Commercial Property Risk Assessment

Yahoo

19-06-2025

  • Business
  • Yahoo

LexisNexis Risk Solutions Launches Location Intelligence: A First-of-Its-Kind Underwriting Solution for U.S. Commercial Property Risk Assessment

New proprietary solution delivers more than 20 times the lift, enabling commercial property insurers to help automate and optimize risk strategies with more accuracy

ATLANTA, June 19, 2025 /PRNewswire/ -- LexisNexis® Risk Solutions today announced the launch of LexisNexis® Location Intelligence for Commercial, a next-generation commercial property risk assessment solution that sets a new standard for more precise, automated and predictive modeling power in the U.S. commercial insurance sector at underwriting and renewal.

As severe weather events continue to escalate in frequency and severity, driving more than 65% of all U.S. property losses [1], commercial insurers face mounting challenges in underwriting, pricing and portfolio management. Location Intelligence for Commercial helps commercial insurance carriers better assess and spotlight property risks that are highly indicative of loss propensity.

To address these blind spots in today's commercial underwriting processes, Location Intelligence for Commercial helps deliver insights using a holistic approach that combines industry loss data that is highly indicative of future loss, weather forensics and granular property characteristics into a suite of predictive modeling risk scores and supporting attributes. The patent-pending solution then adds proprietary claims information and neural network-driven risk propensity models to deliver actionable, future-focused insights directly into commercial insurance carrier workflows for a more detailed and accurate view of commercial property risk.

"As opposed to conventional sources of information from basic weather data, roof age and aerial imagery, LexisNexis Location Intelligence represents a new standard for commercial property risk assessment that helps give insurers the actionable intelligence they need for a fuller and more granular view of risk coupled with workflow automation they can actually leverage," said David Zona, senior vice president, commercial insurance, LexisNexis Risk Solutions. "With Location Intelligence, they can better assess risk, such as which 10% of the properties in their book could generate a third of their property losses. This can put them in a unique position to be a customer service leader, proactively working with their customers on risk mitigation and resilience efforts."

LexisNexis Location Intelligence capabilities include:

  • Unmatched Predictive Modeling Lift: Can deliver over 20 times the lift [2] compared to traditional loss propensity models, enabling commercial insurance carriers to better pinpoint the 10% of properties likely to generate 34% of weather-related losses in the coming year.
  • Multi-Source Intelligence: Synthesizes aerial imagery, LexisNexis Risk Solutions comprehensive claims data, weather events and proprietary information for a more complete, accurate risk profile, far beyond what imagery alone can provide and not just a point-in-time snapshot.
  • Automation and Efficiency: Integrates seamlessly into commercial insurance underwriting and renewal workflows, supporting straight-through processing and enabling more efficient, targeted risk control strategies.
  • Transparency and Communication: Provides a more transparent, multi-source approach to risk assessment, positioning commercial insurance carriers to more easily adapt to evolving regulatory requirements and communicate to business owners a more robust view of their commercial property risk, helping to ensure they are adequately insured in the event of a claim.
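For readers unfamiliar with "lift," the release presumably uses it in the standard predictive modeling sense: how much better a model concentrates losses in its top-ranked slice than a random slice of the same size would. Here is a generic sketch of that arithmetic on synthetic data; it is not the LexisNexis model or its data.

```python
# Generic decile-lift arithmetic, illustrating the kind of figure the
# press release cites (10% of properties driving ~34% of losses).
# Scores and losses below are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
risk_score = rng.random(n)                       # a model's predicted risk
# Synthetic losses correlated with the score, so ranking carries signal.
losses = rng.gamma(2.0, 1.0, n) * (0.2 + risk_score**3)

order = np.argsort(risk_score)[::-1]             # riskiest first
top_decile = order[: n // 10]

share = losses[top_decile].sum() / losses.sum()  # loss share in top 10%
lift = share / 0.10                              # vs. a random 10% slice
print(f"top decile captures {share:.0%} of losses -> lift {lift:.1f}x")
```

On the release's own figures, a top decile capturing 34% of losses is a 3.4x concentration versus random selection; the "more than 20 times" claim is a comparison against the lift achieved by traditional loss propensity models.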
"We understand that insurance underpins the economy, and commercial property insurers need to be able to confidently manage risk and improve profitability as they look to support the nation's small business foundation," continued Zona. "With year-over-year losses and the growing volatility of weather patterns, insurance carriers need more than historical data to win the day. They need forward-looking, actionable insights to better identify underwriting risk and improve profitability so they can continue helping their business customers thrive." For more information, please visit LexisNexis Location Intelligence for Commercial. About LexisNexis Risk SolutionsLexisNexis® Risk Solutions harnesses the power of data, sophisticated analytics platforms and technology solutions to provide insights that help businesses across multiple industries and governmental entities reduce risk and improve decisions to benefit people around the globe. Headquartered in metro Atlanta, Georgia, we have offices throughout the world and are part of RELX (LSE: REL/NYSE: RELX), a global provider of information-based analytics and decision tools for professional and business customers. For more information, please visit LexisNexis Risk Solutions and RELX. 1 LexisNexis Risk Solutions internal study, 20252 LexisNexis Risk Solutions internal study, 2024 Contact: Chas View original content to download multimedia: SOURCE LexisNexis Risk Solutions Error while retrieving data Sign in to access your portfolio Error while retrieving data Error while retrieving data Error while retrieving data Error while retrieving data
