Latest news with #AIdevelopers

Wall Street Journal
21-07-2025
- Business
- Wall Street Journal
Battery Makers in Slumping EV Business Find Lifeline Elsewhere
Big U.S. EV battery makers are stepping back from the market that got them started and betting on a new set of customers in an entirely different business. Instead of carmakers, these companies have started making batteries for utilities, wind- and solar-power developers, and massive data centers that train artificial intelligence.
Yahoo
18-07-2025
- Yahoo
Adobe Firefly can now generate sound effects from your audio cues
Since rolling out the redesign of its Firefly app in April, Adobe has been releasing major updates for the generative AI hub at a near monthly clip. Today, the company is introducing a handful of new features to assist those who use Firefly's video capabilities.

To start, Adobe is making it easier to add sound effects to AI-generated clips. Right now, the majority of video models create footage without any accompanying audio. Adobe is addressing this with a nifty little feature that allows users to first describe the sound effect they want to generate and then record themselves making it. The second part isn't so Adobe's model can mimic the sound. Rather, it's so the system can get a better idea of the intensity and timing the user wants from the effect. In the demo Adobe showed me, one of the company's employees used the feature to add the sound of a zipper being unzipped. They made a "zzzztttt" sound, which Adobe's model faithfully used to reproduce the effect at the intended volume. The translation was less convincing when the employee used the tool to add the sound of footsteps on concrete, though if you're using the feature for ideation as Adobe intended, that may not matter. When adding sound effects, there's a timeline editor along the bottom of the interface to make it easy to time the audio properly.

The other new features Adobe is adding today are called Composition Reference, Keyframe Cropping and Video Presets. The first of those allows you to upload a video or image you captured to guide the generation process. In combination with Video Presets, you can define the style of the final output. Some of the options Adobe is offering at launch allow you to create clips with anime, black and white or vector art styles. Lastly, with Keyframe Cropping you can upload the first and final frame of a video and select an aspect ratio. Firefly will then generate a video that stays within your desired format.

In June, Adobe added support for additional third-party models, and this month it's doing the same. Most notable is the inclusion of Veo 3, which Google premiered at its I/O 2025 conference in May. At the moment, Veo 3 is one of the only AI models that can generate video with sound. Like with all the other partner models Adobe offers in Firefly, Google has agreed not to use data from Adobe users for training future models.

Every image and video people create through Firefly is digitally signed with the model that was used to create it. That is one of the safeguards Adobe includes so that Firefly customers don't accidentally ship an asset that infringes on copyrighted material.

According to Zeke Koch, vice president of product management for Adobe Firefly, users can expect the fast pace of updates to continue. "We're relentlessly shipping stuff almost as quickly as we can," he said. Koch adds Adobe will continue to integrate more third-party models, as long as their providers agree to the company's data privacy terms.
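The digital signing described above is handled by Adobe's Content Credentials framework, which is built on the C2PA provenance standard. As a rough conceptual illustration of the underlying idea, the sketch below binds a model name to an asset's hash with a signature, using only Python's standard library. The key, manifest fields, and HMAC scheme are stand-ins for demonstration; they are not Adobe's actual implementation or the C2PA format.

```python
# Conceptual sketch only: binds a model name to an asset's hash with an HMAC.
# Real Content Credentials embed a richer, certificate-signed manifest in the
# asset itself; the key and manifest fields here are illustrative assumptions.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-secret"  # stand-in for a signing certificate

def sign_asset(asset_bytes: bytes, model_name: str) -> dict:
    """Produce a provenance manifest recording which model made the asset."""
    manifest = {
        "model": model_name,  # the model used to generate the asset
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_asset(asset_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the asset hasn't been altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest())

video = b"...generated video bytes..."
manifest = sign_asset(video, "Firefly Video Model")
assert verify_asset(video, manifest)              # intact asset passes
assert not verify_asset(video + b"x", manifest)   # tampered asset fails
```

In the real standard, the manifest is signed with a certificate chain rather than a shared secret, so third parties can verify an asset's provenance independently.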


Forbes
14-07-2025
- Forbes
Agentic AI Is The New Vaporware
The hype term 'agentic AI' is the latest trending buzzword to repackage pie-in-the-sky AI ambitions, but it does not allude to any particular advancement that might achieve them. It amplifies the overpromising narrative that we're rapidly headed toward a great leap in autonomy – most extraordinarily, toward the most audacious goal of all, artificial general intelligence, the speculative idea of machines that could automate virtually all human work.

Setting unrealistic expectations compromises real value. Generative AI and predictive AI deliver concrete opportunities that will continue to grow, but the claim that technology will soon hold 'agency' is the epitome of vaporware. It only misleads, setting up the industry for costly, avoidable disillusionment.

Most high-tech terms – such as machine learning, predictive modeling or autonomous driving – are legit. They represent one of two things: a specific technical approach or a novel goal for technology. But the terms 'agent' and 'agentic' fail in both respects: 1) most uses of 'agentic' do not refer to any novel technical methodology and 2) the ambition of increasing autonomy is not new – even as the word falsely implies otherwise on both accounts. Here's a breakdown of those two failings and their ramifications.

1) 'Agentic' Does Not Refer To Any Particular Technology Or Advancement

'Agentic AI' poses as a credible near-term capability, but it represents only the most self-evident goal there could be for technology – increased automation – not a means to get there. Sure, we'd like a large language model to complete monumental tasks on its own – including gathering and assimilating information and completing online tasks and transactions – but labeling such ambitions as 'agentic' does not make them more feasible.

The term 'agentic AI' intrinsically misleads. Its sheer popularity widens the belief that technology will soon become capable of running much more autonomously, but the buzzword does not refer to any particular technical approach that may get us there. Its trendiness serves to institutionalize the notion that we're nearing great new levels of automation – 'agentic AI' is so ubiquitous that it may sound 'established' and 'real' – and this implies the existence of a groundbreaking advancement where in fact there is none.

Although the vast majority of press about 'agentic AI' only promotes this hype narrative with no substance to support it, autonomy itself is often a worthy goal, and researchers are conducting valuable work in the pursuit of increasing it. For example, a recent collaboration between Carnegie Mellon University and Amazon curates a large testbed of modest tasks in order to assess how well LLMs can manage them autonomously. This study focuses on information retrieval tasks, such as 'Retrieve an article discussing recent trends in renewable energy from The Guardian' and 'Retrieve a publicly available research paper on quantum computing from MIT's website.' The study evaluates clever approaches for using LLMs to navigate websites and automatically perform such tasks, but I would not say that these approaches constitute groundbreaking technology. Rather, they are ways to leverage what is already groundbreaking: LLMs. As the study reveals, the state of the art currently fails at these modest tasks 43% of the time.
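To picture what grading that kind of autonomy looks like in practice, here is a minimal, hypothetical harness in the spirit of the testbed described above. The RetrievalTask fields, the dummy agent, and the substring-based success check are illustrative assumptions for this sketch, not the benchmark's actual design.

```python
# Minimal sketch of scoring an LLM agent on retrieval tasks (illustrative only).
from dataclasses import dataclass

@dataclass
class RetrievalTask:
    instruction: str              # natural-language task handed to the agent
    expected_url_substring: str   # crude stand-in for the real grading logic

def dummy_agent(instruction: str) -> str:
    """Stand-in for an LLM agent that browses the web and returns a final URL."""
    return "https://example.com/not-the-right-page"

def failure_rate(tasks: list[RetrievalTask], agent) -> float:
    """Fraction of tasks the agent fails (the article cites ~43% for the state of the art)."""
    failures = 0
    for task in tasks:
        try:
            url = agent(task.instruction)
            if task.expected_url_substring not in url:
                failures += 1
        except Exception:  # crashes, timeouts, and dead ends all count as failures
            failures += 1
    return failures / len(tasks)

tasks = [
    RetrievalTask("Retrieve an article discussing recent trends in renewable "
                  "energy from The Guardian", "theguardian.com"),
    RetrievalTask("Retrieve a publicly available research paper on quantum "
                  "computing from MIT's website", "mit.edu"),
]
print(f"failure rate: {failure_rate(tasks, dummy_agent):.0%}")  # 100% for the dummy
```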
2) 'Agentic' Presents No New Goal Or Purpose

'Agentic AI' spotlights machine autonomy as if it were a new ambition, but it's an old, self-evident goal. There's no new, revolutionary thrust at play. While the buzzword is somewhat malleable and fuzzy, it generally refers to the desire for increased autonomy – 'agentic AI' means hypothetical machines that could perform substantial tasks on their own. This has always been a core, fundamental objective. The very purpose of any machine is to automate some or all of what would otherwise be carried out by a person or animal. Put another way, we build machines to do stuff. By reiterating our innate desire to automate, 'agentic' only states the obvious.

Sure, the more machines can safely do for us, the better. But there's a fairly stubborn limit to the scope of tasks that can be fully automated with no human in the loop. For example, predictive AI instantly decides whether to allow each credit card charge, whereas the wholesale replacement of physicians with machines is a very long way off at best. 'Agentic AI' is as redundant as 'evil Sith Lord,' 'book library' or 'data science.'

To be clear, autonomy is often a worthy goal, and there is potential for LLMs to excel, at least where the scope of automation is somewhat modest. Economic interests exert pressure to increase autonomy – and various societal concerns exert pressure in both directions. But the scope of unleashed machine autonomy increases only quite slowly. One reason is that technology doesn't improve as quickly as advertised. Another is that cultural and societal inertia tends to spell slow adoption.

The Farfetched Notion Of Machine 'Agency'

There's another problem with using the words 'agent' and 'agentic' to evoke the goal of autonomous machines: crediting machines with 'agency' is fantastical. This doubles down on AI's core mythology and original sin, the anthropomorphization of machines. The machine is no longer a tool at the disposal of humans – rather, it's elevated to have its own human-level understanding, goal-setting and volition. It's our peer. Essentially, it's alive.

The spontaneous goal-setting that comes with agency – and its resulting unbottleability – have been seeping into the AI narrative for years. "AI that works doesn't stay in a lab," writes Kevin Roose in The New York Times. "It makes its way into weapons used by the military and software used by children in their classrooms." In another article, he wrote, 'I worry that the technology will... eventually grow capable of carrying out its own dangerous acts.' Likewise, Elon Musk, one of the world's most effective transmitters of AGI hype, announced safety assurances that cleverly imply a willful or dangerous AI. He says that his company's forthcoming humanoid robot will be hardwired to obey whenever anyone says, 'Stop, stop, stop.'

The story of technology taking on a life of its own is an age-old drama. We need to see this high-tech mythology for what it is: a more convincingly rationalized ghost story. It's the novel Mary Shelley would have written had she been familiar with algorithms. The implausible, unsupported notion that we're actively progressing toward AGI – aka artificial humans – underlies much of the hype (and often overlays it explicitly as well). 'Agentic' invokes this narrative. Despite the unprecedented capabilities – and uncanny, seemingly humanlike qualities – of generative AI, the limit on how much human work can be fully automated will continue to budge only very slowly.
I believe that we will generally need to settle for partial autonomy. Don't buy 'agentic AI' and don't sell it either. It's an empty buzzword that, in most uses, overpromises. The AI industry runs largely – although certainly not entirely – on hype. To the degree that it continues to overinflate expectations, the industry will ultimately face a commensurate burst bubble: the dire disillusionment and unpaid debt that result from unmet promises.


Medical News Today
11-07-2025
- Health
- Medical News Today
Can AI help detect breast cancer, and is it accurate?
Artificial intelligence (AI) may help detect breast cancer earlier and more accurately than traditional methods alone. It may also help predict a person's risk of developing breast cancer.

Health professionals use imaging scans like mammograms and breast ultrasounds to screen people for breast cancer, which can help with early detection. They may also assess a person's family history, genetics, and other factors to help determine their risk of developing the disease. Some studies suggest that AI could help health professionals detect breast cancer more quickly and accurately than with traditional screening methods alone. The technology may also help predict a person's risk of breast cancer with greater precision.

How does AI help with breast cancer detection?
AI developers can train computer systems to recognize, interpret, and analyze patterns in data. For breast cancer detection, AI technicians input information gathered from large data sets of mammograms for the systems to learn from. The AI software uses the data to create an algorithm that outlines the characteristics of mammograms with and without cancer. The system can then compare new images to the algorithm to help identify cancer.

How accurate is AI in detecting breast cancer?
Research has found that AI could help detect breast cancer with similar or greater accuracy than radiologists. In a recent Swedish study, AI-supported screening detected cancer in 244 women after analyzing 39,996 mammograms. In a separate group, two radiologists each used traditional screening methods to analyze a different set of 40,024 mammograms, from which they were able to detect cancer in 203 women. That works out to roughly 6.1 versus 5.1 cancers detected per 1,000 screenings. The false positive rate was 1.5% in both groups, which means AI and radiologists both mistakenly detected breast cancer in 1.5% of the mammograms they analyzed. While the detection rates were similar between both groups, the AI screening method reduced the workload for radiologists and allowed them to spend 44% less time reading screens.

A 2025 meta-analysis of eight studies indicated that AI techniques could detect breast cancer with better overall accuracy than radiologists. However, the researchers also highlighted the current limitations of AI screening. These included the technology sometimes failing to identify visible lesions or interpret ambiguous results the way radiologists can. Some researchers suggest that AI and traditional radiology combined may result in the most accurate and effective breast cancer screening. Authors of a 2022 study agree that AI should support, rather than replace, radiologists. Their results indicate that a combination of AI and a radiologist could detect breast cancer 2.6% more accurately than a radiologist alone.

Can AI detect early breast cancer?
Early breast cancer detection and treatment can significantly improve a person's outlook for the disease. The survival rate is almost 100% for the earliest stages of breast cancer and declines to 22% at stage 4. Traditional screening mammograms miss about 20% of breast cancers, according to the National Cancer Institute. AI screening may help reduce false-negative results and help identify breast cancer earlier, as research suggests it could improve overall screening accuracy. However, more research is necessary to understand the reliability and implications of the technology.
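As a concrete picture of the train-then-compare loop described above, here is a minimal sketch of a binary mammogram-patch classifier in PyTorch. The architecture, the 64x64 patch size, and the synthetic tensors standing in for a labeled data set are all illustrative assumptions, not the pipeline of any study cited here.

```python
# Minimal illustrative sketch: train a tiny CNN to separate image patches
# with and without cancer, then score a new image. Synthetic tensors stand
# in for a real labeled mammogram data set.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN that outputs a single logit: higher means more suspicious."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)  # assumes 64x64 grayscale input

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
loss_fn = nn.BCEWithLogitsLoss()                 # sigmoid folded into the loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, 1, 64, 64)               # placeholder mammogram patches
labels = torch.randint(0, 2, (8, 1)).float()     # 1 = cancer present, 0 = absent

for step in range(100):                          # real training iterates a full data set
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Screening: a probability above a chosen threshold flags the image for review.
with torch.no_grad():
    prob = torch.sigmoid(model(images[:1])).item()
    print(f"suspicion score: {prob:.2f}")
```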
Can AI assess individual breast cancer risk?
Research suggests AI may be able to effectively assess a person's risk of developing breast cancer. Health professionals typically use tools like the Breast Cancer Risk Assessment Tool (BCRAT) or the Breast Cancer Surveillance Consortium (BCSC) Risk Calculator to estimate a person's likelihood of developing the disease. These tools calculate an individual's risk based on several factors, including their age, race and ethnicity, and personal and family medical histories.

A recent study found that AI may be able to predict a person's breast cancer risk without these factors. In the study, AI systems used mammogram images to predict people's risk of developing breast cancer more accurately than the BCSC risk model alone. The researchers found that combining AI and the BCSC model achieved the most accurate results (see the illustrative sketch below).

Are there challenges in using AI for breast cancer detection?
Potentially, AI offers significant advances in breast cancer detection. However, AI systems lack standardization and rigorous regulatory and ethical guidelines, and they may present several challenges for researchers and health professionals. These include:
- Research challenges: AI algorithms are not generalized and often include large numbers of variables. This may affect how consistently AI models are able to perform and the reliability of the data they provide. Scientists need more evidence from large-scale studies to assess how safe, accurate, and reliable the technology may be for breast cancer detection in real-world clinical settings.
- The 'black box' enigma: Scientists may refer to AI algorithms as black boxes, as humans cannot always understand the patterns the models find and the decisions they make. This could lead to AI making incomprehensible mistakes that scientists cannot predict, detect, or correct.
- Ethical concerns: The use of AI raises various ethical issues, including contributing to health disparities and the effect on healthcare professionals that AI systems may replace.
- Financial challenges: AI algorithm and infrastructure development and maintenance involve substantial ongoing costs.

Is a person's data secure when AI screens for breast cancer?
There are complex legal, regulatory, and technological challenges that may affect a person's data security when AI is used for breast cancer screening. The Health Insurance Portability and Accountability Act (HIPAA) protects people's health information and health privacy rights in the United States. As a new and evolving technology, however, AI presents legal and regulatory challenges that may affect health data security. Researchers may disclose 'de-identified' protected health information. This involves removing information that could identify an individual or link them to the data. However, researchers suggest AI healthcare may create opportunities for re-identification, which could link sensitive and private information to specific people. Regulatory bodies, healthcare professionals, and AI developers may continue to determine and implement safety measures as the technology evolves.
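The risk-assessment section above notes that combining an AI image score with a BCSC-style clinical estimate achieved the most accurate results. Below is a minimal illustrative sketch of one common way to combine two such scores (late fusion via logistic regression); the synthetic data and the particular fusion recipe are assumptions for demonstration, not the cited study's published method.

```python
# Illustrative late-fusion sketch: combine a clinical risk estimate (standing
# in for a BCSC-style score) with an AI mammogram score via logistic regression.
# All data is synthetic; the recipe is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
clinical_risk = rng.uniform(0.0, 0.2, n)    # stand-in clinical risk estimate
ai_image_score = rng.uniform(0.0, 1.0, n)   # stand-in AI score from a mammogram model
# Synthetic outcomes loosely driven by both signals, for demonstration only.
outcome = rng.binomial(1, 0.05 + 0.5 * clinical_risk + 0.1 * ai_image_score)

X = np.column_stack([clinical_risk, ai_image_score])
fusion = LogisticRegression().fit(X, outcome)
combined_risk = fusion.predict_proba(X)[:, 1]   # fused per-person risk estimate
print(f"mean fused risk: {combined_risk.mean():.3f}")
```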
Can AI help reduce breast cancer disparities?
Breast cancer disparities can prevent certain groups from receiving equitable screening and treatment. These disparities persist due to various factors, including racism, the underrepresentation of certain groups in clinical trials, and a lack of access to care. Breast cancer disparities affect Black women especially severely. The group has a 38% higher mortality rate than white women, despite having a lower incidence of the disease.

Experts suggest that AI is vulnerable to bias and may contribute to and exacerbate existing racial disparities in healthcare, such as in breast cancer screening. Research suggests that AI reflects human bias as it learns from the data that people provide. Additionally, human choices may influence AI systems to perform in exploitative or discriminatory ways.

AI may help to reduce racial disparities in breast cancer screening if the people who use the technology actively counter existing healthcare biases. This may involve the use of ethical AI programs and the inclusion of diverse data sets.

Experts tend to agree that AI breast cancer screening should be integrated to support, rather than replace, radiologists for the most accurate results. AI technology may be a promising tool for breast cancer detection and risk prediction. However, it faces several challenges, including ethical concerns, excessive financial costs, and reliability issues. More research is needed to determine if the technology is safe, accurate, and reliable before it can be widely implemented.


Forbes
04-07-2025
- Forbes
Is The Obsession With Attaining AGI And AI Superintelligence Actually Derailing Progress In AI?
Some insist that pursuing AGI has become a misguided obsession, though not everyone agrees with that sentiment.

In today's column, I explore a controversial claim that the seemingly obsessive pursuit of artificial general intelligence (AGI) and artificial superintelligence (ASI) is leading us astray and derailing true progress in AI. The argument asserts that AI makers and AI developers have their priorities wrong and need to reorient their aims. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or whether AGI might only be achieved decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Obsession With AGI As A North Star

Not everyone thinks that the pursuit of AGI is all that productive. Indeed, the AGI obsession is said to be counterproductive. In a research paper entitled 'Stop Treating 'AGI' As The North-Star Goal Of AI Research' by Borhane Blili-Hamelin, Christopher Graziul, Leif Hancox-Li, Hananel Hazan, El-Mahdi El-Mhamdi, Avijit Ghosh, Katherine Heller, Jacob Metcalf, Fabricio Murai, Eryk Salvaggio, Andrew Smart, Todd Snider, Mariame Tighanimine, Talia Ringer, Margaret Mitchell, and Shiri Dori-Hacohen, arXiv, February 7, 2025, the authors identify six key traps in treating AGI as the field's guiding goal. Those six key traps are worthy of keen attention.

Unpacking The Key Traps

I'll briefly cover the six key traps in my own words; you are encouraged to read the above-cited research article to see how the paper explains them.

First, the illusion of consensus says that AI makers are deluding themselves into thinking that they are pursuing the same thing, namely AGI. But the reality is that there isn't an across-the-board agreed definition of AGI. Some AI makers are actually sneakily moving the cheese, as it were, by defining AGI to suit their own preferences and watering down what AGI was intended to consist of (see my analysis at the link here).

Second, the pell-mell rush to be the winner-winner chicken dinner of attaining AGI before anyone else is promulgating bad science. AI makers are throwing spaghetti at the wall to see what sticks. The idea of performing carefully designed and hardcore empirical research is eschewed in deference to making a splash and proclaiming some wondrous new AI advancement (which is unsupported by any semblance of proper rigor).
Third, the goal of attaining AGI appears to be a purely scientific endeavor, but that appearance belies the fact that all sorts of other societal or political string-pulling are at hand. Nations, for example, would wield immense geopolitical and economic power by achieving AGI within their midst; see my discussion at the link here. AI makers hide the underlying reasons for AGI and tout the technological merits to blind us to the full truth.

Fourth, under the guise of pursuing AGI, various allegedly aligned subgoals can be concocted and then pursued with heroic energy, even if the subgoals have little or nothing to do with achieving AGI. If an AI maker suddenly announces that more hardware is needed to reach AGI, voila, they can amass a mountain's worth of investor cash. They do not need to scrupulously showcase why the hardware is on the direct pathway to AGI; the mere mention will open wallets.

Fifth, trickery is used to put shiny objects in front of the public to distract from the reality that AI progress is not moving as smoothly as suggested. For example, declaring that AI can perform superhuman chess-playing would lead the public to believe that AGI is getting very close to fruition. This, though, neglects the importance of AGI generalization, whereby the core element is that AI of this caliber is supposed to work across all manner of domains rather than specializing in a specific domain such as chess.

Sixth, using the banner of attaining AGI allows AI makers to skip past those who they believe will get in the way of their efforts. This can push serious concerns about AI safety and security to the wayside. Likewise, qualms about an existential risk of AGI, such as AGI opting to harm humans, get downplayed in comparison to the happy-face upsides of such AI.

Hearing From The Other Side

Like most important aspects of life, there are two sides to this contentious coin. Those who believe ardently in the pursuit of AGI would contend that despite those key traps, which they acknowledge are worthy of attention and resolution, there are bona fide reasons to continue the AGI pursuit per se.

First, there is value in providing an aspirational goal that can be rallied around. Even though AGI is not well-defined, the overall gist is that AI ought to work on par with human intellect. This is a 30,000-foot-level definition that can readily inspire AI developers and AI makers to make advancements in AI. Labeling this as AGI provides succinct, vision-clarifying messaging that motivates AI builders and AI researchers daily.

Second, many AI scientists and researchers are genuinely and meticulously pursuing sound AI advancements. In other words, lumping together all such pursuits as bad apples is unfair and exasperating. Sure, there are bad apples here and there, no doubt, but give credit where credit is due.

Third, imagine how scattered and confused the AI community would be without the flag of AGI as the purposeful aim. Perhaps the preponderance of attention would lean into a narrow realm, such as focusing entirely on AI for solving physics problems or figuring out genomes. Meanwhile, we would hardly be making progress toward the broad elements of overarching human intellect. The field could also splinter into thousands of disparate pursuits scattered in all directions, rather than an overarching, focused aim at artificial general intelligence.

Practicality Will Prevail

Trying to dislodge AGI as a kind of north star for the AI community is a nearly impossible ask.
The allure of AGI as a figurative destination captures the spirit and the mind of not only the AI field but also the public at large. The momentum is so strong that it is difficult to imagine anything powerful enough to bring it to a halt. The only probable means of gumming up the works is if the AI pursuits fizzle out and people become disillusioned that AGI was merely a pipe dream. At that juncture, yes, AGI as a north star would undoubtedly get junked.

What would take its place? Aha, given the ever-hopeful spirit of humankind, the odds are that a new name would be given to what AGI used to be, and this newly anointed gem would become the latest north star. It wouldn't be especially different, just an old cause with a less familiar moniker.

Perhaps our best hope right now might be to grit our teeth about AGI as a north star, push the AI community to realize the dangers and pitfalls therein, and do our darnedest to overcome the downsides. As Jimmy Dean memorably stated: 'I can't change the direction of the wind, but I can adjust my sails to always reach my destination.'