Quanta Services price target raised to $414 from $364 at Goldman Sachs

Goldman Sachs raised its price target on Quanta Services (PWR) to $414 from $364 and keeps a Buy rating on the shares. The firm is positive on the stock and its trading multiple, arguing that over the longer term the substantial volume of transmission and distribution spending should continue to support the shares even in periods of AI-headline-driven softness, the analyst tells investors in a research note. Goldman adds that investors now have a relatively better understanding of the fundamental drivers of estimates and recognize that under 10% of Quanta's revenue comes from data centers.

Related Articles

Nimanode Launches $NMA Token Presale, Pioneering the First No-Code AI Agent Platform on XRP Ledger

Associated Press · 35 minutes ago

LEEDS, United Kingdom, June 07, 2025 (GLOBE NEWSWIRE) -- Nimanode, a pioneering protocol building AI-driven autonomous agents on the XRP Ledger (XRPL), has officially launched the presale of its native utility token, $NMA. As the first no-code AI agent builder on XRPL, Nimanode empowers users to deploy, customize, and manage intelligent blockchain agents without writing a single line of code. With over 45% of the 90 million $NMA tokens allocated to the presale already in motion, Nimanode is attracting early interest from both individual investors and large XRP holders who recognize the platform's potential to transform automation within decentralized ecosystems.

Bringing Agentic Automation to the XRP Ledger

Nimanode aims to reshape how work is executed on-chain by enabling autonomous agents: AI-driven programs capable of completing blockchain tasks on behalf of users. Core features of the protocol include:

'The $NMA token will serve as the backbone of the Nimanode ecosystem,' said a representative from the Nimanode team. 'From deploying agents to earning through staking and participating in governance, $NMA enables a truly participatory AI economy on-chain.'

Utility and Use Cases of $NMA

The $NMA token offers multiple functions within the Nimanode platform: Following the presale, $NMA is expected to list on decentralized exchanges at a 25% higher price, offering early participants an incentivized entry.

How to Join the $NMA Presale

Participants can acquire $NMA using XRP via the official Nimanode presale portal. Here's how to participate: The Nimanode presale is now live and available to the public for a limited time.

About Nimanode

Nimanode is building the first AI protocol layer on the XRP Ledger, allowing artificial intelligence not only to support blockchain ecosystems but to live natively within them. The platform's agentic architecture aims to redefine DeFi, work automation, and intelligent interaction across decentralized applications.
Media Contact
Nimanode Communications
Email: [email protected]
Website:
Twitter/X:
Telegram:
Documentation:
Contact: Nick Lambert, [email protected]

Disclaimer: This is a paid post provided by Nimanode. The statements, views, and opinions expressed in this content are solely those of the content provider and do not necessarily reflect the views of this media platform or its publisher. We do not endorse, verify, or guarantee the accuracy, completeness, or reliability of any information, and we do not guarantee any claims, statements, or promises made in this content. This content is for informational purposes only and should not be considered financial, investment, or trading advice. Investing in crypto and mining-related opportunities involves significant risks, including the potential loss of capital; it is possible to lose all your capital. These products may not be suitable for everyone, and you should ensure that you understand the risks involved. Seek independent advice if necessary. Speculate only with funds that you can afford to lose. Readers are strongly encouraged to conduct their own research and consult with a qualified financial advisor before making any investment decisions. However, due to the inherently speculative nature of the blockchain sector (including cryptocurrency, NFTs, and mining), complete accuracy cannot always be guaranteed. Neither the media platform nor the publisher shall be held responsible for any fraudulent activities, misrepresentations, or financial losses arising from the content of this press release. In the event of any legal claims or charges against this article, we accept no liability. The publisher does not endorse any content on this page.

Legal Disclaimer: This media platform provides the content of this article on an 'as-is' basis, without any warranties or representations of any kind, express or implied. We assume no responsibility for any inaccuracies, errors, or omissions.
We do not assume any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information presented herein. Any concerns, complaints, or copyright issues related to this article should be directed to the content provider mentioned above. A photo accompanying this announcement is available at

Klarna boss: AI will lead to recession and mass job losses

Yahoo · 36 minutes ago

The introduction of AI at firms could lead to a recession due to mass job losses among professionals, Klarna's chief executive has warned.

The use of AI to make work more efficient has been encouraged by leading figures, including the government itself, which said it could save civil servants two weeks a year. But fintech boss Sebastian Siemiatkowski, who runs the Stockholm-based 'buy now, pay later' business Klarna, has warned that its potential to replace work done by office staff could push economies into a recession in the coming years.

Siemiatkowski suggested that tech bosses had tended to dismiss the impact AI could have on headcounts at firms, with Klarna cutting staff numbers from 5,500 to 3,000 in recent years. 'Many people in the tech industry, especially CEOs, tend to downplay the consequences of AI on jobs and white-collar jobs in particular. And I don't want to be one of them,' Siemiatkowski told The Times Tech Podcast. 'There will be an implication for white-collar jobs and when that happens that usually leads to a recession at least in the short term. Unfortunately, I don't see how we could avoid that, with what's happening from a technology perspective.'

He said he based his predictions on anecdotal evidence from the number of firms pushing AI tools, with economic data not yet capturing the impact of AI. 'I feel like I have an email almost every day from some CEO of a tech or a large company that says we also see opportunities to become more efficient and we would like to compare notes. If I just take all of those emails and add up the amount of jobs in those emails, it's considerable.'

He said humans could be retained for some jobs at Klarna, such as in customer service, where people can work on fraud and other complex banking issues. 'The value of that human touch will increase,' he said, adding that AI meant people working in client-facing roles would have to be more skilled. 'They will provide a much higher quality type of service.'
Siemiatkowski's comments echo those made by the chief executive of Anthropic, Dario Amodei, who said half of all entry-level professional jobs could be eliminated in the next five years. The head of another AI firm, Arthur Mensch, who runs the French firm Mistral, has claimed that the excitement of fellow tech bosses about AI's ability to overpower humans reflects a 'very religious' fascination, while Demis Hassabis of Google DeepMind said it would take another five to ten years before AI would overtake human intelligence.

AI Can't Replace Education

Yahoo · an hour ago

Credit: Tingting Ji / Getty Images

As commencement ceremonies celebrate the promise of a new generation of graduates, one question looms: will AI make their education pointless?

Many CEOs think so. They describe a future where AI will replace engineers, doctors, and teachers. Meta CEO Mark Zuckerberg recently predicted AI will replace mid-level engineers who write the company's computer code. NVIDIA's Jensen Huang has even declared coding itself obsolete. While Bill Gates admits the breakneck pace of AI development is 'profound and even a little bit scary,' he celebrates how it could make elite knowledge universally accessible. He, too, foresees a world where AI replaces coders, doctors, and teachers, offering free high-quality medical advice and tutoring.

Despite the hype, AI cannot 'think' for itself or act without humans, for now. Indeed, whether AI enhances learning or undermines understanding hinges on a crucial decision: Will we allow AI to just predict patterns? Or will we require it to explain, justify, and stay grounded in the laws of our world? AI needs human judgment, not just to supervise its output but also to embed scientific guardrails that give it direction, grounding, and interpretability.

Physicist Alan Sokal recently compared AI chatbots to a moderately good student taking an oral exam. 'When they know the answer, they'll tell it to you, and when they don't know the answer they're really good at bullsh*tting,' he said at an event at the University of Pennsylvania. So, unless a user knows a lot about a given subject, according to Sokal, one might not catch a 'bullsh*tting' chatbot. That, to me, perfectly captures AI's so-called 'knowledge.' It mimics understanding by predicting word sequences but lacks conceptual grounding. That's why 'creative' AI systems struggle to distinguish real from fake, and debates have emerged about whether large language models truly grasp cultural nuance.
When teachers worry that AI tutors may hinder students' critical thinking, or doctors fear algorithmic misdiagnosis, they identify the same flaw: machine learning is brilliant at pattern recognition, but lacks the deep knowledge born of systematic, cumulative human experience and the scientific method.

That is where a growing movement in AI offers a path forward. It focuses on embedding human knowledge directly into how machines learn. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are examples. The names might sound technical, but the idea is simple: AI gets better when it follows the rules, whether they are laws of physics, biological systems, or social dynamics. That means we still need humans not just to use knowledge, but to create it. AI works best when it learns from us.

I see this in my own work with MINNs. Instead of letting an algorithm guess what works based on past data, we program it to follow established scientific principles. Take a local family lavender farm in Indiana. For this kind of business, blooming time is everything. Harvesting too early or late reduces essential oil potency, hurting quality and profits. An AI may waste time combing through irrelevant patterns. However, a MINN starts with plant biology. It uses equations linking heat, light, frost, and water to blooming to make timely and financially meaningful predictions. But it only works when it knows how the physical, chemical, and biological world works. That knowledge comes from science, which humans develop.

Imagine applying this approach to cancer detection: breast tumors emit heat from increased blood flow and metabolism, and predictive AI could analyze thousands of thermal images to identify tumors based solely on data patterns. However, a MINN, like the one recently developed by researchers at the Rochester Institute of Technology, uses body-surface temperature data and embeds bioheat transfer laws directly into the model.
That means, instead of guessing, it understands how heat moves through the body, allowing it to identify what's wrong, what's causing it, why, and precisely where it is by utilizing the physics of heat flow through tissue. In one case, a MINN predicted a tumor's location and size within a few millimeters, grounded entirely in how cancer disrupts the body's heat signature.

The takeaway is simple: humans are still essential. As AI becomes sophisticated, our role is not disappearing. It is shifting. Humans need to 'call bullsh*t' when an algorithm produces something bizarre, biased, or wrong. That isn't just a weakness of AI. It is humans' greatest strength. It means our knowledge also needs to grow so we can steer the technology, keep it in check, ensure it does what we think it does, and help people in the process.

The real threat isn't that AI is getting smarter. It is that we might stop using our intelligence. If we treat AI as an oracle, we risk forgetting how to question, reason, and recognize when something doesn't make sense. Fortunately, the future doesn't have to play out like this. We can build systems that are transparent, interpretable, and grounded in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can train students who blend domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to stay true to reality. And all of us, as users, voters, and citizens, can demand that AI serve science and objective truth, not just correlations.

After more than a decade of teaching university-level statistics and scientific modeling, I now focus on helping students understand how algorithms work 'under the hood' by learning the systems themselves, rather than using them by rote. The goal is to raise literacy across the interconnected languages of math, science, and coding. This approach is necessary today.
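To make the physics-informed idea concrete, here is a minimal toy sketch. It is not from the article and is far simpler than a real PINN or MINN: instead of a neural network, a two-parameter model u(t) = a·exp(b·t) is fit to noisy measurements, and a penalty term pushes the fit to obey a known physical law, exponential decay du/dt = -k·u. All names, numbers, and the grid-search "training" are illustrative assumptions.

```python
import numpy as np

# Known physics: du/dt = -k * u, with decay rate k given by the domain expert.
k = 0.5

# Noisy observations of the true process u(t) = 2 * exp(-k * t).
rng = np.random.default_rng(0)
t_data = np.linspace(0.0, 4.0, 20)
u_data = 2.0 * np.exp(-k * t_data) + rng.normal(0.0, 0.05, t_data.shape)

# Collocation points where we check that the model respects the physics.
t_col = np.linspace(0.0, 4.0, 50)

def loss(a, b, lam=1.0):
    """Physics-informed loss: data misfit + weighted physics residual."""
    u_pred = a * np.exp(b * t_data)
    data_term = np.mean((u_pred - u_data) ** 2)
    # For u = a*exp(b*t), the residual of du/dt + k*u is a*exp(b*t)*(b + k);
    # it vanishes exactly when the model obeys the decay law (b = -k).
    residual = a * np.exp(b * t_col) * (b + k)
    physics_term = np.mean(residual ** 2)
    return data_term + lam * physics_term

# A crude grid search stands in for gradient-based training of a network.
a_grid = np.linspace(1.0, 3.0, 41)
b_grid = np.linspace(-1.0, 0.0, 41)
_, a_hat, b_hat = min((loss(a, b), a, b) for a in a_grid for b in b_grid)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f} (physics says b should be near {-k})")
```

The physics term is what distinguishes this from plain curve fitting: even with noisy or sparse data, the penalty steers the fitted decay rate toward the value the governing law allows, which is the same mechanism that lets a PINN stay grounded when data alone is ambiguous.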
We don't need more users clicking 'generate' on black-box models. We need people who can understand the AI's logic, its code and math, and catch its 'bullsh*t.' AI will not make education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently, and why science and deep understanding matter. The choice is not whether to reject or embrace AI. It's whether we'll stay educated and smart enough to guide it.

Contact us at letters@
