Latest news with #AI
Yahoo
22 minutes ago
- Business
- Yahoo
Morgan Stanley sees the AI productivity boom adding $16 trillion to the stock market's value
- The S&P 500 could gain as much as $16 trillion in value as AI turbocharges productivity, Morgan Stanley said.
- Strategists said agentic AI and humanoid robots could lead to huge benefits for companies.
- The tech could also impact 90% of existing jobs, leading some workers to upskill or change careers.

AI could ultimately be a $16 trillion gift to the stock market. That's according to strategists at Morgan Stanley, who see the productivity gains and cost-cutting spree stemming from artificial intelligence adding $13 trillion to $16 trillion in value for the S&P 500. At the high end of Morgan Stanley's estimates, that implies the benchmark index adding another 29% to its market cap.

The bank's predictions, which aren't tied to a concrete timeline, assume that AI's capabilities will continue to "improve rapidly" and that companies will adopt AI on a widespread level, strategists wrote in a note to clients over the weekend.

On an annual basis, that could add around $920 billion in net benefits for large-cap firms, largely from companies reducing headcount, lowering costs, and generating new revenue. Agentic AI, or AI that can make decisions and act with less supervision than generative AI, could account for around $490 billion of that value, while embodied AI, or humanoid robots, could account for around $430 billion, the strategists estimated. Together, those forces could increase value for the S&P 500 by more than 25% of adjusted pre-tax income, per Morgan Stanley's analysis.

The bank added that value creation could be most pronounced for companies in sectors like consumer staples distribution, retail, real estate, and transportation. Over the long term, strategists estimated that value creation in those sectors could be at least double what companies are expected to make in pre-tax income in 2026.

According to the bank's AI mapping research, corporations are showing signs of "an inflection" when it comes to adopting artificial intelligence, the note added. "This degree of market value creation assumes full adoption, which will take place over many years, with time frame varying by company and industry," strategists wrote. "If AI capabilities continue to improve at a non-linear rate, the magnitude of value creation from AI adoption will rise above our already high estimates."

Job market impact

While the stock market could boom, AI-driven value creation could spell trouble for human workers, some of whom may need to upskill or change occupations, the bank said. Strategists estimated that AI adoption could impact around 90% of existing jobs, but also create new roles, like "AI supply chain analyst" and "AI ethicist."

"If history is any guide, AI could result in net job creation, though there could still be periods of displacement," the bank said, pointing to job displacement from prior technological revolutions, like the internet boom. "The ability of employees to be re-skilled will be important for how quickly they can be absorbed back into the labor force."

Other forecasters have voiced more dystopian views on how AI could reshape the job market. In 2023, Goldman Sachs estimated that AI could automate around 300 million full-time jobs, with roles in the administrative and legal industries being most at risk. Anthropic's CEO, Dario Amodei, said he believes AI could eliminate half of entry-level white-collar jobs over the next five years, which he speculated could cause the unemployment rate to spike as high as 20%.
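A quick back-of-envelope check ties the headline figures together. The sketch below uses only the numbers quoted in the article; the roughly $55 trillion baseline it prints is inferred from the $16 trillion and 29% figures, not a number reported in Morgan Stanley's note.

```python
# Back-of-envelope check of the article's figures (all inputs are the article's
# estimates; the implied baseline market cap is derived, not reported).

high_end_value_added = 16e12      # $16 trillion, high end of Morgan Stanley's range
implied_gain_pct = 0.29           # "adding another 29% to its market cap"

implied_current_cap = high_end_value_added / implied_gain_pct
print(f"Implied S&P 500 market cap today: ~${implied_current_cap / 1e12:.0f} trillion")

# Yearly net-benefit split cited in the note
agentic_ai = 490e9                # agentic AI
embodied_ai = 430e9               # humanoid robots (embodied AI)
total_yearly = 920e9              # total yearly net benefit for large-cap firms
print(f"Agentic + embodied share: {(agentic_ai + embodied_ai) / total_yearly:.0%} of the yearly total")
```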
Read the original article on Business Insider


Irish Examiner
23 minutes ago
- Irish Examiner
Colman Noctor: Is AI in the classroom a welcome revolution or a shortcut to nowhere?
As artificial intelligence threatens to reshape how children learn, we must balance its promise of personalised teaching with the risk of losing what lies at the heart of education: learning through human connection.

With ChatGPT-5 and other AI programmes freely available, our understanding of teaching and learning could be transformed. Many parents are fascinated by, and uneasy about, AI's impact on their children's education. While AI offers efficiency and personalisation, it also raises concerns about fairness and human interaction. Educators and parents need to understand this change, because our goal must be to ensure that children learn effectively, not just quickly.

AI's greatest strength is its ability to adapt to each user. Adaptive learning systems can recognise a learner's struggles, motivations, and pace. A 2020 review of 37 studies by researcher Shuai Wang, at the University of Illinois, found that 86% of the studies were positive about learning with AI technologies. Many learning platforms are adopting personalised AI approaches, and tools like the language education app Duolingo can tailor lessons to individual needs.

Teachers also benefit from the rapid technological changes. AI promises to handle repetitive tasks, such as grading quizzes, creating assignments, and monitoring attendance, freeing them to provide one-to-one support and relational teaching. The potential of AI to support children with learning needs is of particular interest to me, as my teenage daughter is dyslexic. I am interested in exploring how word-prediction software and voice-enabled interfaces can help students with dyslexia or speech difficulties keep pace with their peers.

Risks of AI in education

Every benefit has a corresponding risk, and when it comes to AI in education, the main one is over-reliance and the erosion of real-life connections. Screens and algorithms cannot replace the warmth, empathy, and spontaneity of teachers, who are crucial for social-emotional development.

Bias is another concern. AI often inherits or amplifies biases from its training data, which could disadvantage minorities and reinforce stereotypes. We have already seen this happen with social media algorithms, which have contributed to the fragmentation and polarisation of societal views.

Academic integrity is also under threat. Universities report widespread use of generative AI, raising fears about the loss of independent thinking. AI detection tools were once seen as a safeguard, but newer systems can now 'humanise' text to evade detection. The capacity for AI to write essays and complete assignments undetected is particularly troubling in professional training programmes, such as medicine or nursing. To be safe practitioners, these students must fully understand the course content. If AI undermines academic standards and evades detection, the consequences could be significant. Suppose a student submits an AI-generated essay on which medications should not be taken together, without knowing what it contains; that lack of understanding poses a risk to competency in the clinical field. While there are established safeguards to ensure clinical competency through practice placements, concerns persist regarding students' comprehensive knowledge of the course content.

AI could also deepen digital inequality. Without access to high-speed internet or the latest tools, some students risk falling even further behind.

Growing use among students

Many third-level students are aware of AI's limitations.
A 2025 University of Florida survey found that students value AI's instant feedback and study support, but worry about accuracy, the loss of critical thinking, data privacy, and bias. Regardless, the use of ChatGPT among students is on the rise, particularly in secondary schools. In the US, 26% of teens (13–17 years old) reported using ChatGPT for schoolwork in 2024, double the 2023 figure. The survey found that older secondary students were most likely to use it, and when asked what they deemed 'acceptable use', 54% said for research purposes, dropping to 29% for solving maths problems.

So how can parents help children learn in a world increasingly driven by AI?

1. Encourage discernment: Let children use AI tools like ChatGPT or personalised tutoring, but teach them to question: Where did this answer come from? Could it be wrong? How would you explain it yourself? These questions are essential to developing critical-thinking skills.

2. Champion human connection: Young people need to see teachers, mentors, and parents who model empathy, resilience, and humour. These are essential life qualities that AI cannot replicate.

3. Prioritise AI literacy: Teach children that AI can be wrong, biased, or unclear. Discussions about privacy, data, and ethics have never been more critical, and they need to take place at home, not just be left to teachers in school. To raise the issue of ethical and responsible use, you could say: 'AI can give quick answers, but it doesn't always get things right. How do you check whether the information is true before using it?' Or, 'It's fine to use AI to brainstorm ideas, but your teachers will want to see your thinking. How do you decide when to stop relying on the tool and write in your own words?'

4. Value effort alongside efficiency: In a world where AI can write essays or instantly solve difficult maths questions, we should at least require that students produce their own initial draft. If AI provides a solution to a problem, we could ask the student to explain the process to confirm they genuinely grasp the concept.

5. Keep learning yourself: The more you understand AI's strengths and weaknesses, the better you can guide children through this fast-changing world.

Keeping humans at the centre

The Higher Education Policy Institute recently concluded that while AI can support teachers, it cannot, and must not, replace them. Critical thinking, creativity, and emotional connection are essential human qualities. UNESCO, the UN agency that promotes education, science, culture, and communication, also acknowledges AI's potential to make education more inclusive and equitable, but only if we remain vigilant about its ethical implications, privacy, and fairness. The agency argues that AI should not be used as an answer generator, but as a tutor. Rather than prompting it to 'give me the answer to this question', attempt the problem and tell ChatGPT to 'show me where I went wrong'.

The next two years will be transformative for the education sector. While a change to the current system is needed, we must be careful not to throw the baby out with the bathwater when it comes to educational progress and hoped-for efficiency. We could learn from Estonia, which has historically invested heavily in digital infrastructure and is among the highest-ranked countries in PISA, earning it global recognition for technological learning.
The Estonian Department of Education has partnered with OpenAI and other platforms to provide training for teachers and establish guardrails for data protection, as well as strategies to encourage critical thinking. The Estonian government seems to see the value of AI in education, but understands the need to be prepared, so that AI can be integrated in a way that optimises its benefits and minimises its risks.

From an Irish perspective, the Department of Education committed in April 2024 to developing guidelines for schools on the integration of AI, but these are still in development. AI promises significant benefits and can positively transform our future. However, we need to engage with it in a way that weighs the benefits against the risks. If we allow AI to dominate our educational interactions, we risk creating a superficial education system where efficiency takes precedence over understanding. As we approach the return to school, let's consider how we can leverage AI's benefits without losing sight of the human heart of learning.

Dr Colman Noctor is a child psychotherapist


Forbes
25 minutes ago
- Politics
- Forbes
AI Fail — Chancellor Merz Is Not Chancellor Merkel
President Trump met with European leaders, including Ukrainian President Zelenskyy, at the White House to discuss the ongoing war in Ukraine. When Donald Trump introduced the German Chancellor Friedrich Merz, YouTube's automatic transcription and translation system confidently announced 'Chancellor Merkel.' A small error, but a telling one: even the most advanced AI systems can still confuse yesterday's truth with today's reality.

How AI Transcription Works

When we say 'AI transcribes speech,' we usually mean that the system takes in an audio signal (your voice, or Trump's words) and predicts what sequence of text best matches the sound. The underlying technology is built on neural networks that learn from billions of examples of speech paired with text. Modern systems use an architecture similar to ChatGPT's. It's all probabilities. The model doesn't understand the words; it just plays the odds.

Why AI Failed And Why 'Merz' Becomes 'Merkel'

Here's where the mistake creeps in. For 16 years, 'German Chancellor' almost always meant Angela Merkel. That phrase was baked into countless hours of training data. So when the model hears 'Merz', which is acoustically close but far less common, it leans toward the familiar, high-probability continuation: Merkel. Think of it like predictive text on your phone. If you type 'Happy New…,' it will almost always suggest 'Year' instead of 'Birthday.' The YouTube model isn't wrong in a statistical sense; it's just out of sync with the real-world moment.

The Logic Gap: AI Does Not Verify Facts

The crucial gap: AI transcription models don't verify facts. They don't pause and think, 'Merkel is no longer Chancellor, so that must be wrong.' They don't access live knowledge graphs or cross-check reality. They just generate the most likely sequence of words, based on historical data. That's why AI feels brilliant in some moments and brain-dead in others. It reflects the averages of the past, not necessarily the truth of the present.

Thinking Is Still Needed In A Post-GPT Time

I always cringe at the faith some folks put in AI. This episode shows how easily the past can be a poor predictor of the present. In an experiment with nearly 300 executives, Harvard researchers found that those who relied on ChatGPT for stock price forecasts grew more optimistic and overconfident, and ultimately made worse predictions than peers who discussed the logic with other humans. The study shows how AI can amplify cognitive biases and distort judgment in high-stakes decisions.

Don't Be Average - Don't Become An AI Fail

This is not an AGI moment. We humans are still needed, and that is good news. As I say in my eCornell certificate on Designing and Building AI Solutions: AI is a tool, no more and no less. It's an impressive tool, but it won't replace the need for thinking and human judgment. Because human excellence isn't about predicting averages.
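The "plays the odds" point can be made concrete with a toy decoder. This is a minimal sketch, not YouTube's actual pipeline, and every number in it is invented for illustration: each candidate word gets an acoustic score (how well it matches the audio) and a language-model prior (how often it followed "Chancellor" in training text), and the familiar word wins once the prior dominates.

```python
import math

# Toy decoder, not YouTube's actual system; all probabilities are made up for illustration.
candidates = {
    "Merz":   {"acoustic": 0.70, "prior": 0.02},  # sounds right, but rare in the training data
    "Merkel": {"acoustic": 0.45, "prior": 0.60},  # sounds less right, but very common after "Chancellor"
}

def score(candidate, lm_weight=1.0):
    # Combine log-probabilities; a higher lm_weight favours familiar phrases over the audio evidence.
    return math.log(candidate["acoustic"]) + lm_weight * math.log(candidate["prior"])

best = max(candidates, key=lambda word: score(candidates[word]))
print(best)  # prints "Merkel": the high-prior continuation beats the acoustically closer "Merz"
```

Real speech-to-text systems fold this trade-off into a single neural network rather than two explicit scores, but the failure mode is the same: a prior learned from sixteen years of "Chancellor Merkel" can outweigh what was actually said.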


South China Morning Post
25 minutes ago
- Business
- South China Morning Post
Alibaba's AI coding model Qwen 3 Coder soars in popularity, challenging Claude Sonnet 4
Alibaba Group Holding's artificial intelligence coding model Qwen 3 Coder has been gaining popularity globally amid intensified competition in the AI-assisted coding sector, according to data from a third-party agency. The model's usage share on AI marketplace OpenRouter soared to more than 20 per cent as of mid-August, trailing only Anthropic's Claude Sonnet 4 with 31 per cent of usage. Alibaba's Qwen team released the Qwen 3 Coder on July 23, claiming it delivers top-tier AI coding performance on par with Anthropic's Claude Sonnet 4, citing various third-party benchmark tests. Claude Sonnet 4 is widely regarded as the industry's leading coding model. Alibaba owns the Post. Qwen 3 Coder's performance boost was due to expanded data sets that include a high portion of coding-related data for training, improved overall data quality and the use of large-scale reinforcement learning, according to the Qwen team. Alibaba's coding tool launched at a time when the AI-assisted coding sector was becoming increasingly crowded with more players entering the space. A slew of Chinese tech majors – from ByteDance to Tencent Holdings and Baidu – have all released AI coding tools to ride on the so-called vibe coding wave, which is the trend of using AI to help generate, complete and debug code.
Yahoo
33 minutes ago
- Business
- Yahoo
FloQast Forms Strategic Alliance with Deloitte Australia to Accelerate Financial Transformation
Alliance combines FloQast's AI-powered automation platform with Deloitte Australia's professional services expertise to deliver greater value to clients

SYDNEY, Aug. 18, 2025 (GLOBE NEWSWIRE) -- FloQast, an Accounting Transformation Platform created by accountants for accountants, today announced a strategic alliance with Deloitte Australia to deliver financial transformation for clients across multiple industries and service lines. The collaboration will bring together FloQast's powerful AI platform and Deloitte Australia's deep expertise to streamline the financial close process and drive efficiency gains.

'We're thrilled to partner with Deloitte Australia to help organizations make their accounting and finance operations smarter, faster, and more agile,' said Mike Whitmire, Co-founder and CEO of FloQast, CPA*. 'This alliance will empower accounting teams with innovative, AI-driven tools that unlock efficiency and allow them to thrive in the face of unprecedented change.'

The alliance with FloQast is poised to bring notable benefits to various service lines and industry focuses within Deloitte Australia. Specifically, the advisory and consulting service lines stand to enhance financial transformation offerings through streamlined financial close processes. Harnessing AI for advanced automation and data analysis, the alliance aims to reduce manual efforts, improve accuracy, and unlock valuable insights from financial data.

'We are delighted to announce our strategic alliance with FloQast, a collaboration that will significantly enhance the value we deliver to our clients,' said Brian Cameron, Director of Accounting and Reporting Assurance at Deloitte Australia. 'This alliance underscores Deloitte Australia's commitment to driving finance transformation, where innovative solutions and expertise converge to enhance business performance, elevate operational efficiency, and foster sustained growth for our clients.'

*inactive

About FloQast

FloQast, an Accounting Transformation Platform created by accountants for accountants, enables organizations to automate a variety of accounting operations. Trusted by more than 3,000 global accounting teams – including Twilio, Los Angeles Lakers, Zoom, and Snowflake – FloQast enhances the way accounting teams work, enabling customers to automate close management, account reconciliations, accounting operations, and compliance activities. With FloQast, teams can utilize the latest advancements in AI technology to manage aspects of the close, reduce their compliance burden, stay audit-ready, and improve accuracy, visibility, and collaboration overall. FloQast is consistently rated #1 across all user review sites. Learn more at

Contact: John Siegel, Senior Content Marketing Manager, Communications