AI speeding up special educational needs reports
Artificial intelligence (AI) is being used by a council in a bid to cut lengthy waiting times for children's special needs reports.
Stoke-on-Trent City Council produces hundreds of education, health and care plans (EHCPs) each year but has been struggling to issue them on time because of increased demand. Delays can leave families waiting for extra support.
In 2023-24, 43% of the council's EHCPs were completed within 20 weeks, compared with a national target of 60%.
Members of a scrutiny committee were told AI tools had been trained to extract information from documents such as psychological reports and write it into a plan - completing the task faster than a human.
Delyth Mathieson, assistant director of education and family support, said each young person received a range of different reports and until now, individual case workers pulled elements together.
She said: "What we're looking at is a process whereby we can upload those reports, securely obviously, so that the information is collated and dropped into the format automatically and intelligently."
The process still involved a case worker who knows the children, she told the children and family services overview and scrutiny committee.
That case worker went through each individual report to check for any misunderstandings by the AI, she added.
Ms Mathieson said the approach also freed up case workers to "focus on the quality of that report, rather than a cut-and-paste exercise".
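The council has not published technical details, but the workflow described above (upload source reports, automatically collate extracted information into the plan template, then have a case worker review it) resembles a standard extract-collate-review pipeline. A minimal sketch, in which every function, field name and data format is a hypothetical illustration rather than the council's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class DraftPlan:
    # One section of extracted content per source report.
    sections: dict = field(default_factory=dict)
    reviewed: bool = False

def extract_needs(report: str) -> list[str]:
    # Stand-in for the AI extraction step: in practice this would be a
    # language model prompted to pull identified needs out of each report.
    # Here we just pick out lines tagged "Need:" to keep the sketch runnable.
    return [line.removeprefix("Need:").strip()
            for line in report.splitlines() if line.startswith("Need:")]

def collate(reports: dict[str, str]) -> DraftPlan:
    # Drop each report's extracted content into the plan format.
    plan = DraftPlan()
    for source, text in reports.items():
        plan.sections[source] = extract_needs(text)
    return plan

def case_worker_review(plan: DraftPlan, corrections: dict) -> DraftPlan:
    # The human-in-the-loop step: a case worker who knows the child
    # checks each section and fixes any AI misunderstandings.
    for source, fixed in corrections.items():
        plan.sections[source] = fixed
    plan.reviewed = True
    return plan

reports = {
    "educational_psychology": "Background notes...\nNeed: speech and language support",
    "health": "Need: occupational therapy assessment",
}
plan = case_worker_review(collate(reports), corrections={})
print(plan.reviewed, len(plan.sections))  # True 2
```

The point of the structure is the one the committee was told about: the automated step only moves information into the template, while sign-off stays with a human who reviews every section.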
Committee member Laura Carter welcomed the move, adding: "If we have the option of bringing in technology, then why not use it?"
The council said it had cleared its backlog and provisional figures for April showed 83% of EHCPs were issued within 20 weeks.
The authority has also recruited more educational psychologists, changed how applications are processed, and increased early intervention to reduce demand.
This news was gathered by the Local Democracy Reporting Service, which covers councils and other public service organisations.

Related Articles
Yahoo, 22 minutes ago
Demis Hassabis On The Future of Work in the Age of AI
WIRED Editor At Large Steven Levy sits down with Google DeepMind CEO Demis Hassabis for a deep dive discussion on the emergence of AI, the path to Artificial General Intelligence (AGI), and how Google is positioning itself to compete in the future of the workplace. - It's a very intense time in the field. We obviously want all of the brilliant things these AI systems can do, come up with new cures for diseases, new energy sources, incredible things for humanity. That's the promise of AI. But also, there are worries if the first AI systems are built with the wrong value systems or they're built unsafely, that could be also very bad. - Wired sat down with Demis Hassabis, who's the CEO of Google DeepMind, which is the engine of the company's artificial intelligence. He's a Nobel Prize winner and also a knight. We discussed AGI, the future of work, and how Google plans to compete in the age of AI. This is "The Big Interview." [upbeat music] Well, welcome to "The Big Interview," Demis. - Thank you, thanks for having me. - So let's start talking about AGI a little here. Now, you founded DeepMind with the idea that you would solve intelligence and then use intelligence to solve everything else. And I think it was like a 20-year mission. We're like 15 years into it, and you're on track? - I feel like, yeah, we're pretty much dead on track, actually, is what would be our estimate. 
- That means five years away from what I guess people will call AGI. - Yeah, I think in the next five to 10 years, that would be maybe 50% chance that we'll have what we've defined as AGI, yes. - Well, some of your peers are saying, "Two years, three years," and others say a little more, but that's really close, that's really soon. How do we know that we're that close? - There's a bit of a debate going on at the moment in the field about definitions of AGI, and then obviously, of course, dependent on that, there are different predictions for when it will happen. We've been pretty consistent from the very beginning. And actually, Shane Legg, one of my co-founders and our chief scientist, you know, he helped define the term AGI back in, I think, early 2001 type of timeframe. And we've always thought about it as a system that has the ability to exhibit sort of all the cognitive capabilities we have as humans. And the reason that's important, the reference to the human mind, is the human mind is the only existence proof we have, maybe in the universe, that general intelligence is possible. So if you want to claim sort of general intelligence, AGI, then you need to show that it generalizes to all these domains. - Is it when everything's filled in, all the check marks are filled in, then we have it- - Yes, so I think there are missing capabilities right now. You know, that all of us who have used the latest sort of LLMs and chatbots will know very well, like on reasoning, on planning, on memory. I don't think today's systems can invent, you know, do true invention, you know, true creativity, hypothesize new scientific theories. They're extremely useful, they're impressive, but they have holes. And actually, one of the main reasons I don't think we are at AGI yet is because of the consistency of responses. You know, in some domains, we have systems that can do International Math Olympiad, math problems to gold medal standard- - Sure. - With our AlphaFold system. 
But on the other hand, these systems sometimes still trip up on high school maths or even counting the number of letters in a word. - Yeah. - So that to me is not what you would expect. That level of sort of difference in performance across the board is not consistent enough, and therefore shows that these systems are not fully generalizing yet. - But when we get it, is it then like a phase shift that, you know, then all of a sudden things are different, all the check marks are checked? - Yeah. - You know, and we have a thing that can do everything. - Mm-hmm. - Are we then in a new world? - I think, you know, that again, that is debated, and it's not clear to me whether it's gonna be more of a kind of incremental transition versus a step function. My guess is, it looks like it's gonna be more of an incremental shift. Even if you had a system like that, the physical world still operates with the physical laws, you know, factories, robots, these other things. So it'll take a while for the effects of that, you know, this sort of digital intelligence, if you like, to really impact, I think, a lot of the real world things. Maybe another decade plus, but there's other theories on that too, where it could come faster. - Yeah, Eric Schmidt, who I think used to work at Google, has said that, "It's almost like a binary thing." He says, "If China, for instance, gets AGI, then we're cooked." Because if someone gets it like 10 minutes before the next guy, then you can never catch up. You know, because then it'll maintain bigger, bigger leads there. You don't buy that, I guess. - I think it's an unknown. It's one of the many unknowns, which is that, you know, that's sometimes called the hard takeoff scenario, where the idea there is that these AGI systems, they're able to self-improve, maybe code future versions of themselves, that maybe they're extremely fast at doing that. 
So what would be a slight lead, let's say, you know, a few days, could suddenly become a chasm if that was true. But there are many other ways it could go too, where it's more incremental. If some of these self-improvement things are not able to kind of accelerate in that way, then being around the same time would not make much difference. But it's important, I mean, these issues are the geopolitical issues. I think the systems that are being built, they'll have some imprint of the values and the kind of norms of the designers and the culture that they were embedded in. - [Steven] Mm-hmm. - So, you know, I think it is important, these kinds of international questions. - So when you build AI at Google, you know, you have that in mind. Do you feel a competitive imperative to, in case that's true, "Oh my God, we better be first?" - It's a very intense time at the moment in the field as everyone knows. There's so many resources going into it, lots of pressures, lots of things that need to be researched. And there's sort of lots of different types of pressures going on. We obviously want all of the brilliant things that these AI systems can do. You know, I think eventually, we'll be able to advance medicine and science with it, like we've done with AlphaFold, come up with new cures for diseases, new energy sources, incredible things for humanity, that's the promise of AI. But also there are worries both in terms of, you know, if the first AI systems are built with the wrong value systems or they're built unsafely, that could be also very bad. And, you know, there are at least two risks that I worry a lot about. One is bad actors, whether it's individuals or rogue nations, repurposing general purpose AI technology for harmful ends. And then the second one is, obviously, the technical risk of AI itself. As it gets more and more powerful, more and more agentic, can we make sure the guardrails are safe around it, that they can't be circumvented? 
And that interacts with this idea of, you know, what are the first systems that are built by humanity gonna be like? There's commercial imperative- - [Steven] Right. - There's national imperative, and there's a safety aspect to worry about who's in the lead and where those projects are. - A few years ago, the companies were saying, "Please, regulate us. We need regulation." - Mm-hmm, mm-hmm. - And now, in the US at least, the current administration seems less interested in putting regulations on AI than accelerating it so we can beat the Chinese. Are you still asking for regulation? Do you think that that's a miss on our part? - I think, you know, and I've been consistent in this, I think there are these other geopolitical sort of overlays that have to be taken into account, and the world's a very different place to how it was five years ago in many dimensions. But there's also, you know, I think the idea of smart regulation that makes sense around these increasingly powerful systems, I think is gonna be important. I continue to believe that. I think though, and I've been certain on this as well, it sort of needs to be international, which looks hard at the moment in the way the world is working, because these systems, you know, they're gonna affect everyone, and they're digital systems. - Yeah. - So, you know, if you sort of restrict it in one area, that doesn't really help in terms of the overall safety of these systems getting built for the world and as a society. - [Steven] Yeah. - So that's the bigger problem, I think, is some kind of international cooperation or collaboration, I think, is what's required. And then smart regulation, nimble regulation that moves as the knowledge about the research becomes better and better. - Would it ever reach a point for you where you would feel, "Man, we're not putting the guardrails in. You know, we're competing, that we really have to stop, or you can't get involved in that?" 
- I think a lot of the leaders of the main labs, at least the western labs, you know, there's a small number of them and we do all know each other and talk to each other regularly. And a lot of the lead researchers do. The problem is that it's not clear we have the right definitions to agree when that point is. Like, today's systems, although they're impressive as we discussed earlier, they're also very flawed. And I don't think today's systems are posing any sort of existential risk. - Mm-hmm. - So it's still theoretical, but the problem is that there are a lot of unknowns, we don't know how fast those will come, and we don't know how risky they will be. But in my view, when there are so many unknowns, I'm optimistic we'll overcome them, at least technically, given enough time and enough care and thoughtfulness, you know, sort of using the scientific method as we approach this AGI point. I think the geopolitical questions could actually end up being trickier. - That makes perfect sense. But on the other hand, if that timeframe is there, we just don't have much time, you know? - No, we don't. We don't have much time. I mean, we're increasingly putting resources into security and things like cyber, and also research into controllability and understanding of these systems, sometimes called mechanistic interpretability. You know, there's a lot of different sub-branches of AI. - Yeah, that's right. I wanna get to interpretability. - Yeah, that are being invested in, and I think even more needs to happen. And then at the same time, we need to also have societal debates more about institutional building. How do we want governance to work? How are we gonna get international agreement, at least on some basic principles, around how these systems are used and deployed and also built? - What about the effect on work, on the marketplace? - Yeah. 
- You know, how much do you feel that AI is going to change people's jobs, you know, the way jobs are distributed in the workforce? - I don't think we've seen, my view is if you talk to economists, they feel like there's not much has changed yet. You know, people are finding these tools useful, certainly in certain domains- - [Steven] Yeah. - Like, things like AlphaFold, many, many scientists are using it to accelerate their work. So it seems to be additive at the moment. We'll see what happens over the next five, 10 years. I think there's gonna be a lot of change with the jobs world, but I think as in the past, what generally tends to happen is new jobs are created that are actually better, that utilize these tools or new technologies, what happened with the internet, what happened with mobile? We'll see if it's different this time. - Yeah. - Obviously everyone always thinks this new one, will be different. And it may be, it will be, but I think for the next few years, it's most likely to be, you know, we'll have these incredible tools that supercharge our productivity, make us really useful for creative tools, and actually almost make us a little bit superhuman in some ways in what we're able to produce individually. So I think there's gonna be a kind of golden era, over the next period of what we're able to do. - Well, if AGI can do everything humans can do, then it would seem that they could do the new jobs too. - That's the next question about like, what AGI brings. But, you know, even if you have those capabilities, there's a lot of things I think we won't want to do with a machine. You know, I sometimes give this example of doctors and nurses. You know, maybe a doctor and what the doctor does and the diagnosis, you know, one could imagine that being helped by AI tool or even having an AI kind of doctor. On the other hand, like nursing, you know, I don't think you'd want a robot to do that. 
I think there's something about the human empathy aspect of that and the care, and so on, that's particularly humanistic. I think there's lots of examples like that but it's gonna be a different world for sure. - If you would talk to a graduate now, what advice would you give to keep working- - Yeah. - Through the course of a lifetime- - Yeah. - You know, in the age of AGI? - My view is, currently, and of course, this is changing all the time with the technology developing. But right now, you know, if you think of the next five, 10 years as being, the most productive people might be 10X more productive if they are native with these tools. So I think kids today, students today, my encouragement would be immerse yourself in these new systems, understand them. So I think it's still important to study STEM and programming and other things, so that you understand how they're built, maybe you can modify them yourself on top of the models that are available. There's lots of great open source models and so on. And then become, you know, incredible at things like fine-tuning, system prompting, you know, system instructions, all of these additional things that anyone can do. And really know how to get the most out of those tools, and do it for your research work, programming, and things that you are doing on your course. And then come out of that being incredible at utilizing those new tools for whatever it is you're going to do. - Let's look a little beyond the five and 10-year range. Tell me what you envision when you look at our future in 20 years, in 30 years, if this comes about, what's the world like when AGI is everywhere? - Well, if everything goes well, then we should be in an era of what I like to call sort of radical abundance. So, you know, AGI solves some of these key, what I sometimes call root node problems in the world facing society. 
So good examples would be curing diseases, much healthier, longer lifespans, finding new energy sources, you know, whether that's optimal batteries and better room temperature superconductors, fusion. And then if that all happens, then it should be a kind of era of maximum human flourishing where we travel to the stars and colonize the galaxy. You know, I think the beginning of that will happen in the next 20, 30 years if the next period goes well. - I'm a little skeptical of that. I think we have an unbelievable abundance now, but we don't distribute it, you know, fairly. - Yeah. - I think that we kind of know how to fix climate change, right? We don't need an AGI to tell us how to do it, yet we're not doing it. - I agree with that. I think we as a species, as a society, are not good at collaborating, and I think climate is a good example. But I think we are still operating, humans are still operating in a zero-sum game mentality. Because actually, the earth is quite finite, relative to the amount of people there are now in our cities. And I mean, this is why our natural habitats are being destroyed, and it's affecting wildlife and the climate and everything. - [Steven] Yeah. - And it's also partly 'cause people are not willing to accept it. We do know how to figure out climate, but it would require people to make sacrifices. - Yeah. - And people don't want to. But this radical abundance would be different. We would finally be in what would feel like a non-zero-sum game. - How will we get [indistinct] to that? Like, you talk about diseases- - Well, I gave you an example. - We have vaccines, and now some people think we shouldn't use it. - Let me give you a very simple example. - Sure. - Water access. This is gonna be a huge issue in the next 10, 20 years. It's already an issue. Countries in different, you know, poorer parts of the world, drier parts of the world, also obviously compounded by climate change. - [Steven] Yeah. 
- We have a solution to water access. It's desalination, it's easy. There's plenty of sea water. - Yeah. - Almost all countries have a coastline. But the problem is, it's salty water, and only very rich countries, some countries do do that, use desalination as a solution to their fresh water problem, because it costs a lot of energy. - Mm-hmm. - But if energy was essentially zero, if there was renewable, free, clean energy, right? Like fusion, suddenly, you solve the water access problem. Who controls a river, or what you do with it, becomes much less important than it is today. I think things like water access, you know, if you run forward 20 years, and there isn't a solution like that, could lead to all sorts of conflicts, probably that's the way it's trending- - Mm-hmm, right. - Especially if you include further climate change. - So- - And there's many, many examples like that. You could create rocket fuel easily- - Mm-hmm. - Because you just separate seawater into hydrogen and oxygen. It's just energy again. - So you feel that these problems get solved by AGI, by AI, then we're going to, our outlook will change, and we will be- - That's what I hope. Yes, that's what I hope. But that's still a secondary part. So the AGI will give us the radical abundance capability, technically, like the water access. - Yeah. - I then hope, and this is where I think we need some great philosophers or social scientists to be involved, that should hopefully shift our mindset as a society to non-zero-sum. You know, there's still the issue of do you divide even the radical abundance fairly, right? Of course, that's what should happen. But I think that's much more likely once people start feeling and understanding that there is this almost limitless supply of raw materials and energy and things like that. - Do you think that driving this innovation by profit-making companies is the right way to go? 
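Hassabis's point that desalination is mainly an energy problem can be put in rough numbers. The figures below are illustrative assumptions, not from the interview: modern reverse-osmosis plants are commonly cited at around 3-4 kWh of electricity per cubic metre of fresh water.

```python
# Back-of-envelope: energy share of the cost of desalinated water.
# Assumed figure (illustrative): reverse osmosis at ~3.5 kWh per m^3.
ENERGY_PER_M3_KWH = 3.5

def energy_cost_per_m3(price_per_kwh: float) -> float:
    # Energy cost of producing one cubic metre of fresh water.
    return ENERGY_PER_M3_KWH * price_per_kwh

today = energy_cost_per_m3(0.15)  # grid power at an assumed $0.15/kWh
cheap = energy_cost_per_m3(0.01)  # near-free clean energy at $0.01/kWh
print(f"${today:.3f} vs ${cheap:.3f} per cubic metre")  # $0.525 vs $0.035
```

Under these assumptions the energy component drops by an order of magnitude as the power price falls, which is the mechanism behind the claim that abundant cheap energy would make water access "much less important" as a source of conflict.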
We're most likely to reach that optimistic high point through that? - I think it's the current capitalism or, you know, is the current or the western sort of democratic kind of systems, have so far been proven to be sort of the best drivers of progress. - Mm-hmm. - So I think that's true. My view is that once you get to that sort of stage of radical abundance and post-AGI, I think economics starts changing, even the notion of value and money. And so again, I think we need, I'm not sure why economists are not working harder on this if maybe they don't believe it's that close, right? But if they really did that, like the AGI scientists do, then I think there's a lot of economic new economic theory that's required. - You know, one final thing, I actually agree with you that this is so significant and is gonna have a huge impact. But when I write about it, I always get a lot of response from people who are really angry already about artificial intelligence and what's happening. Have you tasted that? Have you gotten that pushback and anger by a lot of people? It's almost like the industrial revolution people- - Yeah. - Fighting back. - I mean, I think that anytime there's, I haven't personally seen a lot of that, but obviously, I've read and heard a lot about, and it's very understandable. That's all that's happened many times. As you say, industrial revolution, when there's big change, a big revolution. - [Steven] Yeah. - And I think this will be at least as big as the industrial revolution, probably a lot bigger. That's surprising, there's unknowns, it's scary, things will change. But on the other hand, when I talk to people about the passion, the why I'm building AI- - Mm-hmm. - Which is to advance science and medicine- - Right. - And understanding of the world around us. And then I explain to people, you know, and I've demonstrated, it's not just talk. Here's AlphaFold, you know, Nobel Prize winning breakthrough, can help with medicine and drug discovery. 
Obviously, we're doing this with isomorphic now to extend it into drug discovery, and we can cure terrible diseases that might be afflicting your family. Suddenly, people are like, "Well, of course, we need that." - Right. - It'll be immoral not to have that if that's within our grasp. And the same with climate and energy. - Yeah. - You know, many of the big societal problems, it's not like you know, we know, we've talked about, there's many big challenges facing society today. And I often say I would be very worried about our future if I didn't know something as revolutionary as AI was coming down the line to help with those other challenges. Of course, it's also a challenge itself, right? But at least, it's one of these challenges that can actually help with the others if we get it right. - Well, I hope your optimism holds out and is justified. Thank you so much. - And I'll do my best. Thank you. [upbeat music]


Forbes, 37 minutes ago
How SAP Is Managing AI And Data To Meet ERP Customers Where They Are
SAP CEO Christian Klein opened SAP Sapphire 2025 by highlighting today's business uncertainty and emphasizing SAP's focus on helping customers adapt to new trade rules, regulations and technologies. The discussions at SAP's Sapphire 2025 event in Orlando were different from those in previous years — focused, grounded and more customer-centric. SAP's key message was clear: ERP transformation doesn't need to be disruptive, nor is it one-size-fits-all. This is so important — and welcome — because many customers are still operating in hybrid computing environments, managing legacy on-premises systems while also moving some functions to the cloud, and they're navigating complex change cycles. Instead of urging them to leap into the unknown, SAP presented a more modular path centered on embedded AI, flexible data platforms and tools built to meet organizations where they are. I think this pragmatic messaging is a smart approach for SAP, and it was backed up by the announcements from the company throughout the conference. (Note: SAP is an advisory client of my firm, Moor Insights & Strategy.) One of the core architectural shifts discussed was SAP's effort to unify its platform. This is realized through tighter integration of the Business Technology Platform, SAP Business Suite and the Business Data Cloud, which entered controlled general availability earlier this year. BDC, which I wrote about in an earlier Forbes piece, consolidates services including SAP Datasphere, HANA Cloud, SAP Analytics Cloud and BW/4HANA into a single managed environment. It supports both SAP and non-SAP data and is built to reduce fragmentation, simplify access and support analytics, AI models and simulations without data duplication. BDC also includes extended support for older SAP BW systems, offering customers a bridge to modern cloud analytics with less disruption. 
Meanwhile, the Business Technology Platform (which you'll hear the company call BTP) continues to serve as SAP's foundation for extensibility and automation. On top of that, SAP Build — a tool for creating apps with little to no coding — now includes AI features to help generate code, design user interfaces and automate business logic. These improvements should help both technical and business teams build applications more efficiently and manage workflows with less effort. Integrating Joule — the company's generative AI assistant — across SAP Build, Analytics Cloud and key business applications reflects SAP's intention to make AI a daily utility, not a separate layer or some special extra feature. Among other functions, Joule can now generate and automate processes, surface contextual insights, launch prebuilt AI agents tailored to specific functions, answer natural-language questions and recommend actions based on real-time business data. SAP's AI assistant, Joule, helps orchestrate processes across key business areas such as finance, supply chain, HR and customer experience. SAP's AI strategy is now rooted in an AI-first approach, with AI embedded across the portfolio, and its updated platform reflects this shift. At the center of this is the 'Business AI flywheel,' SAP's framework for linking applications, real-time data and AI — including agents — to support continuous improvement. This 'flywheel' concept includes the Business Data Cloud and Joule. Indeed, Joule plays a central role in this strategy. It's no longer just a task-based assistant — it's becoming an interface that works across products. With integrations for WalkMe (which SAP acquired in 2024) for in-app guidance and Perplexity AI for contextual search, Joule can provide real-time support based on company data. At Sapphire 2025, SAP also introduced AI Foundation, a centralized environment for building, managing and deploying AI agents. 
To keep those agents working properly, tools like Joule Studio and governance features powered by SAP LeanIX allow organizations to track how AI agents align with business capabilities. Looking ahead, SAP plans to embed AI into 400 business use cases by the end of 2025, reflecting its commitment to making AI part of the everyday experience rather than a standalone function. At the conference, SAP also introduced new intelligent applications built on the Business Data Cloud. These apps address specific needs — People Intelligence for workforce planning, Green Ledger for sustainability reporting, Spend Control Tower for managing procurement and supplier risk, 360 Customer for enhancing customer insights and engagement and the Sustainability Tower for tracking and improving ESG performance. Rather than offering broad, unfocused capabilities, each of these apps is designed to use AI and simulation to support targeted business scenarios. Support for ERP transformation projects remains a priority. SAP has repositioned its RISE with SAP and GROW with SAP programs to reflect the distinct needs of existing and new ERP customers. RISE with SAP is a comprehensive transformation framework for current on-premises SAP ERP customers that are moving to S/4HANA in the cloud. Meanwhile, GROW with SAP focuses on net-new customers adopting SAP cloud-based ERP and includes community-based support and best practices. Both programs are backed by SAP's Integrated Toolchain, which enables architectural modeling, scenario simulation, governance and user adoption planning. The Business Transformation Center, which comes with SAP support licenses, is another potentially helpful addition. BTC helps customers move their systems step by step, archiving old ones. This is a big deal for customers who are hesitant to make significant changes. SAP Build has also been improved to support these transformation projects with low-code and pro-code extensions powered by embedded AI. 
SCM was one of the more practical focus areas at the event. SAP showed how AI agents help with tasks like demand forecasting, supply chain planning and spotting issues in logistics and operations. Some customers shared early results, saying they've seen better visibility, faster cycle times and improved compliance, especially as they deal with today's shifting trade rules and global supply chain uncertainty. SAP connected this to the idea of Industry 5.0, where automation and AI still leave room for human judgment, accountability and transparency. That message seemed to land especially well with customers in healthcare, manufacturing and the public sector, where AI explainability makes a big difference. SAP also highlighted its growing partner ecosystem, which continues to expand the company's AI and data capabilities. Partners include Google Cloud for machine learning and analytics, Microsoft for productivity tools and infrastructure and AWS for industry-specific AI use cases. Accenture is supporting pre-configured cloud solutions to speed up deployment. Palantir contributes to operational modeling, while Cohere, Mistral AI and Deloitte's Zora AI focus on bringing scalable language models into SAP's environment. As touched on earlier, the partnership with Perplexity AI adds real-time, context-aware search directly into Joule. Databricks — already integrated with SAP's Business Data Cloud through a special partnership — is helping accelerate AI model development. Syniti is working with SAP to address data quality and data readiness, which is a key hurdle for many organizations. To its credit, SAP did not downplay the ongoing hurdles that its customers face. At the event, different customers expressed concern over pricing clarity, the complexity of transitioning to cloud deployments, the delayed availability of key features like full BDC rollout and Joule agent capabilities, and the challenge of mapping all the new tools to practical use cases. 
Many enterprises also still face foundational issues such as data fragmentation, siloed processes and limited organizational capacity for change. While SAP's tools are clearly improving, customers still need stronger enablement measures and more tailored roadmaps to act with confidence.

With this in mind, I think SAP would benefit from focusing more on practical, outcome-driven roadmaps that show customers how new tools actually solve real business problems. It should make it easier to understand how features such as Joule and BDC fit into day-to-day workflows, not just how they fit conceptually. Customers also need more hands-on help, such as clear migration plans, industry-specific examples and partner workshops, to build confidence and move forward faster.

SAP Sapphire 2025 made it clear that SAP is focused on helping customers move forward without forcing big, disruptive changes. This year's updates were about making things easier to manage, with better integration across BTP, the SAP Business Suite and the Business Data Cloud. That kind of unification matters for customers trying to connect data, simplify their systems and get more value from what they already have. SAP also expanded its partner network in useful ways, giving customers access to more resources, whether that means help with cloud infrastructure, AI model development or real-time search. These are practical ways to expand what SAP can offer without trying to build everything in-house.

Still, I think customers have concerns. Many are cautious about moving to the cloud, and with good reason: data cleanup, change management, pricing clarity and keeping things running during the transition are all real challenges. SAP's tools, such as the BTC and the reworked RISE with SAP and GROW with SAP programs, are built to help with this, but organizations want clear guidance too. In the end, SAP's message was that transformation doesn't have to mean tearing everything out and starting over.
Most customers aren't looking for dramatic change; they want progress they can manage. SAP is starting to reflect that more in its products and messaging, and the shift is noticeable. For the ERP world, it's a reminder that the best path forward might not be the fastest, but the one that actually fits.
Yahoo
38 minutes ago
AI-Media Redefines Global Accessibility with LEXI Voice at InfoComm 2025
Booth #5389 + AVIXA TV Studio: Experience Real-Time Multilingual Translation at Scale

NEW YORK, June 06, 2025 (GLOBE NEWSWIRE) -- AI-Media (ASX: AIM), the global leader in AI-powered language solutions, will debut its game-changing LEXI VOICE platform at InfoComm 2025, setting a new benchmark for live, multilingual accessibility. Attendees can visit Booth #5389 or tune in via AVIXA TV Studio to witness how LEXI VOICE instantly translates spoken content into natural-sounding audio across 100+ languages, redefining how the world connects.

Following its acclaimed launch at NAB Show 2025, LEXI VOICE combines ultra-accurate live captioning, AI-driven translation, and lifelike voice synthesis to deliver seamless, simultaneous multilingual output in real time. Whether powering global summits, live broadcasts, corporate town halls, or government briefings, LEXI VOICE equips content creators to transcend language barriers and scale inclusion without adding complexity.

"As AV and broadcast converge, LEXI VOICE stands out as a powerful growth engine - not just a compliance tool," said Tony Abrahams, CEO of AI-Media. "InfoComm is the perfect stage to show how our tech doesn't just translate - it transforms communication."

AVIXA TV Goes Trilingual - Powered by LEXI

AI-Media is proud to partner with AVIXA TV Studio (Booth #7861) to deliver the first-ever trilingual live broadcast in English, Spanish, and German. Powered by LEXI VOICE, LEXI TEXT, and LEXI TRANSLATE, this production uses a fully cloud-based workflow, in collaboration with AWS, Ross Video, and other partners, demonstrating how scalable, real-time accessibility is now achievable for any AV or broadcast event.

Discover the Full LEXI Suite at InfoComm 2025

At Booth #5389, explore the complete LEXI ecosystem, engineered for today's hybrid communication era:

LEXI VOICE – Real-time multilingual voice translation with lifelike audio output to engage audiences everywhere.
LEXI TEXT – Low-latency, high-accuracy AI captioning for live and hybrid events.
LEXI TRANSLATE – AI-powered caption translation to extend accessibility across global audiences.

Book a meeting onsite or online to see how LEXI can elevate your global communications strategy.

About AI-Media

Founded in Australia in 2003, AI-Media (ASX: AIM) is a global innovator in AI-powered captioning, translation, and live voice accessibility. With operations across 25+ countries, AI-Media delivers unmatched automation, scalability, and precision through its end-to-end ecosystem, including LEXI, iCap, Alta, Encoder Pro, and the LEXI Toolkit. Its newest breakthrough, LEXI VOICE, transforms how live content is delivered and consumed, turning accessibility into a strategic advantage for broadcasters, enterprises, and content producers worldwide.

A photo accompanying this announcement is available at

CONTACT: Media Contact Fiona Habben, Head of Global Marketing