
UK launches AI skills drive for workers and schoolchildren
London: The UK government is to team up with tech giants including Google, Microsoft and Amazon to train 7.5 million workers in AI skills, Prime Minister Keir Starmer is to announce Monday.
Starmer is also set to unveil £187 million ($253 million) in funding to help develop tech abilities for one million secondary school students, as part of the government's "TechFirst" programme to bring AI learning into classrooms and communities.
"We are putting the power of AI into the hands of the next generation -- so they can shape the future, not be shaped by it," Starmer was to say, according to extracts released by his Downing Street office.
"This training programme will unlock opportunity in every classroom -- and lays the foundations for a new era of growth," he was to add.
The UK's AI sector is valued at £72 billion and is projected to exceed £800 billion by 2035. It is growing 30 times faster than the rest of the economy, employing over 64,000 people, according to government figures.
Alongside TechFirst, Starmer was also to announce a government-industry partnership to train 7.5 million workers, with tech giants committing to make training materials freely available to businesses over the next five years.
Training will focus on teaching workers to use chatbots and large language models to boost productivity.
Google EMEA President Debbie Weinstein called it a "crucial initiative" essential for developing AI skills, unlocking AI-powered growth "and cementing the UK's position as an AI leader".
The government was also to sign two Memorandums of Understanding with semiconductor firm NVIDIA, "supporting the development of a nationwide AI talent pipeline", according to the UK government.

Related Articles


Hans India
an hour ago
iOS 26 Brings Adaptive Power Mode to Boost iPhone Battery Life
Apple is stepping up its battery optimization game with the introduction of a new "Adaptive Power" mode in iOS 26, currently available in the developer beta. The feature is designed to help your iPhone battery last longer by making subtle performance adjustments during use. As detailed in the beta, Adaptive Power works by reducing screen brightness and allowing certain tasks to take slightly longer, small but meaningful tweaks that help conserve energy.

Apple says the system may also automatically activate Low Power Mode when your device dips below 20 percent battery, limiting background processes to preserve power.

The new feature is generating buzz, particularly after Bloomberg's Mark Gurman revealed last month that Apple has been exploring AI-based battery optimization tools. According to Gurman, the technology is expected to use "battery data it has collected from users' devices" to determine which apps can afford reduced power consumption without affecting user experience. This approach resembles Google's Adaptive Battery for Android, which leverages AI to minimize background activity for infrequently used apps.

Users testing the iOS 26 developer beta can find the Adaptive Power toggle in the Battery > Power Mode section of the Settings menu, where it lives alongside the existing Low Power Mode. Apple plans to open the beta to more users next month, with a full public release slated for this fall.

iOS 26 also debuts a fresh "Liquid Glass" design and packs new updates across the Camera, Phone, Safari, and Messages apps. Notably, it includes Apple Intelligence-powered live translation for text messages and phone calls, setting the stage for a smarter, more energy-efficient iPhone experience.
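For readers curious how a mode like this might make its decisions, here is a minimal, purely illustrative Python sketch of an adaptive-power heuristic. Apple has not published Adaptive Power's internals, so the thresholds, the predicted-runtime input and the function names below are hypothetical; only the automatic Low Power Mode trigger at 20 percent mirrors what the report describes.

```python
# Illustrative sketch of an "adaptive power" heuristic (not Apple's actual code).
# Assumed inputs: current battery percentage and a predicted hours-to-empty estimate.

from dataclasses import dataclass

@dataclass
class PowerAdjustments:
    dim_screen: bool = False             # slightly reduce brightness
    defer_background_work: bool = False  # let non-urgent tasks take longer
    low_power_mode: bool = False         # aggressive savings below a threshold

def adaptive_power(battery_pct: float, predicted_hours_left: float) -> PowerAdjustments:
    """Pick progressively stronger savings as the battery outlook worsens."""
    adj = PowerAdjustments()
    if predicted_hours_left < 4:   # hypothetical threshold for "day won't last"
        adj.dim_screen = True
        adj.defer_background_work = True
    if battery_pct <= 20:          # mirrors the reported auto Low Power Mode trigger
        adj.low_power_mode = True
    return adj

print(adaptive_power(battery_pct=18, predicted_hours_left=2.5))
```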


Indian Express
an hour ago
As AI lag overshadows its 'Liquid Glass' design updates, is Apple headed the Nokia way?
Apple just announced iOS 26, the biggest redesign of its operating system in years. After Apple belatedly stumbled out of the starting blocks in Big Tech's decisive race to leverage the promise of artificial intelligence, the question was whether it would redeem itself at its annual Worldwide Developers Conference, a pre-summer rite that draws developers in droves to the company's Cupertino headquarters in California. Well, the answer may be a resounding no. Beyond some snazzy 'Liquid Glass' design updates and some whittling around the edges of the underlying operating system, there doesn't seem to be a truly remarkable breakthrough at WWDC25. The promise of a smarter and more versatile version of its virtual assistant, Siri, continues to be just that: a promise. Incremental steps notwithstanding, analysts point to the potential mistake of Apple persisting with attempts to build on Siri rather than starting from scratch, as some other AI companies have done.

In 2023, Apple unveiled a mixed-reality headset that has been little more than a niche product, and last year's WWDC heralded its first major foray into the AI space with a range of new software features accompanied by the promise of a more versatile and smarter Siri. This was all part of what the company called Apple Intelligence. Apple's market value surged by more than $200 billion the following day, one of the biggest single-day gains of any company in American history. The showing at this year's WWDC is clearly more underwhelming, renewing analyst calls that Apple might have been better served starting over from scratch rather than merely attempting to improve Siri. One way to do that could be to invest in companies such as Perplexity, much as Microsoft did early on with OpenAI; Microsoft is now reaping the benefits of integrating the ChatGPT-driven Copilot into its products. Apple Intelligence is, for instance, not a patch on voice-activated AI assistants such as Google's Gemini.

Despite being one of the early movers into in-house chip design, and given that the company has the resources to spend on R&D, Apple is seen as continuously falling behind in the software pivot. So much so that comparisons are being drawn to Finnish telecoms major Nokia, a market leader in handsets that was disrupted by Apple itself in the late 2000s. To be fair, Apple has acknowledged that its hardware bestseller, the smartphone, could be a thing of the past in less than a decade. Apple's rivals have also been faster off the block to explore new use cases, with both Google and Meta betting on AI-infused smart glasses, alongside Chinese competitors including Xiaomi and Baidu. OpenAI, the developer of ChatGPT in which Microsoft has a stake, has meanwhile announced a software-to-hardware pivot with a recent $6.4 billion deal to buy a firm created by Jony Ive, who was Apple's chief designer for more than 25 years, to build an AI device. While Apple has a product of the future in its Vision Pro headset, it is still a big, clunky device compared with the new Meta glasses. Now that Ive is working with OpenAI, the collaboration could include wearables, meaning Apple could have another big problem on its hands. At the same time, though, Apple still has a billion phones out there, and most of the world's premium users to boot.

Then again, Apple's unwillingness to hoover up customers' individual information, however creditable that might be from a privacy point of view, makes it harder for the company to train personalised AI models. As part of its 'differential privacy' policy, Apple uses collective insights rather than the granular data scraped up by companies such as Google. According to The Economist, privacy has also encouraged Apple to prioritise AI that runs on its own devices rather than investing in cloud infrastructure, even as chatbots have advanced more rapidly in the cloud, where models can be much bigger. The result is that Apple has had to offer some users of Apple Intelligence an opt-in to ChatGPT, clearly a compromise of sorts. Apple's struggle on the AI front is also being compared with its previous shortcomings: the Apple television project and the Apple car, both of which never materialised despite years of behind-the-scenes work.

Not that iOS 26 is all fluff. The big change this year is the user interface redesign. 'Liquid Glass', in Applespeak, is a new translucent interface that does make the OS look sleek, and app icons can be customised with a glass look. Popular apps such as Safari and the Camera have also been redesigned to make the screen look and feel bigger, and CarPlay has been tweaked. The other big change is in phone calls. There is now an automatic call screening facility that answers a call from an unknown number and prompts the caller to say who they are; only once the caller shares their name and the reason for the call does the phone ring. Then there is hold assist, which detects hold music, sits on hold for the user and alerts them only when a human comes on the line. But in all of this, Apple is merely playing catch-up to Google and Samsung. And that really is the problem for the Silicon Valley-based electronics major.

Anil Sasi is National Business Editor with The Indian Express and writes on business and finance issues. He has worked with The Hindu Business Line and Business Standard and is an alumnus of Delhi University.
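As a side note on the 'differential privacy' approach mentioned in the piece above, the idea is easiest to see with a toy sketch. The Python example below uses randomized response, a textbook local-differential-privacy mechanism, to show how a population-level rate can be estimated from deliberately noised individual reports; the probability setting and the simulated survey are illustrative assumptions, not Apple's actual parameters or code.

```python
# Generic illustration of local differential privacy via randomized response:
# collective insights are recovered without trusting any single user's raw answer.

import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth, otherwise a fair coin flip."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    """Correct for the injected noise to recover the population-level rate."""
    observed = sum(reports) / len(reports)
    # observed = p_truth * true_rate + (1 - p_truth) * 0.5  ->  solve for true_rate
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100,000 users, 30% of whom genuinely use some feature.
reports = [randomized_response(random.random() < 0.30) for _ in range(100_000)]
print(round(estimate_true_rate(reports), 3))  # close to 0.30, yet each report is deniable
```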


Hindustan Times
an hour ago
Google Pixel 10 could get a big display upgrade thanks to this old Pixel 4 tech: Report
The Google Pixel 10 series is expected to revive the Ambient EQ feature that first debuted on a Pixel phone with the Pixel 4 and Pixel 4 XL, Android Authority has reported. For the uninitiated, it is a feature that let the Pixel 4 adjust the white balance of its display according to the ambient light, similar to how Apple does it with True Tone. It relied on dedicated hardware on the Pixel 4, but Google dropped it with the Pixel 5, and no Pixel phone up to the Pixel 9 has had it since. The one exception is the Pixel Tablet, which debuted the feature under a different name, 'Adaptive Tone'.

The report, citing a source, states that Google is bringing the feature back with the Pixel 10 series, where it could be renamed Adaptive Tone, as on the Pixel Tablet. Google says it can dynamically adapt the display to warmer or cooler tones based on your ambient lighting. This will reportedly be facilitated by a new ambient light and colour sensor, the AMS TMD3743, and will notably be present on all non-foldable Pixel 10 devices, Android Authority says.

Our take: Having dedicated hardware to handle the display's white balance could be an interesting addition, especially given how much it helps reduce eye strain and make colours look right for your lighting conditions. It should improve the overall viewing experience.

The Pixel launch is not far away. Last year, the Pixel 9 series arrived in August under Google's new release schedule, and this year is expected to be broadly the same, as per reports. Google is also expected to release the full stable version of Android 16, and as per a new exposé by Android developers, it wouldn't be out of the ordinary to expect the Pixel 10 series to follow soon after, sometime in August. As for what to expect, leaks and renders suggest the Pixel 10 series could resemble last year's Pixel 9 design aesthetic, though with the base Pixel 10, Google could be looking at several upgrades, including finally adding a telephoto camera to the vanilla model.
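To make the white-balance idea concrete, here is a small, purely illustrative Python sketch of how a display white point might be nudged toward the ambient colour temperature reported by a light/colour sensor. The blend strength, the clamping range and the function itself are hypothetical assumptions; neither Google nor AMS has published how Adaptive Tone actually maps sensor readings to display output.

```python
# Illustrative sketch of ambient-light white-balance adaptation ("Adaptive Tone" style).
# The real feature reportedly relies on a dedicated ambient light/colour sensor
# (AMS TMD3743); the numbers below are placeholders for the idea, not the product.

def adapt_white_point(ambient_cct_k: float,
                      neutral_cct_k: float = 6500.0,
                      strength: float = 0.4) -> float:
    """Shift the display white point part-way toward the ambient colour temperature."""
    # Clamp the sensor estimate to a plausible indoor/outdoor range (in kelvin).
    ambient_cct_k = max(2700.0, min(ambient_cct_k, 7500.0))
    # Blend: 0.0 keeps the factory white point, 1.0 matches the room exactly.
    return neutral_cct_k + strength * (ambient_cct_k - neutral_cct_k)

print(adapt_white_point(3000.0))  # warm incandescent room -> roughly 5100 K target
```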