
Philippine Government and Sutherland Launch AI Academy to Equip Filipinos with Future-Ready Skills
The AI Academy will offer practical, industry-aligned training designed to equip Filipino professionals with skills to integrate artificial intelligence into their work. It aims to strengthen the country's talent pool by developing capabilities that are increasingly in demand across sectors, whether as AI specialists, prompt engineers, or cybersecurity professionals. The program will prepare participants to harness AI in driving productivity, advancing innovation, and pursuing high-value opportunities across industries.
'This initiative is a vital step toward our goal of building a digitally resilient and inclusive workforce,' said President Ferdinand R. Marcos Jr. 'By expanding access to training in future-ready skills, we are empowering our countrymen to take part in – and benefit from – an economy increasingly shaped by AI.'
Dilip Vellodi, Chairman & CEO, Sutherland, said, 'This initiative represents a powerful partnership rooted in a shared goal: getting the Philippines' workforce ready for the future of work. Sutherland and the Government of the Philippines are working together to build a lasting ecosystem for digital skills and innovation—one that will support industries, uplift communities, and boost the economy.'
Department of Information and Communications Technology (DICT) Secretary Henry Rhoel Aguda likewise underscored the importance of the partnership, emphasizing its role in advancing the country's digital transformation agenda and ensuring that Filipinos are well-positioned to lead in an increasingly AI-driven global economy.
'AI and automation are going to transform jobs in every industry over the next decade. That's why our partnership with Sutherland is all about giving Filipinos the skills, knowledge, and confidence to stay competitive. Together, we're building a stronger foundation for a future where AI fluency isn't just an advantage—it's a necessity for 85% of jobs,' Secretary Aguda said.
A National Blueprint for AI-Driven Growth
Sutherland will offer hands-on training for critical sectors such as fintech, healthcare, and emerging digital industries, leveraging its deep expertise in AI, analytics, automation, and cloud technologies.
The AI Academy is supported by the Philippine Government, local educational institutions, and provincial governments such as Camarines Sur and Tarlac. Its mission is to make AI skills accessible to all, encourage strong public-private collaboration, and drive long-term economic growth by building a future-ready workforce.
This partnership is envisioned to strengthen the Philippines' position as a leader in AI-driven workforce development, while reinforcing Sutherland's commitment to delivering Digital Outcomes that transform industries, strengthen economies, and improve lives.
About Sutherland
Artificial Intelligence. Automation. Cloud Engineering. Advanced Analytics.
For Enterprises, these are key factors of success. For us, they're our core expertise.
We work with global iconic brands. We bring them a unique value proposition through market-leading technologies and business process excellence. At the heart of it all is Digital Engineering – the foundation that powers rapid innovation and scalable business transformation.
We've created over 200 unique inventions under several patents across AI and other emerging technologies. Leveraging our advanced products and platforms, we drive digital transformation at scale, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless 'as-a-service' model.
For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. With proven strategies and agile execution, we don't just enable change – we engineer digital outcomes.
Related Articles
Yahoo
CoreWeave earnings: 4 things Wall Street wants from the print
CoreWeave (CRWV) reports second quarter results after Tuesday's closing bell. Yahoo Finance Executive Editor Brian Sozzi outlines what investors need to see in the earnings print to send the stock higher, Yahoo Finance Senior Reporters Brooke DiPalma and Ines Ferré take a closer look at recent price action, and Bianco Research president Jim Bianco discusses the artificial intelligence (AI) space and initial public offering (IPO) market. To watch more expert insights and analysis on the latest market action, check out more Opening Bid.

Let's turn now to our Stock of the Day. Today, it's all about CoreWeave. The momentum AI trade CoreWeave will report earnings after the close today. The stock has increased almost four times from its late March IPO price. Here's what I think the Street is looking for from CoreWeave. If it can hit these marks, the stock could renew the upward bias that has been on hold since early June. One, revenue has to beat estimates by a double-digit percentage. Key CoreWeave customer Microsoft had a huge quarter on the AI use front, so CoreWeave should have theoretically benefited. Two, backlog needs to have increased compared to the first quarter. Three, earnings need to thump estimates. I've heard some concerns about how CoreWeave's earnings come in, or could come in, given robust investments in capex. And four, management tamps down concerns about how much stock could be sold by insiders later this week. CoreWeave has an expiring IPO lockup that frees up 83% of Class A shares beginning the morning of August 15th, points out Citi. If a lot of CoreWeave stock is dumped, it could weigh on the market price in the near term. CoreWeave's valuation leaves no margin for error. The stock trades at big-time premiums to the broader market. Still with me is my roundtable: Jim Bianco, Bianco Research president, and Yahoo Finance senior reporters Brooke DiPalma and Ines Ferré. Brooke, I want to go over to you because you've been examining how CoreWeave essentially ties in with Microsoft.

Yeah, absolutely. What we do know is that Microsoft is one of CoreWeave's biggest customers. And if you take a look, not only is Microsoft one of their biggest customers, but we also know that IBM and Meta are additional companies that CoreWeave works with to supply servers for these big AI semiconductors. And so, it's important to note too that NVIDIA also is a partner with CoreWeave. We know that CoreWeave rents out computer servers equipped by NVIDIA, and NVIDIA also has a stake within the company. And so, it's been interesting to watch this stock movement over the past month. Right now, we've seen CoreWeave jump about 8% as these companies have reported their earnings and reported that their cloud businesses have outperformed expectations. NVIDIA is up about 10% over the last month. But as you said, this stock is up more than 240% since the IPO, and so definitely trading at a premium here. But what we do know, based on its S-1 filings, is that these are key players in the game right now when it comes to the AI revolution, and so CoreWeave is certainly benefiting from having a part in that.

And as always, I get a little nervous ahead of an earnings report where so many things appear to be going right. Sentiment on the AI trade is good. Microsoft performed well. Google crushed it. Uh, the stage theoretically is set for CoreWeave, but I did call out some concern about how their earnings may come out because of investments they're making.

Yeah, that's right. So, that's one of the concerns that the Street has, and also that lockup period that you just mentioned, that could cause some volatility. But you do have Wall Street analysts, some of them saying, look, uh, the stock can absorb this. So, even if you do see selling after that lockup period expires this week, you would see the stock sort of absorbing this. As you mentioned, it has had quite a run, and it really underscores what the market has been leaning towards: the fact that it has AI to look forward to. So, what Wall Street has been talking about throughout this entire time, as we've seen these tariffs being rolled out and this uncertainty over what's happening at the Fed, is that they are expecting a cut, and they're expecting AI to move this market forward.

Uh, Jim, real quick over to you. Um, you know, CoreWeave has really been one of the hottest IPOs of this year. You have Figma and Firefly last week. What do you attribute some of these early gains for these IPOs to? You know, I wrote over the weekend in our Morning Brief newsletter that I think it reflects more mature companies coming to market, but I'd love your take.

Yeah, I think it also reflects the hope, and maybe the hype, of AI. Count me in among those who think it's going to be bigger than the internet itself was in the late '90s or early 2000s once it achieves its potential. But the big thing you've got to keep in mind with a company like CoreWeave or NVIDIA or the rest of them is that they're all customers of each other, because we're still at the picks-and-shovels play. We're still at the let's-make-AI play. We're not at the what is Ford doing with AI to change its bottom line, or what is Procter & Gamble doing with AI to change its bottom line, or what am I doing with AI to change my bottom line. We're using a little bit of it, like a lot of the rest of them, but it's not changing our bottom line. So, it reminds me of that old Bill Gates line that technology over the next three years will underwhelm what you think it's going to do, and over the next 10 years it's going to overwhelm what you think it's going to do. And if it underwhelms with these valuations and this hype, we might be seeing a little bit of disappointment somewhere down the road.

Jim, I wonder how Fed chair Bill Pulte would use AI. No, I'm just kidding. I'm just kidding, Jim. You know, just keeping the mood light on this Tuesday morning. All right, let's fire up our question of the day, friends.

He'd use it to cut rates to 1%.

Yeah, no, fair enough. See, I knew you would tie all this together, Jim. I appreciate you, man.


Forbes
Quality Clicks: Google Rebuts Its Critics Sans Data
Amid evidence of an online catastrophe for publishers both large and small, Google is officially challenging the narrative that its AI Overviews tool is leading people to click into search engine results pages less frequently. The rebuttal comes through a now-prominent blog post by Google VP and Head of Search Liz Reid, who is contending that regardless of the numbers, the addition of the AI blurb at the top of a search is leading to more 'quality clicks.' 'Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year,' Reid wrote Aug. 6. 'Additionally, average click quality has increased and we're actually sending slightly more quality clicks to websites than a year ago (by quality clicks, we mean those where users don't quickly click back — typically a signal that a user is interested in the website).'

Quality Time with Google

So the idea is that users will drill down deeper into things, spurred on by the introduction made by the AI tool. But this misses the point: it's the loss of aggregate traffic that hurts publishers and those trying to garner attention online, and since Google has had a practical monopoly on search for, well, decades, the impact of fewer clicks is big. But rather than leave it there, Reid added: 'This data is in contrast to third-party reports that inaccurately suggest dramatic declines in aggregate traffic — often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the roll out of AI features in Search.' The reader can assume that 'this data' refers to the first line, the organic click volume remaining 'relatively stable year-over-year,' and not to the rest of it. However, outside parties are specifically taking aim at the former claim, noting that Google does not provide data to counter studies like those by Pew that definitively show lower levels of click-through traffic.

With Their Own Eyes

In addition to Pew's research, critics of the Google response point to their own evidence. 'Do the hundreds of thousands of Google Search Console [GSC] screenshots showing impressions remaining flat (or increasing) this year, while clicks dramatically decline – since AI Overviews were rolled out more broadly – count as 'flawed methodologies' or 'isolated examples'?' writes Amsive Vice President of SEO Strategy & Research Lily Ray. 'Thousands of us are seeing it… but it must just be some big coincidence?' 'Gaslighting of the highest order,' adds Florentina Schinteie, SEO Strategist for In-House Teams and Former Head of SEO at DesignRush.

The Re-Skilling of the Web

Here's another bit from the above blog post: 'While overall traffic to sites is relatively stable, the web is vast, and user trends are shifting traffic to different sites, resulting in decreased traffic to some sites and increased traffic to others. People are increasingly seeking out and clicking on sites with forums, videos, podcasts, and posts where they can hear authentic voices and first-hand perspectives. People are also more likely to click into web content that helps them learn more — such as an in-depth review, an original post, a unique perspective or a thoughtful first-person analysis. Sites that meet these evolving user needs are benefiting from this shift and are generally seeing an increase in traffic.' This seems in some ways eerily similar to the arguments of big bosses bullish on AI in the job market. Old jobs, they admit, will go away, but new jobs, they contend, will also be created. So it's a wash.
Well… Workers will have to re-skill, then – what does that look like? The burden, you'd assume, would be on the workers themselves, which is convenient for whoever's moving the goalposts. The same concept is in play here. Reid suggests it's on the publishers to quickly add content … forums? Podcasts? But all of that aside, that first claim of the 'stability' of traffic is under fire. It's not just Pew, either. 'Research by AI search and SEO platform Authoritas, submitted as part of a legal complaint to the UK's Competition and Markets Authority, found that when an AI Overview is present, publishers are seeing a drop of 47.5% in per-query clickthrough rate on desktop, and 37.7% on mobile,' wrote Charlotte Tobitt Aug. 7 at PressGazette. 'Similarweb data found that among the top 100 news and media websites globally, the average rate of zero-click searches has gone from 50.5% to 52.7% in the past year. Among a wider dataset, zero-click news searches were said to have increased from 56% when AI Overviews were first launched in the US in May 2024 to almost 69% in May this year.' Publishers also report huge losses in traffic. Covering this last week, I cited this article in Columbia Journalism Review, which lays out some of these claims. For those who are skeptical about Reid's post, as a stand-in for a larger Google response: it's the numbers. The claim just doesn't seem credible, and no one from Google is coming up with any real proof.

Panda and Penguin

Another way to view Google's side of the issue, represented by Reid's arguments, is that the addition of AI Overviews is just like former Google algorithm changes, like Panda in 2011 and Penguin in 2012. These shifts were done with the stated goal of meeting user needs and driving positive change. Advertisers had to scramble. Digital marketers had to adapt by crafting content and sites to match the priorities of the 'new boss' dispensing Google's favor. Conceptually, the same is true here – but the shift is much bigger, and the paradigm changes a lot more. The result appears to be a showdown between Google (which, in classical Marxian parlance, owns the means of production) and publishers, who, absent some course correction, may be left out in the cold, with simple admonitions to get started podcasting.


Forbes
AWS Launches Open-Weight OpenAI Models, Eroding Microsoft Exclusivity
AWS made a splash last week when it announced day-of-launch availability of two new open-weight models from OpenAI for use on Amazon Bedrock and Amazon SageMaker. This means that AWS customers can now use the models via these managed services, especially for building new functionality with AI agents. The move also represents the latest development in the ongoing competition among the big cloud service providers to bring the best of enterprise AI to their customers, although this one feels a little different because of the traditionally tight relationship between Microsoft and OpenAI. The same day these new models launched, I had a chance to talk through this news with Shaown Nandi, who leads the technical deal team worldwide for AWS, and I want to share my takeaways from that conversation and my perspective on what it all means. (Note: AWS is an advisory client of my firm, Moor Insights & Strategy.)

What 'Open-Weight' Means And Why It's Important

Open-weight models have parameters that are visible to the people using them, though the underlying training data isn't visible like it would be in a fully open-source model. Still, having access to the parameters means that AWS customers can fine-tune the models for their specific use cases. The two models just launched — gpt-oss-120b and gpt-oss-20b — are the first open-weight models that OpenAI has released since GPT-2, way back in 2019. In terms of intelligence level, OpenAI says that they are positioned between GPT-3 and GPT-4. For AWS customers, these two new entrants join open-weight models from Meta, Mistral and other makers that are already supported on Bedrock. Customers will be able to run the new models, edit them and fine-tune them within the AWS toolset and infrastructure — without interacting with OpenAI directly. You can bet that many of the use cases already being set in motion by AWS users involve agentic workflows. The two new models are text-based, not multi-modal, which makes them well-suited for agentic use cases like browsing the web or navigating software. Nandi also assured me that the new models will have full access to the same Bedrock infrastructure capability as any other model. (As usual with a model introduction like this, AWS is launching it region by region, in this case starting with the U.S. West.)

How AWS And OpenAI Benefit From This Linkage

First, this helps AWS continue its long tradition of (trying to) offer the widest range of choice to its customers. In fact, I see that outlook as being baked into the Amazon ethos, going all the way back to the company's main web storefront. Nandi summarized his view of the AWS AI mindset when he told me, 'Offering customers choice is something we've been ultra-focused on, probably since we launched Bedrock back in 2023.' I talk with a lot of CIOs, CTOs and CEOs week in and week out — especially about their technology purchases and rationales — and I can confirm that these people want optionality, which AWS is definitely bringing with the new OpenAI models. When I shared this observation with Nandi, he pointed out that having the choice to work with any of the top AI models via AWS also provides 'air cover' for executives' AI decisions. Your board of directors wants to know that you're working with the right providers; when you can tell them that you're working with the biggest CSP using the biggest variety of open models, that's a fruitful path into a conversation about what you're enabling in terms of innovation and productivity. So what's in it for OpenAI?
Nandi can't speak for another company, of course, and OpenAI is well-known for not answering questions. But I can share an industrywide perspective that's grounded in just how quickly the enterprise AI environment is shifting. All of the model creators — independents like OpenAI and hyperscalers like AWS alike — can see how fast things are changing. New models are dropping all the time, and disruptive events such as the debut of DeepSeek at the start of this year force everyone to reconsider the best ways to build and train models efficiently and cost-effectively. OpenAI set off the AI gold rush when it launched ChatGPT late in 2022, but it's hardly the only game in town now. It has compelling technology, but there are enterprise customers that are tooled for AWS rather than Azure, and if those customers can't easily access OpenAI via AWS, they may turn to some other model provider. Conversely, being so readily available via Bedrock and SageMaker could benefit OpenAI in terms of building out its ecosystem, meeting customers where they already live — and simply moving fast.

How AWS Customers Stand To Benefit

The customers that are already getting under the hood of this thing are sure to be connecting existing applications to the new models to see how they perform. Naturally, they'll be looking for ways to improve performance and drive costs down. According to the press release accompanying the launch, AWS says that the new models 'are 10x more price-performant than the comparable Gemini model, 18x more than DeepSeek-R1 and 7x more than the comparable OpenAI o4 model.' I'd like to judge those numbers for myself against real-world field results; it's early days yet, but I'm sure I'll have more insights to validate or challenge these claims in the coming months. Setting aside the potential cost advantages, there are significant operational benefits I have no question about. First, being able to access OpenAI models through AWS tools you're already using means that you don't need to have a commercial agreement with OpenAI. Nothing against OpenAI, mind you — just that it's way simpler to call up one of these models in SageMaker or Bedrock and try it out for a proof-of-concept when you don't also have to go through a vendor-onboarding process or a set of engineering steps to tap directly into OpenAI's technical ecosystem. That also extends to your AI devs who are doing the actual work. They don't need to learn a new platform or test out how well it works with their existing tools, nor do they need to rebuild their applications. Rather, they can stay within their current tools to access the new models and get down to work. If I were the engineering leader running an AI shop already tooled out for AWS, I would welcome this. Nandi confirmed that his customers have been calling for this. Every month or two, they see new models being launched in the market that they're curious to try because they think it might save them money, improve latency or bring some other benefit. They like using Bedrock to try out new models — and to run existing models from Meta, DeepSeek, Amazon itself and so on. And yet, Nandi told me, ''You're missing OpenAI' — that's what they would say.' Now that gap has been addressed.

AWS, OpenAI And Microsoft

So what does this mean for Microsoft? The company is doing pretty well, to the tune of $101 billion in profit for the fiscal year that ended on June 30. As part of that success, AI has helped drive Azure revenue to $75 billion in the past year.
(AWS is above $100 billion annually.) Azure also provides a range of OpenAI models — including hot-off-the-presses GPT-5 variants — that goes well beyond the two open-weight models AWS just launched. That said, Microsoft's long-running relationship with OpenAI is complex, and at times it has been vexed. I don't want to read too much into the availability of a couple of slick new open-weight models via the biggest CSP AI platform in the world. Yet the lock-in that Microsoft enjoyed until last week for all things OpenAI was the first angle that popped into my head when I heard the AWS news. At the moment — and surely for some years to come — there is plenty of AI business to go around. As Nandi pointed out during our conversation, 'Agentic is super-early.' More than that, for all the enterprises building focused agents for different use cases, 'They're not looking for one general-purpose model for agents.' Rather, they want to find different models that supply the right price/performance for each use case. While they can do that with a single AI service provider — AWS, Azure, Google Cloud, Oracle, IBM — they probably won't find it with a single AI model provider. AWS will of course be adding features to support the new open-weight OpenAI models in the weeks and months to come. And there's no question these are nice additions to the AWS toolbox. Maybe six months or a year from now, Microsoft's loss of exclusivity with OpenAI in this instance won't seem like a big deal. But I do wonder whether there could be a scenario where we look back on this as the first chink in the Microsoft–OpenAI armor.
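To make the 'call it up in Bedrock and try it out' point concrete, here is a minimal, hypothetical Python sketch using the AWS SDK's Bedrock runtime Converse API. It is an illustration under assumptions, not an official example: the model identifier shown is assumed for the gpt-oss-120b model, and the real IDs and supported regions are the ones listed in the Bedrock model catalog for your account.

import boto3

# Connect to the Bedrock runtime in a region where the new models are offered
# (per the article, the rollout starts with the U.S. West).
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Hypothetical model identifier for illustration only; confirm the actual ID
# for the gpt-oss models in the Bedrock model catalog before running this.
MODEL_ID = "openai.gpt-oss-120b-1:0"

response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize why open-weight models matter for enterprises."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant reply under output -> message -> content.
print(response["output"]["message"]["content"][0]["text"])

The same call shape works for any Bedrock-hosted model, which is the optionality Nandi describes: trying a different model is roughly a one-line change to the identifier rather than a new vendor integration.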