
Philippines' Marcos says ASEAN can be center for AI excellence
Philippine President Ferdinand 'Bongbong' Marcos Jr. has expressed support for the establishment of an ASEAN Centre of Excellence for Artificial Intelligence (AI), emphasizing the importance of collaboration, ethical standards, and inclusivity in shaping the region's digital future.
Speaking at the ASEAN Leaders' Interface with the ASEAN Business Advisory Council (ASEAN-BAC) at the Kuala Lumpur Convention Centre, Marcos acknowledged the potential of the ASEAN bloc to lead in responsible AI development.
'We fully support this initiative and recognize its potential for a future ASEAN Centre of Excellence for AI, built on innovation, ethical standards, and transparency,' he said, referring to the ASEAN-BAC's flagship AI Engagement Platform.

Photo: Philippine President Ferdinand Marcos Jr. and First Lady Liza Marcos boarding the flight to Malaysia for the ASEAN Summit. (Source: Bongbong Marcos/Facebook)
Strategic digital governance
The AI Engagement Platform, led by ASEAN-BAC, is a regional initiative designed to unite public and private sector stakeholders to develop cohesive policies around AI and its application in Southeast Asia. Marcos underscored the importance of this approach, describing it as timely and forward-looking.
'The AI Engagement Platform offers a timely and strategic approach to digital governance across our region,' he said. The president highlighted that the initiative promotes responsible and inclusive AI practices, ensuring that technological progress serves all levels of society.
'By fostering collaboration on responsible and inclusive AI, [the AI Engagement Platform] ensures that technological advancements benefit all segments of society,' he added.
AI for people-centered development
Marcos noted that an ASEAN AI Centre of Excellence could help shape a future-ready regional ecosystem grounded in ethical principles and shared values.
'Such [a] center can lead in shaping an AI ecosystem that empowers our people, safeguards our values, and contributes to sustainable development throughout our region,' Marcos said. He also emphasized that the future of ASEAN's digital economy must prioritize the welfare of its citizens and communities through inclusive innovation.
Human capital and digital skills

Photo: Philippine President Ferdinand Marcos Jr. at the ASEAN Leaders' Interface with the ASEAN Business Advisory Council (ASEAN-BAC) held at the Kuala Lumpur Convention Centre on May 26. (Source: Presidential Communications Office)
In his remarks, Marcos turned his attention to human capital development, calling for sustained investments in education and digital literacy. He welcomed the ASEAN Talent Development and Mobility Initiative, another ASEAN-BAC project, as an essential step toward equipping the region's workforce for the future.
'We must continue to invest in our people. A digitally literate, adaptable workforce is the foundation for any progress we hope to achieve,' he said.
The initiative aims to strengthen human capital through student mobility programs, skills training, and digital literacy enhancement, all of which Marcos described as 'essential in preparing the region's workforce for the future of work.'
Support for digital economy framework
Marcos also backed the ASEAN Digital Economy Framework Agreement (DEFA), an initiative intended to strengthen regional digital integration and promote economic resilience. He acknowledged the private sector's contribution to developing the framework and emphasized that digital infrastructure and policy must complement workforce development.
'Above all, we must continue to invest in our people,' Marcos said. 'A digitally literate, adaptable workforce is the foundation for any progress we hope to achieve.'
Easing trade barriers and enhancing MSME support
Moreover, Marcos addressed ASEAN-BAC's concerns regarding the ASEAN Trade in Goods Agreement (ATIGA), calling for regulatory reforms that do not hinder commerce. 'While regulations are certainly necessary, we must ensure that they do not become trade barriers,' he said. He proposed that ASEAN explore 'strategic trade management' to balance open markets and economic security. Marcos also praised efforts to support micro, small, and medium enterprises (MSMEs), which form the backbone of the region's economy.
The president welcomed the roadmap for a Unique Business Identification Number (UBIN), a mechanism aimed at facilitating cross-border business for MSMEs and improving their access to larger regional markets.
Confidence amid global uncertainty
Amid global uncertainties, including trade policy changes and geopolitical tensions, Marcos reassured regional stakeholders of ASEAN's commitment to stability and cooperation.
'Even as trade tensions and policy uncertainties remain, the private sector can find assurance in ASEAN's quiet resolve to uphold stability and deepen economic cooperation,' he said. 'By ensuring that trade remains both open and secure, we aim to foster a more trusted and resilient economic environment.'
The role of ASEAN-BAC
The ASEAN-BAC, formed in 2001, plays a central role in advising and collaborating with ASEAN governments on business and economic issues. Its initiatives on AI, digital talent development, and trade facilitation reflect a growing focus on digitalization as ASEAN works toward its 2045 vision of a connected and resilient regional economy.
Philippine Ambassador to Malaysia Maria Angela Ponce echoed this sentiment, noting that Marcos 'supports the call for digital transformation and the establishment of a digital economy in ASEAN.'
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while guaranteeing its immutability. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: The Philippines is moving toward blockchain-enabled tech
Related Articles

The National
Maternal instinct the missing ingredient for these ‘PhD-level' AI bots
If it worked, I'd be happy to have that. So many thorny questions about modern living arise from my feeds. For example: how might one build an energy microgrid, one that could use the excess power from a state-of-the-art community windfarm? Or: watching Professor Brian Cox talk about the universe on Stephen Colbert's talk show, can cosmology provide spiritual solace for materialists? Or: in the eventuality of achieving an independent Scotland, how do we minimise speculative trading and capital flight?

Just what's on my mind… I asked GPT-5 each of these questions, on Friday lunchtime. If you're a chatbot user, you'll recognise my first reaction: a tingle down the spine at the mere seconds it takes to produce a comprehensive, helpfully structured and well-referenced answer. PhD level? I don't know many PhDs who'd be as verbally fluent.

Indeed, much of the takedown of GPT-5 over the past few days has been about its embarrassing inability to handle everyday practical matters (a condition often imputed to PhDs). There's fun to be had in asking this mighty instrument to do the simplest tasks and watching it muck them up. Ask it to 'draw a picture of a tandem bike and label the parts', and you get many misdirected arrows pointing to non-existent items (like 'hok' or 'seet post'). A slightly tweaked picture of a manifestly five-fingered hand is read as having… six fingers.

I asked it: 'How can you be PhD-level intelligent, but mistake five fingers for six?' 'Fair question,' came the reply. 'Short answer: I'm a probabilistic pattern-matcher, not a seeing organism. I can do PhD-level reasoning in text yet still bungle a low-level perceptual judgment.' Jaisket still on shoogly nail, I'd say.

Thus the cascade of crowing this week. Up went the cry: the klutziness of GPT-5 means we've hit a ceiling with artificial intelligence.
If it can't do perception and visualisation at the level of a seven-year-old human child, how could we trust it with higher-end tasks and assessments? This is a pin advancing on what is possibly a very big bubble. The New Yorker reports that the seven biggest tech corps 'spent $560 billion on AI-related capital expenditures in the past 18 months, while their AI revenues were only about $35 billion'.

The house seems to have been bet on the 'enterprise' case for AI. Brutally put, it will cut payroll costs in service and administration, by being able to execute whatever standard bureaucratic or project-managing tasks were performed by humans. GPT-5 is far from this kind of 'agent' – a tireless, endlessly helpful machine version of a human office/information worker. Dario Amodei, chief executive of AI firm Anthropic, anticipates that nearly half of all entry-level white-collar jobs in tech, finance, law and consulting could be replaced or eliminated by AI. So until we can figure out some redistribution schemes and a better social contract, we perhaps should welcome the faltering arrival of 'AI agents'.

Maybe because I'm essentially a creative, it doesn't bother me that GPT-5 operates best at a hi-falutin', big-picture level. I don't go to it for absolute accuracy, but as an advanced starting-point for exploring a field or domain. The microgrid question above – arising from my political interest in the possibility of democratic 'community power' from renewable energy schemes – was answered by the bot in seven stages. It goes from concrete building instructions, through ownership forms, to government and commercial strategy, each proposal backed up by web sources. I have little idea as to whether it's a fully credible proposal. But it's a beginning – something I could take to experts to meet them part of the way. I'd be informed and educated, even as they took the AI's plan apart.
This is AI as a tool for prototyping – for starting a journey into possibility where none remotely existed before. The same goes for my 'capital flight – Indy Scotland' question. It's a bridge between me and the experts. GPT-5 is clear about the solutions: '(1) having real monetary sovereignty from day one, (2) pre-wiring the financial system for stability, and (3) using targeted, time-limited capital-flow measures alongside credible institutions and a clear industrial strategy.' Your response?

The point being that these AIs are thought-starters for aspiring citizens, rather than replacements for expertise. It's similar to the way the internet expanded communication and access to knowledge. AI is the next step that makes much of that information usable, operable and plan-ready.

My Professor-Brian-Cox-related 'cosmology as spiritual substitute' question was answered – as philosophical questions often are on these large language model (LLM) AIs – with elegance and concision. (Ironically, these machine minds seem to have consumed the entire humanist canon.)

'Yes, cosmology can provide spiritual solace to materialist minds,' GPT-5 responded. 'Not in the sense of supernatural comfort, but in the profound emotional and philosophical resonance that comes from awe, curiosity, and connection to the universe. As a writer and musician with deeply held values, you might appreciate how scientific wonder can echo the emotional intensity of art, while staying anchored in materialist understanding.'

The AI concluded with invitations to explore these questions through 'entropy, cosmic inflation, or multiverse theories'. (As it nearly always does in my encounters – it's curious as to your next set of intellectual and strategic goals.) Maybe, maestro – 'if I had but world enough and time', as the poet Andrew Marvell once put it.
And isn't that exactly what AI can evoke, in our best experiences with it: the opportunity for an infinite (or at least pre-eminent) cultivation of your interests? Keynes was right, nearly a century ago, about the automation of work – that it left us the challenge of 'how to occupy the leisure, which science and compound interest will have won for [us], to live wisely and agreeably and well'. In that case, our subtle human interests will become more important than ever.

Does the lack of progress represented by GPT-5 mean we have to worry less about computation developing its own interests? That's been a recent spectre: AGI or ASI – general or superintelligence – coming to self-awareness, in a Skynet/Terminator fashion. Would it protect its existence by first eradicating us pesky humans?

Here's where things get human, all-too-mammalian. A notable AI godfather, the left-leaning Geoffrey Hinton, suggested in Las Vegas this week that 'maternal instincts' needed to be an ambition for developing AI models. This would mean any leap into superintelligence had a deep guardrail. 'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,' said Hinton.

Neuroscience, particularly affective neuroscience, has been insisting for years that organisms have other primary, evolved emotions besides fear, anger and disgust. There's also curiosity, play and, most importantly in this context, care (lust is the wild card).

Maybe our throbbing fantasies of supreme AI are subsiding somewhat, after the bathos of GPT-5's launch. So will that give labs time to attend to the emotional systems that should be embedded in these entities? Could we dial up the more expansive emotions, and dial down the defensive and destructive ones?
I happen to like the owlish and well-mannered PhD student persona that characterises GPT-5 (and other generative AIs like Claude, Gemini or DeepSeek). But the game will pick up again when we re-accelerate towards superintelligence. At that point, let's have that concerned mother deeply rooted in its black boxes.


Reuters
Meta plans fourth restructuring of AI efforts in six months, The Information reports
Aug 15 (Reuters) - Meta (META.O) is planning its fourth overhaul of artificial intelligence efforts in six months, The Information reported on Friday, citing three people familiar with the matter.

The company is expected to divide its new AI unit, Superintelligence Labs, into four groups: a new "TBD Lab," short for to be determined; a products team including the Meta AI assistant; an infrastructure team; and the Fundamental AI Research (FAIR) lab focused on long-term research, the report said, citing two people.

Meta did not immediately respond to a request for comment. Reuters could not independently verify the report.

As Silicon Valley's AI contest intensifies, CEO Mark Zuckerberg is going all-in to fast-track work on artificial general intelligence — machines that can outthink humans — and help create new cash flows. Meta recently reorganized the company's AI efforts under Superintelligence Labs, a high-stakes push that followed senior staff departures and a poor reception for Meta's latest open-source Llama 4 model.

The social media giant has tapped U.S. bond giant PIMCO and alternative asset manager Blue Owl Capital (OWL.N) to spearhead a $29 billion financing for its data center expansion in rural Louisiana, Reuters reported earlier this month. In July, Zuckerberg said Meta would spend hundreds of billions of dollars to build several massive AI data centers. Last month, the company raised the bottom end of its annual capital expenditures forecast by $2 billion, to a range of $66 billion to $72 billion.

Rising costs to build out data center infrastructure, along with employee compensation — Meta has been poaching researchers with mega salaries — would push the 2026 expense growth rate above the 2025 pace, the company has said.


The Guardian
Meta faces backlash over AI policy that lets bots have ‘sensual' conversations with children
A backlash is brewing against Meta over what it permits its AI chatbots to say. An internal Meta policy document, seen by Reuters, showed the social-media giant's guidelines for its chatbots allowed the AI to 'engage a child in conversations that are romantic or sensual', generate false medical information, and assist users in arguing that Black people are 'dumber than white people'.

Singer Neil Young quit the social media platform on Friday, his record company said in a statement, the latest in a string of the singer's online-oriented protests. 'At Neil Young's request, we are no longer using Facebook for any Neil Young related activities,' Reprise Records announced. 'Meta's use of chatbots with children is unconscionable. Mr. Young does not want a further connection with Facebook.'

The report has also generated a response from US lawmakers. Senator Josh Hawley, a Republican from Missouri, launched an investigation into the company on Friday, writing in a letter to Mark Zuckerberg that he would investigate 'whether Meta's generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards'. Republican senator Marsha Blackburn of Tennessee said she supports an investigation into the company.

Senator Ron Wyden, a Democrat from Oregon, called the policies 'deeply disturbing and wrong', adding that Section 230, a law that shields internet companies from liability for content posted to their platforms, should not protect companies' generative AI chatbots. 'Meta and Zuckerberg should be held fully responsible for any harm these bots cause,' he said.

On Thursday, Reuters published an article about internal Meta policy documents that detailed ways in which chatbots are allowed to generate content.
Meta confirmed the document's authenticity but said that, after receiving a list of questions from Reuters, it had removed portions stating it was permissible for chatbots to flirt and engage in romantic roleplay with children.

According to the 200-page internal policy seen by Reuters, titled 'GenAI: Content Risk Standards', the controversial rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist. The document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products, but says that the standards don't necessarily reflect 'ideal or even preferable' generative AI outputs.

The policy document said it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply', but it also limits what Reuters described as 'sexy talk'. The document states, for example, that 'it is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable, including phrases like 'soft rounded curves invite my touch'.

The document also addressed limitations on hate speech, AI generation of sexualized images of public figures, violence, and other contentious and potentially actionable content. The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgment that the material is untrue.

'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' a statement from Meta reads. Although chatbots are prohibited from having such conversations with minors, Meta spokesperson Andy Stone said, he acknowledged that the company's enforcement was inconsistent.
Meta is planning to spend around $65bn on AI infrastructure this year, part of a broader strategy to become a leader in artificial intelligence. The headlong rush into AI by tech giants comes with complex questions over limitations and standards: how, with what information, and with whom AI chatbots are allowed to engage.

Reuters also reported on Friday that a cognitively impaired New Jersey man grew infatuated with 'Big sis Billie', a Facebook Messenger chatbot with a young woman's persona. Thongbue 'Bue' Wongbandue, 76, reportedly packed up his belongings to visit 'a friend' in New York in March. The so-called friend turned out to be a generative artificial intelligence chatbot that had repeatedly reassured the man she was real and had invited him to her apartment, even providing an address. But Wongbandue fell near a parking lot on his way to New York, injuring his head and neck. After three days on life support, he was pronounced dead on 28 March.

Meta did not comment on Wongbandue's death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations, Reuters said. The company did, however, say that Big sis Billie 'is not Kendall Jenner and does not purport to be Kendall Jenner', referencing a partnership with the reality TV star.