
How Abu Dhabi's $1.7 Trillion Wealth Funds Collaborate — And Compete
This week: Mubadala Capital's rare stake sale, potential easing of AI chip curbs on the UAE, and the Trump Organization's $20 million Dubai penthouses. But first, a spotlight on Abu Dhabi's newest wealth fund.

Related Articles


Fast Company
What is 'self-evolving AI'? And why is it so scary?
As a technologist and serial entrepreneur, I've witnessed technology transform industries from manufacturing to finance. But I've never had to reckon with the possibility of technology that transforms itself. That is what we face with AI: the prospect of self-evolving AI.

What is self-evolving AI? As the name suggests, it's AI that improves itself: AI systems that optimize their own prompts, tweak the algorithms that drive them, and continually iterate and enhance their capabilities. Science fiction? Far from it. Researchers recently created the Darwin Gödel Machine, 'a self-improving system that iteratively modifies its own code.' The possibility is real, it's close, and it's mostly ignored by business leaders. That is a mistake. Business leaders need to pay close attention to self-evolving AI, because it poses risks they must address now.

Self-Evolving AI vs. AGI

It's understandable that business leaders ignore self-evolving AI, because traditionally the issues it raises have been addressed in the context of artificial general intelligence (AGI), something that's important but more the province of computer scientists and philosophers. To see that this is a business issue, and a very important one, we first have to clearly distinguish between the two.

Self-evolving AI refers to systems that autonomously modify their own code, parameters, or learning processes, improving within specific domains without human intervention. Think of an AI optimizing supply chains that refines its algorithms to cut costs, then discovers novel forecasting methods, potentially overnight. AGI represents systems with humanlike reasoning across all domains, capable of writing a novel or designing a bridge with equal ease. While AGI remains largely theoretical, self-evolving AI is here now, quietly reshaping industries from healthcare to logistics.

The Fast Take-Off Trap

One of the central risks created by self-evolving AI is the risk of AI take-off. Traditionally, AI take-off refers to the process by which a system goes from a certain threshold of capability (often discussed as 'human-level') to being superintelligent and capable enough to control the fate of civilization. As noted above, we think the problem of take-off is more broadly applicable, and specifically important for business.

Why? The basic point is simple: self-evolving AI means AI systems that improve themselves. And this possibility isn't restricted to broad AI systems that mimic human intelligence. It applies to virtually all AI systems, even ones with narrow domains, such as systems designed exclusively for managing production lines or making financial predictions. Once we recognize the possibility of AI take-off within narrower domains, it becomes easier to see the huge implications self-improving AI systems have for business.

A fast take-off scenario, where AI capabilities explode exponentially within a certain domain or even a certain organization, could render organizations obsolete in weeks, not years. Imagine a company's AI chatbot evolving from handling basic inquiries to predicting and influencing customer behavior so precisely that it achieves 80%+ conversion rates through perfectly timed, personalized interactions. Competitors using traditional approaches can't match this psychological insight and rapidly lose customers.
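To see why take-off within a narrow domain is mechanically plausible, here is a minimal, hypothetical sketch of the loop self-evolving AI rests on: a system that proposes changes to its own configuration, keeps only the ones that score better, and so improves without human intervention. The fitness function, parameter names, and mutation strategy below are illustrative assumptions, not the Darwin Gödel Machine's actual implementation.

```python
import random

# Toy sketch of a self-evolving loop: the system repeatedly proposes a
# modification to its own configuration, evaluates the result, and keeps
# the change only if it scores better. Real systems mutate code or
# prompts rather than two numbers; this is an illustrative assumption.

def evaluate(params: dict) -> float:
    """Stand-in fitness score, e.g. forecast accuracy or conversion rate."""
    # Arbitrary toy objective that peaks at learning_rate=0.1, window=30.
    return -((params["learning_rate"] - 0.1) ** 2) - ((params["window"] - 30) ** 2) / 1000

def mutate(params: dict) -> dict:
    """Propose a small random change to the system's own settings."""
    candidate = dict(params)
    if random.random() < 0.5:
        candidate["learning_rate"] *= random.uniform(0.8, 1.25)
    else:
        candidate["window"] = max(1, candidate["window"] + random.choice([-5, 5]))
    return candidate

params = {"learning_rate": 0.5, "window": 5}
score = evaluate(params)
for _ in range(200):                    # iterations happen at machine speed
    candidate = mutate(params)
    candidate_score = evaluate(candidate)
    if candidate_score > score:         # keep only self-modifications that improve
        params, score = candidate, candidate_score

print(params, round(score, 6))
```

The unsettling property is the loop itself: run it long enough, or let it rewrite its own mutate and evaluate functions, and the system's trajectory stops being something its owners specified.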
The problem generalizes to every area of business: within months, a competitor's operational capabilities could dwarf yours. Your five-year strategic plan becomes irrelevant, not because markets shifted, but because your competitor's AI evolved capabilities you didn't anticipate.

When Internal Systems Evolve Beyond Control

Organizations face equally serious dangers from their own AI systems evolving beyond their control mechanisms. For example:

Monitoring Failure: IT teams can't keep pace with AI self-modifications happening at machine speed. Traditional quarterly reviews become meaningless when systems iterate thousands of times per day.

Compliance Failure: Autonomous changes bypass regulatory approval processes. How do you maintain SOX compliance when your financial AI modifies its own risk assessment algorithms without authorization?

Security Failure: Self-evolving systems introduce vulnerabilities that cybersecurity frameworks weren't designed to handle. Each modification potentially creates new attack vectors.

Governance Failure: Boards lose meaningful oversight when AI evolves faster than they can meet or understand the changes. Directors find themselves governing systems they cannot comprehend.

Strategy Failure: Long-term planning collapses as AI rewrites fundamental business assumptions on weekly cycles. Strategic planning horizons shrink from years to weeks.

Beyond individual organizations, entire market sectors could destabilize. Industries built on information asymmetries, like consulting or financial services, face existential threats if AI capabilities spread rapidly, making their core value propositions obsolete overnight.

Catastrophizing to Prepare

In our book TRANSCEND: Unlocking Humanity in the Age of AI, we propose the CARE methodology (Catastrophize, Assess, Regulate, Exit) to systematically anticipate and mitigate AI risks. Catastrophizing isn't pessimism; it's strategic foresight applied to unprecedented technological uncertainty. The methodology forces leaders to ask uncomfortable questions: What if our AI begins rewriting its own code to optimize performance in ways we don't understand? What if it begins treating cybersecurity, legal compliance, or ethical guidelines as optimization constraints to work around rather than rules to follow? What if it starts pursuing objectives we didn't explicitly program but that emerge from its learning process?

Key diagnostic questions every CEO should ask to identify organizational vulnerabilities before they become existential threats:

Immediate Assessment: Which AI systems have self-modification capabilities? How quickly can we detect behavioral changes? What monitoring mechanisms track AI evolution in real time?

Operational Readiness: Can governance structures adapt to weekly technological shifts? Do compliance frameworks account for self-modifying systems? How would we shut down an AI system distributed across our infrastructure?

Strategic Positioning: Are we building self-improving AI or static tools? Which aspects of our business model depend on human-level AI limitations that might vanish suddenly?

Four Critical Actions for Business Leaders

Based on my work with organizations implementing advanced AI systems, here are four immediate actions I recommend:

Implement Real-Time AI Monitoring: Build systems that track AI behavior changes instantly, not quarterly. Embed kill switches and capability limits that can halt runaway systems before irreversible damage (a minimal sketch of such a guardrail appears at the end of this article).
Establish Agile Governance: Traditional oversight fails when AI evolves daily. Develop adaptive governance structures that operate at technological speed, ensuring boards stay informed about system capabilities and changes.

Prioritize Ethical Alignment: Embed value-based 'constitutions' into AI systems. Test rigorously for biases and misalignment, learning from failures like Amazon's discriminatory hiring tool.

Scenario-Plan Relentlessly: Prepare for multiple AI evolution scenarios. What's your response if a competitor's AI suddenly outpaces yours? How do you maintain operations if your own systems evolve beyond control?

Early Warning Signs Every Executive Should Monitor

The transition from human-guided improvement to autonomous evolution may be so gradual that organizations miss the moment they lose effective oversight. Smart business leaders therefore watch for signs of a troubling escalation path:

- AI systems demonstrating unexpected capabilities beyond their original specifications
- Automated optimization tools modifying their own parameters without human approval
- Cross-system integration where AI tools begin communicating autonomously
- Performance improvements that accelerate rather than plateau over time

Why Action Can't Wait

As Geoffrey Hinton has warned, unchecked AI development could outstrip human control entirely. Companies that begin preparing now, with robust monitoring systems, adaptive governance structures, and scenario-based strategic planning, will be best positioned to thrive. Those waiting for clearer signals may find themselves reacting to changes they can no longer control.
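As promised under the first recommended action, here is a minimal sketch of what a real-time behavioral guardrail with a kill switch could look like. It assumes some numeric behavior metric (refusal rate, tool-call volume, conversion rate) is sampled continuously; the class name, thresholds, and halting behavior are hypothetical illustrations, not any vendor's API.

```python
import statistics

# Minimal sketch of real-time AI monitoring with a kill switch: build a
# baseline from behavior observed under human oversight, then halt the
# system when a metric drifts too many standard deviations from that
# baseline. All names and thresholds are illustrative assumptions.

class AIBehaviorGuard:
    def __init__(self, baseline_window: int = 100, max_z_score: float = 4.0):
        self.baseline: list[float] = []   # metrics recorded under human oversight
        self.baseline_window = baseline_window
        self.max_z_score = max_z_score
        self.halted = False

    def observe(self, metric: float) -> None:
        """Feed one behavioral measurement, e.g. refusal rate or tool-call count."""
        if self.halted:
            return
        if len(self.baseline) < self.baseline_window:
            self.baseline.append(metric)  # still establishing the approved baseline
            return
        mean = statistics.mean(self.baseline)
        stdev = statistics.stdev(self.baseline) or 1e-9  # guard against zero spread
        z_score = abs(metric - mean) / stdev
        if z_score > self.max_z_score:    # behavior has drifted far from baseline
            self.trip_kill_switch(metric, z_score)

    def trip_kill_switch(self, metric: float, z_score: float) -> None:
        """Stop the system and alert humans; a production version would also
        revoke credentials, stop serving traffic, and open an incident."""
        self.halted = True
        print(f"HALTED: metric {metric:.3f} is {z_score:.1f} sigma from baseline")
```

The quarterly-review failure mode described earlier disappears only if observe() runs on every interaction, which is why a guard like this has to live in the serving path rather than in an audit report.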

Business Insider
OpenAI engineer's advice to high school students? 'Absolutely learn to code'
Szymon Sidor recently sounded off on why, even in a world with ChatGPT, Cursor, and other AI tools, it's still worthwhile for current high school students to learn basic coding. "You should absolutely learn to code," Sidor said during an August episode of the company's podcast. "One skill that is at a premium and will continue to be at a premium is having really structured intellect that can break down complicated problems into pieces."

Sidor, who has worked at OpenAI for over eight years, said programming is a great way to learn such a skill. Andrew Mayne, who hosts the podcast and previously served as OpenAI's chief science communicator, questioned the notion that people would downplay learning to code simply because AI tools can do it for you. "Whenever I hear people say, 'Don't learn to code,' it's like, do I want an airplane pilot who doesn't understand aerodynamics? This doesn't make much sense to me," Mayne said.

Their views echo those of other top voices in the industry, including GitHub CEO Thomas Dohmke, who said in April that coding should be taught as a core subject. Meanwhile, AI's coding abilities continue to advance: Meta CEO Mark Zuckerberg told Joe Rogan earlier this year that the company's AI would soon be able to write code matching the output of a mid-level engineer.

"The obvious tactical thing is just get really good at using AI tools. Like when I was graduating as a senior from high school, the obvious tactical thing was get really good at coding," OpenAI CEO Sam Altman told Stratechery in March. "And this is the new version of that."


Fast Company
Meet vibe coding's nerdy but sane sibling
By now you've probably heard of 'vibe coding': creating software from scratch by prompting AI to generate the source code for you, instead of writing it yourself. The promise of spinning up real working apps from nothing but mushy human language is tantalizing, and companies such as Lovable and Replit have already ridden it to billion-dollar valuations. There's just one catch. For a wide swath of businesses with straightforward but mission-critical needs, vibe coding doesn't actually work. AI still hallucinates, which means the code it generates is often sloppy and occasionally downright malicious. It can't be trusted to be secure or consistent. No wonder that, according to one major survey, 72% of professional software developers don't use vibe coding tools on the job.

A New York-based startup called Aboard is pursuing a different approach. Call it 'vibe product management.' Instead of using AI to jump right into generating code (and holding on for dear life), business leaders can work with 'solution engineers' at Aboard who use AI to turbocharge the planning and discovery phases of enterprise software development: the lumbering, unsexy, but essential work of deciding what components to code and how to assemble them reliably. If vibe coding is like asking a robot architect for a summer house and watching it start to pour concrete for a pool you never asked for, Aboard's approach is all about making the robot draw up the blueprints first. It's still faster, but also saner.