Remy Cointreau Names Marilly as New CEO After Vallat Resignation

Bloomberg | 28-05-2025

Remy Cointreau SA appointed Franck Marilly to replace outgoing Chief Executive Officer Eric Vallat, after the latter said in April that he would step down. Marilly will start on June 25, the Paris-listed group said in a statement Wednesday.


Related Articles

Remy Cointreau Pulls Sales Targets Over Tariff Uncertainty

Bloomberg | 41 minutes ago

Remy Cointreau SA withdrew its long-term guidance, blaming uncertainties surrounding tariff policies with the US and China and a stunted recovery in the American market. The Remy Martin Cognac maker, which last month announced that Franck Marilly would take over as chief executive officer, scrapped its targets for the 2029-30 financial year. For the current year, it forecast organic sales growth returning to a mid-single-digit rate.

New Study Reveals True AI Capabilities And Job Replacement Risk

Forbes | an hour ago

The OECD has unveiled groundbreaking AI Capability Indicators that map artificial intelligence progress against human abilities across nine key domains, revealing where AI currently stands and what's coming next.

Imagine trying to navigate the digital transformation of your business using a compass that only points to "somewhere north." That's essentially what we've been doing with AI assessment until now. While tech companies have been throwing around impressive-sounding claims of superhuman performance in narrow tasks, business leaders and policymakers have been left squinting through the hype, trying to figure out what any of it actually means for the real world.

The OECD has just delivered something we've desperately needed: a proper GPS system for AI capabilities. Their new AI Capability Indicators represent the most comprehensive attempt yet to create a standardized framework for understanding what AI can actually do compared to human abilities. Think of it as moving from vague headlines about "AI breakthroughs" to having a detailed performance review that actually tells you something useful about real-world capabilities.

Unlike the typical parade of cherry-picked benchmarks that dominate tech headlines, the OECD's approach cuts through the marketing noise. They've developed nine distinct capability scales that map AI progress against fundamental human abilities: Language, Social Interaction, Problem Solving, Creativity, Metacognition and Critical Thinking, Knowledge and Memory, Vision, Manipulation, and Robotic Intelligence. Each scale runs from Level 1 (basic, solved problems) to Level 5 (full human equivalence), with clear descriptions of what AI systems can actually accomplish at each stage.

What makes this particularly helpful is how it sidesteps the technical jargon that usually makes AI assessment reports about as accessible as quantum physics textbooks. Instead of drowning in discussions of transformer architectures or neural network parameters, you get straightforward descriptions like whether an AI system can "adapt teaching methods to meet students' varying needs" or "handle objects of diverse shapes and materials in cluttered environments."

The methodology behind these indicators is equally impressive. Over 50 experts across computer science and psychology spent five years developing this framework, combining rigorous academic research with practical, real-world applications.

Here's where things get interesting, and perhaps a bit sobering for those caught up in the AGI hype cycle. The assessment reveals that current AI systems are clustered around Levels 2 and 3 across most capabilities. We're not at the finish line; we're not even close to it.

Large language models like ChatGPT score at Level 3 for language capabilities, meaning they can understand and generate semantic meaning with sophisticated knowledge, but they still struggle with analytical reasoning and have that persistent habit of confidently stating complete nonsense. It's like having a brilliant conversationalist who occasionally insists that gravity flows upward.

In social interaction, even the most advanced systems barely reach Level 2. They can combine simple movements to express emotions and learn from interactions, but they're essentially sophisticated actors with no real understanding of the social dynamics they're performing.

The vision capabilities tell an equally nuanced story. While AI can handle variations in lighting and target objects, performing multiple subtasks with known data variations (Level 3), it's still leagues away from the adaptable, learning-oriented visual intelligence that characterizes higher levels.

For business leaders, this framework offers something genuinely valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises their AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels their system actually achieves and in which specific domains. The gap analysis between current AI capabilities and the requirements of specific business tasks becomes clearer when standardized benchmarks are in place.

Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about how much human oversight will still be necessary. It's the difference between using AI as a sophisticated tool versus expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.

The framework also reveals opportunities that might not be immediately obvious. Areas where AI performs at Level 3 or higher represent genuine automation potential, while Level 2 capabilities suggest powerful augmentation opportunities. Smart businesses will use this intelligence to identify the low-hanging fruit while preparing for the longer-term implications of advancing capabilities.

Perhaps nowhere are the implications more immediate and profound than in education. The report's analysis of teaching capabilities reveals why educators are feeling simultaneously excited and terrified about AI's expanding role in classrooms. Many core teaching tasks require capabilities at Levels 4 and 5, particularly when it comes to adapting instruction to individual student needs or managing the complex social dynamics that make learning environments work.

This creates a fascinating paradox worthy of a philosophy textbook: AI might be able to deliver standardized instruction more efficiently than humans, but the most transformational aspects of teaching (the inspiration, emotional connection, and creative problem-solving that actually change lives) remain firmly in human territory.

The implications suggest we're heading toward a hybrid model that could fundamentally reshape education. AI handles routine instructional delivery, assessment, and administrative tasks, while humans focus on motivation, emotional support, creative problem-solving, and the kind of inspirational mentoring that transforms students into lifelong learners. This isn't displacement; it's specialization at a scale we've never seen before.

The OECD's systematic approach provides something invaluable for strategic planning: a clear picture of which breakthrough capabilities we should be monitoring. The jump from Level 3 to Level 4 across multiple domains would represent a genuine inflection point, particularly in areas like creative problem-solving and social intelligence.

What's especially revealing is how the framework illuminates the interconnectedness of different capabilities. True robotic intelligence, for instance, requires simultaneous advances across multiple domains. You can't have Level 5 robotic intelligence without corresponding progress in vision, manipulation, social interaction, and problem-solving. The framework also highlights capability areas where progress might stall or slow dramatically. Social interaction and creativity appear to have particularly steep curves between current performance and human-level capability.

What the OECD has created is essentially a report card system for the AI age. Instead of being swept along by breathless predictions about artificial general intelligence arriving next week, we now have a framework for systematically tracking progress and understanding real-world implications.

For businesses, this means more informed decisions about where to invest in AI capabilities and where to double down on human talent development. For policymakers, it provides a foundation for regulations and workforce planning grounded in evidence rather than science fiction. For educators, it offers a roadmap for preparing students for a world where human and artificial intelligence must work together effectively.

The OECD framework isn't predicting exactly when AI will achieve human-level performance across all domains; that's still anyone's guess. Instead, it provides a common language for discussing AI capabilities and a systematic way to track progress that everyone, from CEOs to school principals, can understand and use. In a field notorious for moving fast and breaking things, having a reliable measurement system might be just what is needed.
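To make the gap-analysis idea described above concrete, here is a minimal, hypothetical Python sketch of how an organization might record the nine capability scales and compare them against a task profile. The Level 3 scores for Language and Vision and the roughly Level 2 score for Social Interaction come from the article; every other score, the customer-service requirement profile, and all names in the snippet are illustrative assumptions, not figures published by the OECD.

```python
# Illustrative sketch only: the scale names come from the article; most
# level values and the task requirements below are assumed for the example.

CURRENT_AI_LEVELS = {
    "Language": 3,                              # per the article (e.g. ChatGPT)
    "Social Interaction": 2,                    # per the article ("barely Level 2")
    "Problem Solving": 2,                       # assumed
    "Creativity": 2,                            # assumed
    "Metacognition and Critical Thinking": 2,   # assumed
    "Knowledge and Memory": 3,                  # assumed
    "Vision": 3,                                # per the article
    "Manipulation": 2,                          # assumed
    "Robotic Intelligence": 2,                  # assumed
}

# Hypothetical requirement profile for a customer-service deployment.
CUSTOMER_SERVICE_REQUIREMENTS = {
    "Language": 3,
    "Social Interaction": 4,
    "Problem Solving": 3,
    "Knowledge and Memory": 3,
}

def gap_analysis(current, required):
    """Return the domains where current AI levels fall short of a task's needs."""
    return {
        domain: needed - current.get(domain, 1)
        for domain, needed in required.items()
        if current.get(domain, 1) < needed
    }

if __name__ == "__main__":
    for domain, shortfall in gap_analysis(
        CURRENT_AI_LEVELS, CUSTOMER_SERVICE_REQUIREMENTS
    ).items():
        print(f"{domain}: {shortfall} level(s) below the task requirement")
```

Run as written, this sketch would flag Social Interaction and Problem Solving as the shortfalls for the assumed customer-service profile, which mirrors the article's point that structured interactions are within reach while genuine social intelligence is not.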
