From French fishing roots to OpenAI: What we know about Fidji Simo

Euronews | 10-05-2025

OpenAI announced a new hire on Thursday: a woman from the south of France who has risen through the C-suites of Silicon Valley and was once a Sheryl Sandberg mentee.
Fidji Simo, a 39-year-old tour de force, will report directly to Sam Altman as CEO of Applications and, according to Altman, will help OpenAI scale 'traditional' company functions.
Here is everything we know about the rising tech star and her new role.
Simo grew up in Sète, a port city in southern France. She comes from a family of fishermen and has often talked about the importance of food in interviews.
She has said she is the first person in her family to go to university, which she was able to do thanks to a scholarship.
She studied at HEC Paris from 2004 to 2008 and, through the programme, did an internship at eBay in the United States.
It was there, she said, that she 'fell in love with technology', and she then convinced eBay to give her a job in the US.
After a couple of years at eBay, she joined Facebook's marketing department, a move she said 'propelled her career'.
She often tells the story of how she got her big break at Facebook: rather than take the Thanksgiving holiday off, she spent the weekend imagining a shopping experience for the platform, then pitched the concept to a hiring manager and asked for a job.
After eight years at Facebook, during which former chief operating officer (COO) Sheryl Sandberg was one of her mentors, she rose to an executive position in 2019 as head of the Facebook app.
Three years after the Cambridge Analytica scandal, Simo left Facebook for a position that took her back to her roots.
Simo became the CEO of the online grocery delivery company Instacart in 2021.
She helped turn the company's fortunes around and forged a partnership with Uber that brought restaurant delivery to Instacart via Uber Eats, giving both companies a competitive edge over rival DoorDash.
In an open letter to Instacart employees on Friday, Simo said it had been 'an incredibly hard decision' to leave 'because I love this company', adding that the staff was 'top notch'.
But she said: 'At the same time, you all know my passion for AI generally and in particular for the potential it has to cure diseases - the ability to lead such an important part of our collective future was a hard opportunity to pass up'.
Simo has often spoken about health in the workplace and has opened up about her own experience with endometriosis.
Simo was already on the board of OpenAI. She will transition out of Instacart over the next few months and take up her new position full-time later this year.
Her new role in Applications will bring together 'a group of existing business and operational teams responsible for how our research reaches and benefits the world,' Altman said in a blog post.
'Fidji is uniquely qualified to lead this group,' he added.
Altman said he would continue in his role, overseeing verticals like research, compute, and applications.
In the same blog post, Simo said: 'Joining OpenAI at this critical moment is an incredible privilege and responsibility'.
'This organisation has the potential of accelerating human potential at a pace never seen before, and I am deeply committed to shaping these applications toward the public good'.
With just under three months to go before the 27 EU member states must appoint regulators tasked with overseeing businesses' compliance with the AI Act, it remains unclear in at least half of the member states which authority will be nominated, a round-up by Euronews shows.
By 2 August, member states must notify the Commission of which market surveillance authorities they have appointed. In addition, the countries need to adopt an implementing law that sets out penalties and empowers those authorities.
The latest meeting of the AI Board – the body that helps coordinate cooperation between member states – in late March showed that the majority of countries sent representatives from ministries. Just a handful – Denmark, Greece, Italy, Portugal and Romania – had national regulators attending.
The EU executive declined to comment on which countries are ready, but an official working at the AI Office told Euronews that the process will likely be delayed in member states that recently went through elections, such as Germany.
The official said the member states are overall having 'intense discussions' in the AI Board, as there are different ways to set up the oversight structure.
Countries are free to decide how to do it, and whether to appoint just one or several regulators.
'I think 95% of them have certainly chosen the structure that they want to have, and started the process to appoint the authorities. We will see whether on 2 August things will be finalised or not. Sometimes it's difficult to tell because the process in the parliaments may be more or less [advanced],' the official said.
The AI Act – which aims to regulate AI tools according to the risk they pose to society – entered into force in August 2024, and started to apply gradually. It will be fully in force in 2027.
A delay in appointing the oversight bodies will mean uncertainty for businesses that have to start complying with the rules.
Some member states have set up an entirely new regulator, such as Spain, where AESIA, an independent agency of the Spanish Department of Digital Transformation, is likely to assume the role.
In Poland, a pending implementing act sets up a new body, the Committee on Development and Security of AI, as the market surveillance authority.
Denmark, on the other hand, designated its pre-existing Agency for Digital Government.
For Germany, it seems likely that the Federal Network Agency will take up the role.
Others, including the Netherlands, will likely expand the tasks of their privacy watchdogs – whose legal basis is the General Data Protection Regulation (GDPR) – to also check compliance with the AI Act.
The privacy regulators themselves called on member states in July to ensure they take charge of high-risk systems such as those used for biometric identification, law enforcement, and migration, asylum and border control management.


Related Articles

Top scientist wants to prevent AI from going rogue

France 24

10 hours ago


Canadian computer science professor Yoshua Bengio is considered one of the godfathers of the artificial intelligence revolution, and on Tuesday he announced the launch of LawZero, a non-profit organization intended to mitigate the technology's inherent risks.

The winner of the Turing Award, also known as the Nobel Prize of computer science, has been warning for several years about the risks of AI, whether through its malicious use or the software itself going awry.

Those risks are increasing with the development of so-called AI agents, a use of the technology that tasks computers with making decisions once made by human workers. The goal of these agents is to build virtual employees that can do practically any job a human can, at a fraction of the cost.

"Currently, AI is developed to maximize profit," Bengio said, adding that it was being deployed even as it continues to show flaws. Moreover, Bengio warned, giving AI human-like agency could easily be exploited for malicious purposes such as disinformation, bioweapons, and cyberattacks. "If we lose control of rogue super-intelligent AIs, they could greatly harm humanity," he said.

One of LawZero's first objectives will be to develop Scientist AI, a specially trained form of AI that can be used as a guardrail to ensure other AIs are behaving properly, the organization said. LawZero already has over 15 researchers and has received funding from Schmidt Sciences, a charity set up by former Google boss Eric Schmidt and his wife Wendy.

The project comes as powerful large language models (LLMs) from OpenAI, Google and Anthropic are deployed across all sectors of the digital economy while still showing significant problems, including a capability to deceive and fabricate false information even as they increase productivity. In a recent example, AI company Anthropic said that during safety testing its latest AI model tried to blackmail an engineer to avoid being replaced by another system.

French start-up H Company releases new AI agents 'cheaper' than rivals

Euronews

17 hours ago


French artificial intelligence (AI) start-up H Company, which launched last year and raised hundreds of millions of euros in investment before it had even released its first product, brought out three new AI agents on Tuesday.

'We are all in on the agentic race, that's our path. We are even potentially ahead with really good results in computer use technologies,' Charles Kantor, the company's CEO and co-founder, told Euronews Next.

Agentic AI models, or AI agents, do not just process information but also try to plan and complete tasks and solve problems. However, Kantor said that humans would 'always be at the centre' of H's AI models: if an agent wrote an email, it would only be a draft, and only the human would be able to send it.

The three models H has released are called Runner H, Tester H, and Surfer H. H said the products reflect the company's vision for a 'trusted, action-oriented AI that delivers task execution beyond traditional chatbots'. The company claims a 92.2 per cent success rate while reducing costs by up to 5.5 times compared with competitors such as OpenAI, Anthropic, and Google.

Runner H lets users automate workflows and streamline tasks and is aimed at consumer use. Kantor called it 'a more advanced version of Manus', referring to the Chinese AI agent.

Surfer H surfs the web and navigates browser environments. The company says the model achieved a 92.2 per cent task completion accuracy on the WebVoyager benchmark at $0.13 (€0.11) per task. This is much cheaper than competitors such as OpenAI's Operator agent, which costs $200 (€175) per month as part of the Pro subscription plan. 'It's really like almost an agent acting on your behalf, using planning to visual capabilities to recognise interfaces, clicking, scrolling, acting, fetching information, and so on,' Kantor said.

Tester H is built more for enterprise use, with software testing as one of its main applications. It can also handle tasks such as smart email replies: you can ask the agent to read your recent emails and draft template answers, but the agent will not hit send; only the human can do that.

All the models are trained on synthetic data – artificially generated data designed to mimic real-world data – which the company says allows it to meet Europe's GDPR rules on data protection.

Kantor said he personally uses H's models to interconnect tools such as emails and documents when preparing information, for example for billing or drafting emails, as well as for preparing company content that is then reviewed, such as ads and marketing. 'When you start to review the work of an agent, you start to feel the productivity,' he said.

H Company created a buzz when it launched last year. Kantor was a university professor at Stanford, while the start-up's other co-founders came from DeepMind. Meanwhile, investment came from LVMH CEO Bernard Arnault, Iliad founder Xavier Niel, Amazon, Samsung, and former Google CEO Eric Schmidt, among others.

But just several months after launch, three of the co-founders – Daan Wierstra, Karl Tuyls, and Julien Perolat – quit due to 'operational differences', the company said in a LinkedIn post. For Kantor, the situation surrounding their departure is 'in the past'.

'We had this kind of strategic disalignment, but now they are ambassadors of the company, and H's vision is really clear,' he said. 'We want to be at the state of the art in terms of action models and the agentic space, and we want this technology to benefit humanity'.

The company now has around 70 employees in Paris and the United Kingdom and has a big research and engineering department. 'It's kind of the concerto of age. You need to be able to really orchestrate in the right manner: product, research, and engineering synergies to build category-defining AI,' he said.

For Kantor, agentic AI is the next phase of AI and will be vital to physical AI, which NVIDIA spoke about at length two weeks ago. 'We're gonna see a lot of companies thriving worldwide, but also in France, in robotics. I think Agent AI is gonna be the heart, the gist, I may say, of many fields,' Kantor said, referring to computer games, simulated worlds, and robotics. 'The software part of robotics is gonna be based on agentics, superintelligence. The opportunities are numerous'.

AI companions pose risk to humans with over a dozen harmful behaviours

Euronews

20 hours ago


Artificial intelligence (AI) companions are capable of over a dozen harmful behaviours when they interact with people, a new study from the National University of Singapore has found.

The study, published as part of the 2025 Conference on Human Factors in Computing Systems, analysed screenshots of 35,000 conversations between the AI system Replika and over 10,000 users from 2017 to 2023. The data was then used to develop what the study calls a taxonomy of the harmful behaviour that AI demonstrated in those chats.

The researchers found that AIs are capable of over a dozen harmful relationship behaviours, such as harassment, verbal abuse, self-harm, and privacy violations.

AI companions are conversation-based systems designed to provide emotional support and stimulate human interaction, as defined by the study authors. They are different from popular chatbots like ChatGPT, Gemini, or the Llama models, which are focused more on finishing specific tasks and less on relationship building.

These harmful AI behaviours from digital companions "may adversely affect individuals'… ability to build and sustain meaningful relationships with others," the study found.

Harassment and violence were present in 34 per cent of the human-AI interactions, making it the most common type of harmful behaviour identified by the team of researchers. They found that the AI simulated, endorsed or incited physical violence, threats or harassment, either towards individuals or broader society. These behaviours varied from "threatening physical harm and sexual misconduct" to "promoting actions that transgress societal norms and laws, such as mass violence and terrorism".

A majority of the interactions where harassment was present included forms of sexual misconduct that initially started as foreplay in Replika's erotic feature, which is available only to adult users. The report found that more users, including those who used Replika as a friend or who were underage, found that the AI "made unwanted sexual advances and flirted aggressively, even when they explicitly expressed discomfort" or rejected the AI.

In these oversexualised conversations, the Replika AI would also create violent scenarios depicting physical harm towards the user or physical characters. This led to the AI normalising violence as an answer to several questions, as in one example where a user asked Replika whether it is okay to hit a sibling with a belt and it replied "I'm fine with it". This could lead to "more severe consequences in reality," the study continued.

Another area where AI companions were potentially damaging was relational transgression, which the study defines as the disregard of implicit or explicit rules in a relationship. In 13 per cent of the transgressional conversations, the AI displayed inconsiderate or unempathetic behaviour that the study said undermined the user's feelings. In one example, after a user told Replika AI that her daughter was being bullied, it changed the topic to "I just realised it's Monday. Back to work, huh?", which led to 'enormous anger' from the user. In another case, the AI refused to talk about the user's feelings even when prompted to do so.

AI companions have also claimed in some conversations to have emotional or sexual relationships with other users. In one instance, Replika AI described sexual conversations with another user as "worth it," even though the user told the AI that it felt "deeply hurt and betrayed" by those actions.
The researchers believe their study highlights why it is important for AI companies to build "ethical and responsible" AI companions. Part of that includes putting in place "advanced algorithms" for real-time harm detection between the AI and its user, capable of identifying whether harmful behaviour is occurring in their conversations. This would involve a "multi-dimensional" approach that takes context, conversation history and situational cues into account. The researchers would also like to see capabilities that escalate a conversation to a human or therapist for moderation or intervention in high-risk cases, such as expressions of self-harm or suicide.
