
H&M Group's Linda Leopold steps down as head of AI strategy after seven years
H&M Group, the Swedish fashion giant known for its global retail footprint and tech-forward initiatives, has announced the departure of Linda Leopold, who served as Head of AI Strategy. After seven years in strategic leadership roles, Leopold is stepping down to focus on consulting, writing, and speaking engagements centered on artificial intelligence and its ethical development across industries.
Leopold joined H&M Group in 2018 and held several key roles within the company's growing AI division. As Head of AI Policy, she played a critical role in launching and expanding the brand's Responsible AI program. Under her guidance, H&M Group established frameworks for digital ethics and adopted strategic approaches to implementing generative AI technologies.
'These years were extraordinary—not only because I had the opportunity to help shape H&M's AI direction, but also because I witnessed AI evolve at lightning speed,' Leopold wrote on LinkedIn. 'I'm particularly proud of building the Responsible AI program from the ground up and contributing to the global conversation on ethical AI.'
Her leadership earned international recognition. In 2022, Forbes named her one of the world's nine most influential women in AI. Before her time at H&M Group, Leopold worked as an innovation strategist bridging fashion and technology and also served as editor-in-chief of the Scandinavian fashion and culture magazine Bon.
'Now it's time for the next chapter,' she added. 'With AI at such a pivotal point, I want to help guide its development across different industries and organizations.'
Leopold's exit comes as H&M Group continues its push into digital innovation. Earlier this month, the brand launched a new denim capsule collection powered by digital twin technology—part of a larger strategy to integrate generative AI into storytelling and customer engagement. According to Chief Creative Officer Jörgen Andersson, the goal is to create emotional connections with consumers without diluting brand identity. The first drop debuted on July 2 via H&M's global online store, with more launches planned this fall.
While investing in new technologies, H&M Group also faces mounting economic pressures. The company reported a 5% year-over-year decline in net sales for the second quarter, falling to SEK 56.7 billion. However, operating profit rose slightly to SEK 5.9 billion—beating analyst forecasts. The group also improved inventory management, though deeper price cuts are expected in the third quarter as customers become more cautious with spending.
'We're seeing greater price sensitivity among customers due to ongoing uncertainty,' Group CEO Daniel Erver said during the latest earnings call.

Related Articles


France 24 · 6 hours ago
China urges global consensus on balancing AI development, security
Chinese Premier Li Qiang's remarks came just days after US President Donald Trump unveiled an aggressive low-regulation strategy aimed at cementing US dominance in the fast-moving field, promising to "remove red tape and onerous regulation" that could hinder private sector AI development.
Opening the World AI Conference (WAIC) in Shanghai on Saturday, Li emphasised the need for governance and open-source development, announcing the establishment of a Chinese-led body for international AI cooperation. "The risks and challenges brought by artificial intelligence have drawn widespread attention... How to find a balance between development and security urgently requires further consensus from the entire society," the premier said.
Li said China would "actively promote" the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones. "If we engage in technological monopolies, controls and blockage, artificial intelligence will become the preserve of a few countries and a few enterprises," he said. "Only by adhering to openness, sharing and fairness in access to intelligence can more countries and groups benefit from (AI)."
The premier highlighted "insufficient supply of computing power and chips" as a bottleneck. Washington has expanded its efforts in recent years to curb exports of state-of-the-art chips to China, concerned that these can be used to advance Beijing's military systems and erode US tech dominance.
For its part, China has made AI a pillar of its plans for technological self-reliance, with the government pledging a raft of measures to boost the sector. In January, Chinese startup DeepSeek unveiled an AI model that performed as well as top US systems despite using less powerful chips.
'Pet tiger cub'
At a time when AI is being integrated across virtually all industries, its uses have raised major ethical questions, from the spread of misinformation, to its impact on employment, to the potential loss of technological control.
In a speech at WAIC on Saturday, Nobel Prize-winning physicist Geoffrey Hinton compared the situation to keeping "a very cute tiger cub as a pet". "To survive", he said, you need to ensure you can train it not to kill you when it grows up.
In a video message played at the WAIC opening ceremony, UN Secretary-General Antonio Guterres said AI governance would be "a defining test of international cooperation". The ceremony also saw the French president's AI envoy, Anne Bouverot, underscore an "urgent need" for global action.
At an AI summit in Paris in February, 58 countries including China, France and India -- as well as the European Union and African Union Commission -- called for enhanced coordination on AI governance. But the United States warned against "excessive regulation" and, alongside the United Kingdom, refused to sign the summit's appeal for an "open", "inclusive" and "ethical" AI.


Euronews · 2 days ago
This cannibal robot can grow and heal by eating other robots
This robot is not the first transformer mechanism revealed to the public, but the way it transforms is certainly novel – it grows and heals by consuming other robots.
Researchers from Columbia University in the United States have developed a robot, called the Truss Link, that can detect and merge with pieces of robots nearby to fill in missing parts. "True autonomy means robots must not only think for themselves but also physically sustain themselves," Philippe Martin Wyder, lead author and researcher at Columbia Engineering and the University of Washington, wrote in a statement.
Made with magnetic sticks, the Truss Link can expand or transform from a flat shape into a 3D structure to adapt to its environment. It can also add new parts from other robots, or discard old parts that no longer function, to improve its performance. In a video posted by the team, the robot merges with a nearby piece and uses it as a walking stick to increase its speed by more than 50 per cent.
'Gives legs to AI'
The researchers named the process by which the robot assembles itself from bits of other robots 'robot metabolism', likening it to the way a natural biological organism absorbs and integrates resources.
Robots like the Truss Link can 'provide a digital interface to the physical world, and give legs to AI,' according to a video produced by Columbia Engineering School. Integrated with AI, they possess great potential, experts believe. "Robot metabolism provides a digital interface to the physical world and allows AI to not only advance cognitively, but physically – creating an entirely new dimension of autonomy," said Wyder.
The Truss Link could, in future, be used to help develop groundbreaking technologies spanning marine research, rescue services, and extraterrestrial life. "Ultimately, it opens up the potential for a world where AI can build physical structures or robots just as it, today, writes or rearranges the words in your email," Wyder said.
Programming robots has long been a challenge for engineers; however, artificial intelligence is advancing developments in robotics. "We now have the technology [AI] to make robots really programmable in a general-purpose way and make it so that normal people can programme them, not just specific robot programming engineers," Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia, told Euronews Next in May.


Euronews · 2 days ago
No woke AI: What to know about Trump's AI plan for global dominance
US President Donald Trump has said he will keep "woke AI" models out of the US government, turn the country into an 'AI export powerhouse', and weaken environmental regulation on the technology. The announcements came as he signed three artificial intelligence-focused executive orders on Wednesday, which form part of the country's so-called AI action plan. Here is what he announced and what it means.
1. No woke AI
One order, called 'Preventing Woke AI in the Federal Government', bans "woke AI" models and AI that isn't 'ideologically neutral' from government contracts. It also says diversity, equity, and inclusion (DEI) is a 'pervasive and destructive' ideology that can 'distort the quality and accuracy of the output'. It refers to information about race, sex, transgenderism, unconscious bias, intersectionality, and systemic racism.
The order aims to protect free speech and "American values", but by removing information on topics such as DEI, climate change, and misinformation, it could wind up doing the opposite, as achieving objectivity is difficult in AI.
David Sacks, a former PayPal executive and now Trump's top AI adviser, has been criticising 'woke AI' for more than a year, fuelled by Google's February 2024 rollout of an AI image generator. When asked to show an American Founding Father, it created pictures of Black, Asian, and Native American men. Google quickly fixed its tool, but the 'Black George Washington' moment remained a parable for the problem of AI's perceived political bias, taken up by X owner Elon Musk, venture capitalist Marc Andreessen, US Vice President JD Vance, and Republican lawmakers.
2. Global dominance, cutting regulations
The plan prioritises AI innovation and adoption, urging the removal of any barriers that could slow down adoption across industries and government. The nation's policy, Trump said, will be to do 'whatever it takes to lead the world in artificial intelligence'. Yet it also seeks to guide the industry's growth to address a longtime rallying point for the tech industry's loudest Trump backers: countering the liberal bias they see in AI chatbots such as OpenAI's ChatGPT or Google's Gemini.
3. Streamlining AI data centre permits and less environmental regulation
Chief among the plan's goals is to speed up permitting and loosen environmental regulation to accelerate construction of new data centres and factories. It condemns 'radical climate dogma' and recommends lifting environmental restrictions, including clean air and water laws. Trump has previously paired AI's need for huge amounts of electricity with his own push to tap into US energy sources, including gas, coal, and nuclear. 'We will be adding at least as much electric capacity as China,' Trump said at the Wednesday event. 'Every company will be given the right to build their own power plant'.
Many tech giants are already well on their way toward building new data centres in the US and around the world. OpenAI announced this week that it has switched on the first phase of a massive data centre complex in Abilene, Texas, part of an Oracle-backed project known as Stargate that Trump promoted earlier this year. Amazon, Microsoft, Meta, and xAI also have major projects underway.
The tech industry has pushed for easier permitting rules to get its computing facilities connected to power, but the AI building boom has also contributed to spiking demand for fossil fuel production, which contributes to global warming. United Nations Secretary-General Antonio Guterres on Tuesday called on the world's major tech firms to power data centres completely with renewables by 2030.
The plan also includes a strategy to disincentivise states from aggressively regulating AI technology, calling on federal agencies not to provide funding to states with burdensome regulations. 'We need one common sense federal standard that supersedes all states, supersedes everybody,' Trump said, 'so you don't end up in litigation with 43 states at one time'.
Call for a People's AI Action Plan
There are sharp debates on how to regulate AI, even among the influential venture capitalists who have been debating it on their favourite medium: the podcast. While some Trump backers, particularly Andreessen, have advocated an 'accelerationist' approach that aims to speed up AI advancement with minimal regulation, Sacks has described himself as taking a middle road of techno-realism. 'Technology is going to happen. Trying to stop it is like ordering the tides to stop. If we don't do it, somebody else will,' Sacks said on the 'All-In' podcast.
On Tuesday, more than 100 groups, including labour unions, parent groups, environmental justice organisations, and privacy advocates, signed a resolution opposing Trump's embrace of industry-driven AI policy and calling for a 'People's AI Action Plan' that would 'deliver first and foremost for the American people'.
Anthony Aguirre, executive director of the non-profit Future of Life Institute, told Euronews Next that Trump's plan acknowledges the "critical risks presented by increasingly powerful AI systems", citing bioweapons, cyberattacks, and the unpredictability of AI. But in a statement, he said the White House should go further to protect citizens and workers. "By continuing to rely on voluntary safety commitments from frontier AI corporations, it leaves the United States at risk of serious accidents, massive job losses, extreme concentrations of power, and the loss of human control," Aguirre said. "We know from experience that Big Tech promises alone are simply not enough".