Read my lips: AI-dubbed films are debuting in cinemas
Fans of foreign-language cinema can be split into two distinct camps: subtitle-lovers and those who swear by the dubbed version.
Dubbing critics have long grumbled about the pitfalls of mismatched audio and awkward lip-syncing, but new technology is quietly changing the face (and mouths) of international cinema.
Last week, the Swedish sci-fi film Watch The Skies opened in US theatres, marketed as the world's first full-length theatrical feature to use AI for immersive dubbing – a process that makes the characters look as though they are speaking English.
XYZ Films partnered with AI start-up Flawless, whose TrueSync tool alters the characters' mouth movements and speech so that they appear perfectly synced for an English-speaking audience.
'For the movie industry, this is a game changer,' producer Albin Pettersson declared in a behind-the-scenes trailer for the film.
'The Swedish language is a barrier when you want to reach out around the world.'
It's important to note the AI tool has not replaced the actors: the original cast of Watch The Skies shot the film in Swedish, then recorded their English lines in a studio, keeping the production compliant with SAG-AFTRA guidelines.
'I think a lot of filmmakers and a lot of actors will be afraid of this new technology at first,' added writer and director Victor Danell. 'But we have creative control and to act out the film in English was a real exciting experience.'
Watch The Skies is the first in a slate of AI-dubbed international film collaborations between XYZ Films and Flawless set to be released in the US. They include the French film The Book of Solutions, the Korean film Smugglers, the Persian-language film Tatami, and the German film The Light.
Related Articles

The Australian
7 hours ago
Triumvirate of senior executives is the key to succeeding with AI
ROI is the goal for most organisations today as they work to conceive and execute a bold AI vision, but measurable value can be elusive. Whether owing to greater than expected costs, weaker than desired capabilities, or unanticipated risks, demonstrable results aren't guaranteed. Take generative AI, for example: a recent Deloitte survey revealed that more than 40 per cent of organisations struggled to define and measure the impact of their generative AI efforts.

Leadership drives the pursuit of AI value, and many enterprises may benefit from taking a fresh look at which executives are playing a central part. A triumvirate of executive roles tends to carry the banner for AI ROI: the CIO, the CFO, and the chief strategy officer (CSO). Defining what these executives own and how they work together can help pave a smoother path from AI ambitions to concrete business outcomes.

Leaders for the AI Journey

The triumvirate of executive leaders is at the centre of AI collaboration and strategic decision-making. Each has distinct responsibilities and priorities.

CFO: the capital allocator. The CFO has the final say in capital deployment and prioritisation and views technology programs through the lens of cost and return. Budgets, forecasts, and quantifiable outcomes are where the innovative rubber meets the road. The CFO is concerned with whether an AI use case can deliver more business value than it costs, as well as whether risks could imperil the desired value. To support and measure ROI, CFOs use performance metrics and AI-specific profit and loss statements, allocate budgets for AI investments, and mitigate the financial risks associated with them. CFOs are accountable for formalising and communicating economic value and results, so their focus is on monitoring how AI programs affect strategic targets such as key performance indicators and earnings per share. Importantly, decisions are made in conjunction with line-of-business leaders, including collaborating with IT leadership on vendor relationships and contract negotiations. Ultimately, the CFO's financial oversight and guidance help ensure that investments are strategically aligned and deliver measurable value; if AI outcomes do not affect P&L statements, the programs may be coming up short.

CSO: the convener with a vision. The CSO owns the value narrative and facilitates fact-based strategy at the enterprise level. This executive's purview includes activities such as aligning stakeholders on use cases and applications, setting quantitative targets, sensing the ecosystem for partners and M&A targets, and helping the organisation track and measure success. CSOs can push a bold AI vision and, while they are not a proxy for AI expertise, should understand AI as a mechanism for enabling the commercial strategy. CSOs increasingly hold a scorekeeper function, ensuring strategy is executed through the leadership team's activities. If there is not an enterprise-level conversation happening around AI, the CSO should prompt the discussion. There is reason to think CSOs could be playing a larger role in AI strategy: a Deloitte survey found 54 per cent of CSOs play only a supporting role in shaping AI strategy, and 31 per cent reported having no role in enterprise ecosystem strategy at all. Ultimately, the vision for value and technology trust is established by people, and the CSO plays a vital part.

CIO: the technology catalyst. CIOs are the bridge between technology capabilities and business value, and they are positioned to bring forward opportunities that can make the enterprise strategy real. The CIO is the CSO's co-conspirator in pursuing value. The technology executive is responsible for execution and implementation and leads the hard work of making AI use cases deliver impact at scale, including by developing the technology infrastructure and governance frameworks, managing vendor relationships, and evaluating emerging technologies. Yet the CIO needs to consider not just technology capabilities but how technology choices support the AI vision codified in enterprise strategy. There may be room for CIOs to take a broader view of their role in driving strategy: a 2024 Deloitte survey found only 35 per cent of CIOs ranked embracing the potential of AI and analytics as their first priority. The CIO's role could include overseeing AI integration and risk management, fostering a data-driven culture, upskilling the workforce, and measuring AI's effect on business outcomes.

Strategic Collaboration in the AI Era

The C-suite triumvirate lays the foundation for AI ROI and charts the path towards capturing it. In AI, as with other technologies, the numerator of ROI is the expected value; this is owned by the business units that use the deployed AI to generate value, be it via efficiency, productivity, or other metrics. The denominator of ROI, meanwhile, is the cost to achieve the expected value. With this insight, the question becomes: what can the triumvirate do to strengthen the collaborative endeavour and show up accordingly?

Regular, collaborative engagements are where leaders can gain insight into competing priorities and potential risks. They are also an opportunity to identify where other leaders should join the AI deliberations. For example, at what point should the chief legal officer be included to raise priorities around compliance and technology trust? Additionally, companies should recognise that competitive advantage from AI is not just about operational excellence; AI at scale can play a role in different parts of the value chain.

The triumvirate should think holistically about how to scale AI, including workforce needs, process changes, and technology orchestration. AI programs thrive (or falter) based on true understanding in the decision-making process and a continual focus on commercial value. In this, the triumvirate of AI ROI can and should be at the forefront.

Lou DiLorenzo is principal and US CIO Program leader; Anjali Shaikh is managing director and US CIO Program experience director; Nick Jameson is principal; Gagan Chawla is principal; and Tomiko Partington is senior manager at Deloitte Consulting LLP. As published in the May 10, 2025 edition of The WSJ CIO Journal.

Disclaimer

This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional adviser. Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.

About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ('DTTL'), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as 'Deloitte Global') does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the 'Deloitte' name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see to learn more about our global network of member firms.

Copyright © 2025 Deloitte Development LLC. All rights reserved.


The Advertiser
11 hours ago
Getty argues its UK copyright case does not threaten AI
Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI has begun at London's High Court, with Getty rejecting Stability AI's contention the case poses a threat to the generative AI industry.

Seattle-based Getty, which produces editorial content and creative stock images and video, accuses Stability AI of using its images to "train" its Stable Diffusion system, which can generate images from text inputs. Getty, which is bringing a parallel lawsuit against Stability AI in the United States, says Stability AI unlawfully scraped millions of images from its websites and used them to train and develop Stable Diffusion.

Stability AI - which has raised hundreds of millions of dollars in funding and in March announced investment by the world's largest advertising company, WPP - is fighting the case and denies infringing any of Getty's rights. Before the trial began on Monday, Stability AI's spokesperson said "the wider dispute is about technological innovation and freedom of ideas". "Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression," the spokesperson said.

In court filings, Stability AI lawyer Hugo Cuddigan said Getty's lawsuit posed "an overt threat to Stability's whole business and the wider generative AI industry". Getty's lawyers said that argument was incorrect and their case was about upholding intellectual property rights. "It is not a battle between creatives and technology, where a win for Getty Images means the end of AI," Getty's lawyer Lindsay Lane told the court. She added: "The two industries can exist in synergistic harmony because copyright works and database rights are critical to the advancement and success of AI ... the problem is when AI companies such as Stability want to use those works without payment."
Getty's case is one of several lawsuits brought in the United Kingdom, the US and elsewhere over the use of copyright-protected material to train AI models, after ChatGPT and other AI tools became widely available more than two years ago. Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists.

Lawyers say Getty's case will have a major effect on the law, as well as potentially informing government policy on copyright protections relating to AI. "Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI," said Rebecca Newman, a lawyer at Addleshaw Goddard who is not involved in the case.

Cerys Wyn Davies, from the law firm Pinsent Masons, said the High Court's ruling "could have a major bearing on market practice and the UK's attractiveness as a jurisdiction for AI development".

