Britain and ChatGPT maker OpenAI sign new strategic partnership

Globe and Mail | 21-07-2025
Britain and ChatGPT maker OpenAI have signed a new strategic partnership to deepen collaboration on AI security research and explore investing in British AI infrastructure, such as data centres, the government said on Monday.
'AI will be fundamental in driving the change we need to see across the country – whether that's in fixing the NHS (National Health Service), breaking down barriers to opportunity or driving economic growth,' Peter Kyle, secretary of state for technology, said in a statement.
'This can't be achieved without companies like OpenAI, who are driving this revolution forward internationally. This partnership will see more of their work taking place in the UK.'
The government has set out plans to invest £1-billion in computing infrastructure for AI development, hoping to increase public compute capacity 20-fold over the next five years.
The United States, China and India are emerging as front-runners in the race to develop AI, putting pressure on Europe to catch up.
Under the partnership, OpenAI, whose tie-up with Microsoft once drew the scrutiny of Britain's competition regulator, may expand its London office, and the two sides will explore where AI can be deployed in areas such as justice, defence, security and education technology.
In the same statement, OpenAI head Sam Altman praised the government for being the first to recognize the technology's potential through its 'AI Opportunities Action Plan' – an initiative by Prime Minister Keir Starmer to turn the U.K. into an artificial intelligence superpower.
The Labour government, which has struggled to increase economic growth meaningfully in its first year in power and has since fallen behind in polls, has said that the technology could increase productivity by 1.5 per cent a year, worth an extra £47-billion (about $86.77-billion) annually over a decade.

Related Articles

Prince Harry, African charity row rumbles on as watchdog blames ‘all parties'
CTV News | 24 minutes ago

Britain's Prince Harry speaks during the Clinton Global Initiative, on Sept. 24, 2024, in New York. (AP Photo/Andres Kudacki, file)

An African charity said Wednesday it would consider further action in a row with its co-founder Prince Harry after a British watchdog criticised 'all parties' for letting the bitter internal dispute play out in public.

Without naming individuals, Britain's Charity Commission pointed to 'mismanagement' at the AIDS charity Sentebale but said it found no evidence of 'bullying' -- a charge that had been levelled at Harry by the organisation's chairperson, Sophie Chandauka, in March.

Days earlier, Harry -- the youngest son of King Charles III -- and co-founder Prince Seeiso of Lesotho had announced they were resigning from the charity they established in 2006, after the trustees quit when Chandauka refused their demand to step down. Harry, also known as the Duke of Sussex, launched the charity in honour of his mother, Princess Diana, to help young people with HIV and AIDS in Lesotho and later Botswana.

After a months-long inquiry, the commission 'found no evidence of widespread or systemic bullying or harassment, including misogyny or misogynoir (prejudice against black women) at the charity,' it said in its conclusions published Wednesday.

But it 'criticised all parties to the dispute for allowing it to play out publicly,' saying the 'damaging internal dispute' had 'severely impacted the charity's reputation.' It found there was 'a lack of clarity in delegations' which led to 'mismanagement in the administration of the charity,' and issued the organisation with a plan to 'address governance weaknesses.'

'Heartbreaking'

Sentebale said in a statement it welcomed the findings. Chandauka, who was appointed to the voluntary post in 2023 and remains the charity's chair, said she 'appreciated' the conclusions, saying that they 'confirm the governance concerns I raised privately in February 2025.'

But a Sentebale spokesperson said in a later statement that the watchdog 'has not made any findings in relation to individuals,' meaning Prince Harry 'was not cleared of individual claims.' The spokesperson added that Sentebale 'would certainly consider' referring the issues not dealt with to a different organisation, such as the Advisory, Conciliation and Arbitration Service.

A spokesperson for Prince Harry said the probe 'falls troublingly short in many regards.' 'Primarily the fact that the consequences of the current chair's actions will not be borne by her -- but by the children who rely on Sentebale's support,' the spokesperson said in a statement. 'The Duke of Sussex will now focus on finding new ways to continue supporting the children of Lesotho and Botswana.'

Harry said in an April statement that the events had 'been heartbreaking to witness, especially when such blatant lies hurt those who have invested decades in this shared goal.'

Objections

Speaking to British media after accusing the prince of trying to force her out, Chandauka criticised Harry for his decision to bring a Netflix camera crew to a fundraiser last year. She also objected to an unplanned appearance by his wife Meghan at the event.

The accusations were a fresh blow for the prince, who kept up only a handful of his private patronages, including with Sentebale, after a dramatic split with the British royal family in 2020. That was when he left Britain to live in North America with his wife and children.

Harry chose the name Sentebale as a tribute to Diana, who died in a Paris car crash in 1997 when the prince was just 12. It means 'forget me not' in the Sesotho language and is also used to say goodbye.

'Moving forward I urge all parties not to lose sight of those who rely on the charity's services,' said the commission's chief executive David Holdsworth.

In her statement, Chandauka added: 'Despite the recent turbulence, we will always be inspired by the vision of our Founders, Prince Harry and Prince Seeiso.'

Deepfake AI Market Latest Trends, Future Outlook, Size, Share, Applications, Advanced Technology And Forecast
Globe and Mail | 24 minutes ago

"Datambit (UK), Microsoft (US), AWS (US), Google (US), Intel (US), Veritone (US), Cogito Tech (US), Primeau Forensics (US), iProov (UK), Kairos (US), ValidSoft (US), MyHeritage (Israel), HyperVerge (US), BioID (Germany), DuckDuckGoose AI (Netherlands), Pindrop (US), Truepic (US), Synthesia (UK)." Deepfake AI Market by Offering (Deepfake Generation Software, Deepfake Detection & Authentication Software, Liveness Check Software, Services), Technology (Transformer Models, GANs, Autoencoders, NLP, RNNs, Diffusion Models) - Global Forecast to 2031. The size of the worldwide deepfake AI market is expected to increase at a compound annual growth rate (CAGR) of 42.8% from USD 857.1 million in 2025 to USD 7,272.8 million by 2031. Generative Adversarial Networks (GANs) and diffusion models, which enable hyper-realistic deepfake generation; the growing creator economy and social media's demand for creative content, which leads to wider adoption; and the concerning increase in deepfake frauds and misinformation, which feeds the urgent need for reliable detection solutions across industries, are the main factors driving the deepfake AI market. Download PDF Brochure@ The deepfake AI market is witnessing accelerated growth due to the rising adoption of multimodal detection systems that combine audio-visual signals with metadata analysis to enhance detection precision. As synthetic media becomes more layered, with deepfakes now blending facial animations, voice mimicry, and scene manipulation, enterprises are investing in tools that analyze cross-modal inconsistencies rather than relying on isolated visual cues. These advanced solutions are being embedded across high-stakes environments such as banking authentication flows, online proctoring, and digital onboarding platforms where real-time decisioning and high accuracy are critical. Multimodal detection also supports operational scalability by reducing false positives and improving model confidence, enabling enterprises to automate content trust decisions at volume. Regulatory scrutiny is further driving adoption, especially in sectors such as finance, government, and telecommunications, where content authenticity and user verification have become compliance priorities. With AI foundation models and transformer architectures now capable of jointly processing audio, video, and contextual metadata, the deepfake detection landscape is evolving into a strategic layer of enterprise risk management. Generative adversarial networks remain the backbone technology of deepfake AI development and detection, registering the largest share by market value in 2025 Among all core technologies underpinning the deepfake AI market, Generative Adversarial Networks (GANs) represent the largest and most commercially entrenched segment. Their bidirectional framework—comprising generator and discriminator models—forms the foundational mechanism for crafting synthetic media and serves as the analytical basis for detecting forgeries with increasing accuracy. GANs have matured from research prototypes to enterprise-grade engines that power a wide spectrum of deepfake capabilities, including face swapping, expression control, voice imitation, and image realism scoring. On the detection side, their adversarial structure is being reverse-engineered to identify digital fingerprints, compression artifacts, and inconsistencies in texture, lighting, or pixel alignment. 
GANs are also embedded in real-time media forensics and security pipelines, especially across sectors such as law enforcement and social platforms, where they aid in decoding malicious manipulation. The widespread availability of pre-trained GAN libraries and cloud-based tools is fueling enterprise adoption and reducing time-to-deployment for deepfake-centric solutions. Their continued evolution into variants like StyleGAN and conditional GANs is enabling more granular control and detection precision, positioning them as the dominant technology category in both deepfake generation and defense.

BFSI is expected to be the fastest-growing vertical during the forecast period, fueled by a spike in synthetic fraud threats and regulatory pressure

By vertical, the BFSI sector is expected to register the fastest growth in the deepfake AI market during the forecast period, driven by rising concerns around digital identity fraud, social engineering attacks, and synthetic KYC submissions. As financial institutions digitalize onboarding and service workflows, they are deploying advanced deepfake detection systems to validate customer identity during eKYC, video banking, and loan verification processes. Liveness detection and micro-expression analysis are increasingly being used to distinguish real users from AI-generated imposters, with regulatory mandates further accelerating deployment. Fraud analytics platforms are integrating deepfake-specific classifiers to monitor voice spoofing in call centers, manipulated transaction videos, and altered screenshots submitted in claims. Additionally, private banks and insurance providers are leveraging synthetic media analysis tools to prevent reputational and compliance risks linked to fake communications or phishing campaigns. Strategic partnerships with detection vendors and biometric verification startups are also rising, particularly in North America and Asia Pacific. With regulators in several jurisdictions issuing early-stage guidelines on synthetic identity detection, the BFSI segment is rapidly becoming the proving ground for enterprise-grade, compliant deepfake AI solutions.

Asia Pacific to witness the fastest growth in the deepfake AI market, accelerated by a surge in synthetic media abuse and high-volume digital onboarding across financial institutions

Asia Pacific is witnessing the fastest growth in the deepfake AI market, fueled by rapid digital transformation, a booming social media user base, and mounting cybersecurity threats. Countries such as China, India, South Korea, and Japan are experiencing a surge in manipulated media cases, ranging from identity fraud to misinformation campaigns, which are prompting governments and enterprises to invest in detection and liveness verification technologies. Financial institutions across the region are embedding deepfake identification tools within eKYC and fraud prevention systems, especially in emerging markets with high digital onboarding volumes. Regulatory bodies have also begun tightening guidelines on content authenticity and AI usage, encouraging the adoption of compliant AI governance and media authentication layers. The region's large pool of AI research talent, combined with public-private collaborations, is accelerating the development of multimodal detection models customized for regional languages and facial features. Additionally, Asia Pacific's growing investments in metaverse infrastructure and synthetic media production are creating parallel demand for quality control tools.
Enterprises in sectors such as BFSI, government, and media are now embedding deepfake detection capabilities at the infrastructure level, positioning Asia Pacific as the most dynamic growth hub for deepfake AI during the forecast period.

Unique Features in the Deepfake AI Market

Generative Adversarial Networks (GANs) remain the backbone of deepfake generation, responsible for creating highly realistic synthetic media by pitting generator and discriminator models against each other. These systems capture subtle facial expressions, voice patterns, and micro-motions. Meanwhile, transformer-based architectures—rapidly growing in adoption—are key in boosting realism, temporal coherence, and multimodal integration in deepfake outputs.

Platforms like Synthesia and Colossyan offer scalable generation of AI avatars that support dozens of languages, enabling video production without cameras or actors. Reid Hoffman's "deepfake twin" experiment shows how these tools can clone one's voice and extend it into multiple languages—used, for example, to deliver speeches in Hindi, Chinese, Japanese, and more.

Deepfake maturity now includes real-time and even autonomous generation, where AI-driven agents interact live across platforms. Check Point Research notes these can be used in scams like CEO fraud in live video calls, with losses exceeding tens of millions of dollars in recent incidents.

The detection segment has grown sophisticated: solutions like Vastav AI (India-based), Intel FakeCatcher, BioID, and Veritone offer forensic-level detection, metadata inspection, confidence scoring, and heatmaps to identify deepfakes in real time. These tools are increasingly offered on cloud platforms for scalable enterprise deployment.

Major Highlights of the Deepfake AI Market

The deepfake AI market is witnessing explosive growth, driven by advancements in generative AI, computer vision, and natural language processing. Its use spans entertainment, marketing, education, healthcare and, increasingly, malicious domains like misinformation and cyber fraud. The expansion of use cases—from Hollywood-grade face swapping to AI-generated avatars—underscores the growing versatility and commercial interest in the space.

One of the most pressing highlights is the surge in cybercrime facilitated by deepfakes, particularly impersonation scams, political manipulation, and financial fraud. Real-time deepfake voice or video manipulation has been used in high-profile scams, including impersonation of CEOs during video calls to extract money or data. As the technology becomes more accessible, threats to businesses and governments are becoming more sophisticated and harder to detect.

To counteract misuse, demand for deepfake detection technologies has surged. Tools from companies like Intel, Sensity AI, Deepware, and Vastav AI are being adopted by media platforms, financial institutions, and law enforcement. These tools use AI to identify manipulated content through metadata, facial distortions, lip sync mismatches, and contextual anomalies—ushering in a new age of content authentication.

Despite the risks, the deepfake AI market is also evolving positively, with ethical applications growing in fields like education, accessibility, marketing, and film production. For instance, AI avatars are being used for personalized learning, digital actors for low-budget film production, and language dubbing across global markets. These uses are helping to legitimize and monetize the technology in regulated ways.
Top Companies in the Deepfake AI Market

The major players in the deepfake AI market include Datambit (UK), Microsoft (US), AWS (US), Google (US), Intel (US), Veritone (US), Cogito Tech (US), Primeau Forensics (US), iProov (UK), Kairos (US), ValidSoft (US), MyHeritage (Israel), HyperVerge (US), BioID (Germany), DuckDuckGoose AI (Netherlands), Pindrop (US), Truepic (US), Synthesia (UK), Deepware (Turkey), iDenfy (US), Q-Integrity (Switzerland), D-ID (Israel), Resemble AI (US), Sensity AI (Netherlands), Reality Defender (US), Attestiv (US), WeVerify (Germany), Kroop AI (India), Respeecher (Ukraine), DeepSwap (US), Reface (Ukraine), Oz Forensics (UAE), Perfios (US), Illuminarty (US), Deepfake Detector (UK), buster (France), AuthenticID (US), Jumio (US), and Paravision (US).

Microsoft

Microsoft has become one of the key players in the deepfake AI market through a broader strategy of embedding advanced AI ethics, trust, and safety measures across its expansive product ecosystem. Recognizing the threat posed by synthetic media to digital trust, Microsoft has developed and integrated technologies such as the Microsoft Video Authenticator, which can analyze photos and videos to provide a confidence score about whether the media is artificially manipulated. Additionally, Microsoft's strategic acquisition of startups and partnerships with academic institutions have strengthened its detection capabilities. A notable move was its collaboration with the AI Foundation to advance responsible content creation and fight deepfake misuse. By embedding deepfake detection and authenticity verification tools within its Azure AI and Microsoft 365 suites, Microsoft empowers enterprises, media outlets, and government agencies to protect against misinformation. The company has also backed initiatives like Project Origin and the Coalition for Content Provenance and Authenticity (C2PA) to promote industry-wide standards for digital media provenance.

These strategic choices align with Microsoft's trust-first brand positioning, giving it an edge in addressing regulatory concerns and building customer confidence. Moreover, Microsoft invests heavily in educating its enterprise customers on synthetic media threats, positioning itself not just as a tech provider but as a key thought leader shaping policy discussions on deepfakes. This multi-faceted approach has helped Microsoft strengthen its share in the deepfake AI market while reinforcing its commitment to digital security and ethical AI innovation.

Google

Google has emerged as one of the most influential technology players tackling the challenges posed by deepfakes through a mix of pioneering research, robust product integration, and strategic ecosystem collaboration. Google's decision to publicly release one of the largest deepfake datasets, the DeepFake Detection Dataset, gave the global research community a valuable resource to train and benchmark detection models. This open-source approach demonstrates Google's commitment to transparency and collective progress in combating synthetic media threats. On the product side, Google has embedded detection capabilities within its YouTube platform to counter manipulated videos and misinformation campaigns, investing heavily in machine learning models that flag fake content at scale.
Google has also been a driving force behind open standards for digital media authenticity through partnerships with the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA), aligning its strategy with industry leaders like Adobe and Twitter. Beyond detection, Google's AI research teams at DeepMind contribute foundational research on generative adversarial networks (GANs) and countermeasures, ensuring it stays at the forefront of both generation and detection advancements. By combining its technical expertise, vast computing infrastructure, and global reach, Google is uniquely positioned to address deepfake risks across platforms and devices. This proactive, research-driven approach enhances its reputation as a trusted steward of information integrity, bolstering its competitive advantage in the rapidly evolving deepfake AI market.

Datambit

Datambit is a UK-based AI company recognized for its innovative contributions to multimedia forensics and synthetic media detection. In the deepfake AI market, Datambit focuses on developing advanced detection systems that leverage computer vision and machine learning to identify manipulated video and audio content. Its solutions are increasingly adopted by media companies, legal entities, and cybersecurity firms to combat misinformation, protect brand integrity, and enhance content authenticity in a rapidly evolving digital landscape.

Amazon Web Services (AWS)

Amazon Web Services (AWS) plays a pivotal role in the deepfake AI market by offering scalable cloud infrastructure and machine learning tools that enable the development and deployment of deepfake generation and detection technologies. Through services like Amazon Rekognition and SageMaker, AWS supports researchers, developers, and enterprises in creating synthetic media as well as detecting manipulated content. AWS also emphasizes ethical AI use, providing resources and policies aimed at mitigating the misuse of generative models.

Intel Corporation

Intel is a key player in the deepfake AI space, driving innovation through its hardware acceleration technologies and AI research. The company collaborates with academic and industry partners to develop tools for deepfake detection, including FakeCatcher, a real-time deepfake detection platform that identifies synthetic content by analyzing subtle biological signals in videos. Intel's commitment to responsible AI development and content authenticity positions it as a trusted leader in countering the spread of manipulated media across industries.
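As a quick arithmetic check, the headline forecast quoted at the top of this release is internally consistent. The sketch below assumes six annual compounding periods from 2025 to 2031; the report's own methodology is not stated.

```python
# Cross-check the quoted forecast: USD 857.1M (2025) -> USD 7,272.8M (2031)
# at a 42.8% CAGR, assuming six annual compounding periods.
start_musd, end_musd, periods = 857.1, 7272.8, 2031 - 2025

implied_cagr = (end_musd / start_musd) ** (1 / periods) - 1
projected = start_musd * (1 + 0.428) ** periods

print(f"Implied CAGR: {implied_cagr:.1%}")            # -> 42.8%
print(f"2031 value at 42.8%: {projected:,.1f} MUSD")  # -> ~7,267.9, within rounding of 7,272.8
```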

Kingston, Ont. hospital the first in Canada to use AI heart imaging technology
CTV News | an hour ago

The Kingston Health Sciences Centre (KHSC) will be the first hospital in Canada to use artificial intelligence to diagnose coronary artery disease on CT scans, thanks to a $100,000 donation.

The hospital in Kingston, Ont. is launching Heartflow, a 'revolutionary AI-based technology' that will allow radiologists and cardiologists to measure how blood flows through a patient's coronary arteries using a CT scan.

'This AI tool is a game changer for the way we triage patients,' Dr. Omar Islam, head of diagnostic radiology at Kingston Health Sciences Centre, said in a statement. 'Before, we had to send everyone with a possible significant blockage to the cardiovascular catheterization (cath) lab just to see if the flow was reduced. Now, we can do that non-invasively with Heartflow. If the flow is normal, the patient avoids an invasive procedure entirely. It helps our capacity in the cath lab and saves the health-care system money. From a patient perspective, it spares them a procedure they may not have needed.'

Traditionally, many patients had to undergo cardiac catheterization, an invasive test that involves threading a wire into the arteries to measure blockages. The Kingston Health Sciences Centre says Heartflow can reduce unnecessary catheterizations by up to 30 per cent, as doctors can make the measurement directly from a CT scan.

'For patients living with chest pain and suspected coronary artery disease, Heartflow provides a safer, faster and more accurate diagnosis of low blood flow,' the hospital said in a media release. 'It also helps medical teams determine how severe a blockage in a patient's artery may be—without having to undergo an invasive procedure.'

Heartflow will be fully operational at the hospital this month. Officials credit a $100,000 donation from local donor Stephen Sorensen for allowing the hospital to launch the technology.

'Thanks to Stephen Sorensen's visionary support, KHSC is able to invest in state-of-the-art technology that is improving care for our patients,' says KHSC CEO Dr. David Pichora. 'His belief in the power of innovation, particularly in the field of medical imaging, is creating a healthier future for our patients—and we are grateful for his remarkable leadership and generosity.'

Sorensen added: 'I'm always looking for innovative tools that can have an immediate impact on patients' lives and Heartflow fits the bill.'
