Latest news with #deepfake


Daily Mail
6 hours ago
- Health
Cyber security experts reveal the chilling number of images predators need to make deepfakes of children
Cybersecurity experts have revealed that predators need just 20 images to create deepfake videos of children, prompting urgent warnings over the growing dangers of sharing family photos online. Professor Carsten Maple, a leading expert from the University of Warwick and the Alan Turing Institute, said advanced AI tools can use a shockingly small number of pictures to generate realistic fake profiles and videos of minors. The consequences, he warned, can include identity theft, blackmail and online exploitation.
Parents are unknowingly giving criminals exactly what they need, with many doing so simply by uploading family pictures to social media and cloud storage platforms. 'It takes just 20 images for sophisticated AI tools to create a realistic profile of someone, or even a 30-second video,' said Professor Maple.
New research commissioned by privacy tech firm Proton found that UK parents share an average of 63 photos each month, most of them including children. One in five parents post family pictures multiple times a week, and two in five do so several times a month. The findings suggest today's children often have a digital footprint from birth, long before they understand the internet or can give consent.
But it's not just criminals that experts are worried about. Big tech firms are also harvesting these images for their own purposes. Professor Maple pointed to Instagram's recent policy change, which allows the platform to use user photos to train its AI systems, calling the move 'deeply concerning.' He said: 'These companies use consumer data to build advertising profiles, analyse trends, train algorithms and track behaviour — often without people fully realising what's being collected.'
Over half of parents now automatically back up their family images to cloud storage, and the average parent has around 185 photos of their child saved online at any given time. Yet almost half admit they didn't know that tech companies can access and analyse those photos. The study found four in ten parents believe tech firms only gather basic metadata, such as time, location or device used, while 11 per cent have no idea what kind of information is being collected at all.
Experts now warn that a generation of children could face serious long-term risks, including fraud, grooming and deepfake abuse, simply because of the volume of images being shared. 'Oversharing can lead to digital records that are difficult or impossible to delete,' said Professor Maple. 'This opens the door not just to identity fraud, but also to more sinister forms of exploitation.'
Despite this, many parents remain unaware of how vulnerable their images really are. Some 72 per cent say photo privacy is important to them, and a staggering 94 per cent believe tech firms should be more transparent about how they use stored data. Parental anxiety appears to be rising, with around 32 per cent of parents saying they are constantly worried about their phone or cloud accounts being hacked, and nearly half saying they worry about it from time to time.
More than half have already taken extra security steps, such as using Face ID and PIN codes, limiting app downloads and keeping devices updated. But Professor Maple says that's not enough. With the rapid growth of AI and rising numbers of data breaches, the need to strengthen protection for children has never been more urgent. 'We are building digital profiles of children without their consent,' he said. 'The risks are real, and the damage, in many cases, irreversible.'


Forbes
2 days ago
- Politics
Deepfakes Are Spreading — Can The Law Keep Up?
A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg. (Elyse Samuels/The Washington Post via Getty Images)
When an explicit AI-generated video of Taylor Swift went viral in January 2024, platforms were slow to take it down, and the law offered no clear answers. With no organized regulatory structure in place for victims — famous or not — states are scrambling to fill the void, some targeting political ads, others cracking down on pornographic content or identity fraud. That has led to a patchwork of laws, enforced differently across jurisdictions and drawing varying lines between harm and protected speech.
In April, prosecutors in Pennsylvania invoked a newly enacted law to charge a man in possession of 29 AI-generated images of child sexual abuse — one of the first known uses of a state law to prosecute synthetic child abuse imagery. What began as a fringe concern straight out of dystopian fiction — that software could persuasively mimic faces, voices and identities — is now a central issue in legal, political and national security debates. Just this year, over 25 deepfake-related bills have been enacted across the U.S., according to Ballotpedia.
As laws finally begin to narrow the gap, the pushback is escalating. Consumer-grade diffusion models — used to create realistic media of people, mimic political figures for misinformation and facilitate identity fraud — are spreading through servers and subreddits with a virality and scale that makes them difficult for regulators to track, legislate against and take down.
'We almost have to create our own army,' said Laurie Segall, CEO of Mostly Human Media. 'That army includes legislation, laws and accountability at tech companies, and unfortunately, victims speaking up about how this is real abuse.' 'It's not just something that happens online,' added Segall. 'There's a real impact offline.'
Many recent laws pertain directly to the accessibility of the technology. Tennessee's new felony statute criminalizes the creation and dissemination of nonconsensual sexual deepfakes, carrying up to 15 years in prison. In California, where a record eight bills on AI-generated content passed in a single month, legislators have been attempting to regulate a wide range of related issues, from election-related deepfakes to how Hollywood uses deepfake technology.
These measures also reflect the increasing use of AI-generated imagery in crimes, often involving minors, and often on mainstream platforms, but the legal terrain remains a confusing minefield for victims. The same deepfake image might be criminal in one jurisdiction but dismissed in another, underscoring the growing chaos and discrepancies of state-level governance in the absence of federal standards.
'A young woman whose Instagram profile photo has been used to generate an explicit image would likely have legal recourse in California, but not Texas,' notes researcher Kaylee Williams. 'If the resulting image isn't considered realistic, it may be deemed criminal in Indiana, but not in Idaho or New Hampshire.' If the person who generated the image claims to have done so out of 'affection' rather than malice, the victim could seek justice in Florida, but not Virginia, Williams adds. Intimate deepfakes are the latest iteration of the dehumanization of women and girls in the digital sphere, says Williams, calling it 'a rampant problem that Congress has thus far refused to meaningfully address.'
According to a recent study by child exploitation prevention nonprofit Thorn, one in 10 teens say they know someone who had deepfake nude imagery created of them, while one in 17 say they have been a direct victim of this form of abuse. The harm also remains perniciously consistent: a 2019 study from cybersecurity firm Deeptrace found that a whopping 96% of online deepfake video content was nonconsensual pornography.
Despite the widespread harm, the recent legislative push has met with notable resistance. In California, a lawsuit filed last fall by right-wing content creator Chris Kohls — known as Mr Reagan on X — drew support from The Babylon Bee, Rumble and Elon Musk's X. Kohls challenged the state's enforcement of a deepfake law after posting an AI-generated video parodying a Harris campaign ad, arguing that the First Amendment protects his speech as satire. The plaintiffs contend that laws targeting political deepfakes, particularly those aimed at curbing election misinformation, risk silencing legitimate satire and expression. A federal judge agreed, at least partially, issuing an injunction that paused enforcement of one of the California laws, warning that it 'acts as a hammer instead of a scalpel.' Theodore Frank, an attorney for Kohls, said in a statement they were 'gratified that the district court agreed with our analysis.'
Meanwhile, Musk's X in April filed a separate suit against Minnesota over a similar measure, contending that the law infringes on constitutional rights and violates federal and state free speech protections. 'This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary,' the lawsuit states. 'Rather than allow covered platforms to make their own decisions about moderation of the content at issue here, it authorizes the government to substitute its judgment for those of the platforms,' it argues.
This tug-of-war remains a contentious topic in Congress. On May 22, the House of Representatives passed the 'One Big Beautiful Bill,' which includes a sweeping 10-year federal moratorium on state-level AI laws. Legal scholar and Emory University professor Jessica Roberts says Americans are left entirely vulnerable without state involvement. 'AI and related technologies are a new frontier, where our existing law can be a poor fit,' said Roberts. 'With the current congressional gridlock, disempowering states will effectively leave AI unregulated for a decade. That gap creates risks — including bias, invasions of privacy and widespread misinformation.'
Meanwhile, earlier this month, President Trump signed the Take It Down Act, which criminalizes the distribution of non-consensual explicit content — including AI-generated images — and mandates rapid takedown protocols by platforms. It passed with broad bipartisan support, but its enforcement mechanisms remain unclear at best.
Financial institutions are increasingly sounding the alarm over identity fraud. In a speech in March, Michael S. Barr of the Federal Reserve warned that 'deepfake technology has the potential to supercharge impersonation fraud and synthetic identity scams.' There's merit to that: in 2024, UK-based engineering giant Arup was defrauded out of $25 million via a deepfake video call with what appeared to be a legitimate senior executive. And last summer, Ferrari executives reportedly received WhatsApp voice messages mimicking their CEO's voice, down to the regional dialect.
Against that evolving threat landscape, the global regulatory conversation remains contentious, with no clear consensus. In India, where deepfakes currently slip through glaring legal lacunae, there is growing demand for targeted legislation. The European Union's AI Act takes a more unified approach, classifying deepfakes as high-risk and mandating clear labeling. China has gone even further, requiring digital watermarks on synthetic media and directing platforms to swiftly remove harmful content — part of its broader strategy of centralized content control. However, enforcement across the board remains difficult and elusive, especially when source code is public, servers are offshore, perpetrators operate anonymously and the ecosystem continues to enable rampant harm.
In Iowa, Dubuque County Sheriff Joseph L. Kennedy was reportedly dealing with a local case in which high school boys shared AI-generated explicit pictures of their female classmates. The technology was rudimentary, but it worked well enough to cause serious reputational damage. 'Sometimes, it just seems like we're chasing our tails,' Kennedy told the New York Times.
That sense of disarray may be familiar to regulators as they look to govern a future whose rules are constantly being written — and rewritten — in code. In many ways, the deepfake issue appears increasingly Kafkaesque: a bewildering maze of shifting identities, elusive culprits, a tech bureaucracy sprawling beyond regulatory reach — and laws that are always lagging at least a few steps behind.


Daily Mail
2 days ago
- Entertainment
Glamorous TV presenter reveals her horror after discovering video showing her doing 'explicit things'
A rugby league television presenter in New Zealand has opened up about the horror of discovering she had become the latest victim of a deepfake AI attack. Tiffany Salmond, a popular NRL sideline reporter known for her coverage of New Zealand Warriors games on Fox League, was targeted after posting a bikini photo on Instagram: a manipulated video was created and circulated online within hours.
Salmond condemned the act, stating, 'You don't make deepfakes of women you overlook. You make them of women you can't control.' Now she has spoken about the horror she felt when the video emerged. 'Felt important to speak up on this. Glad it's opening up a wider conversation,' she shared on Instagram.
'I'll be honest, it was shocking,' Salmond said. 'Having the public profile that I do, especially as a woman working in a male-dominated sport, I'm no stranger to having my looks discussed or being the subject of sometimes perverse conversations.
'But this was the first time it went beyond just chatter.
'To actually see photos of myself - ones I had posted confidently on social media - turned into videos where I'm moving and doing explicit actions, was surreal.
'If deepfakes were purely about attraction, we would see women making them about men, but we don't - and it's because in those dynamics, that power imbalance doesn't exist.
'We live in a society where men can't get enough of women's bodies, but it's only when they get a sneaky view that they weren't meant to see.'
It comes after Gold Coast Titans and New South Wales Blues star Jaime Chapman, 23, was recently targeted by a deepfake AI attack involving manipulated images of her in a bikini circulated online without her consent. Chapman publicly condemned the incident on Instagram, saying it was not the first time she had been subjected to such attacks and highlighting the emotional toll it has taken on her. In response, the Gold Coast Titans, alongside the NRL Integrity Unit and NSW Police, have launched an investigation to identify those responsible for creating and distributing the doctored images.
Deepfake images and videos have become a global problem. WNBA star Angel Reese has also been a victim, portrayed committing sexual acts in AI-generated photos. She dismissed the 'crazy and weird' images as fake at the time. 'Creating fake AI pictures of me is crazy and weird AF!' Reese wrote on X. 'Like I know I'm fine & seem to have an appeal to some but I'm literally 21 and yall doing this bs when I would neverrrrrr.'
The deepfakes are not limited to pornographic or salacious images and videos. High-profile sports stars including Cristiano Ronaldo, LeBron James, Patrick Mahomes, Lionel Messi and Tiger Woods have also been targeted in videos that appear to show them endorsing products, giving interviews or saying things they never said. But Shane Britten, CEO of Crime Stoppers International and founder of SocialProtect, previously told News Corp that women and Indigenous athletes were more likely to be subjected to online abuse, threats and deepfake content. 'On average, the top level female athletes we've seen will get what we would call a rape threat once a week,' he said.


South China Morning Post
3 days ago
- Entertainment
Johnny Somali's trial in South Korea highlights rising concern over 'nuisance influencers'
The trial of an American content creator whose disruptive and culturally insensitive acts sparked outrage in South Korea has fuelled calls for sterner responses to so-called nuisance influencers. Johnny Somali, whose real name is Ramsey Khalid Ismael, has been barred from leaving South Korea and faces seven charges, including obstruction of business and violations of the Minor Offences Act, according to local media. Ismael's earlier charges were relatively minor, but at his second hearing on May 16 he faced two serious charges of creating pornographic deepfakes, each carrying a maximum penalty of 10½ years. The 24-year-old pleaded guilty to the five minor charges and not guilty to the sex charges. His next trial hearing is scheduled for August 13. Among the actions he has been charged with are brandishing a dead fish on the subway, kissing a statue commemorating Korean World War II sex slaves, holding up a Japanese 'Rising Sun' flag and calling the disputed Liancourt Rocks by their Japanese name, Takeshima. The sex charges relate to AI-generated deepfake pornographic videos featuring Ismael and a female South Korean live-streamer.


CBC
3 days ago
- General
Followed, threatened and smeared — attacks by China against its critics in Canada are on the rise
For Yao Zhang, the news came as a shock. Sexually explicit deepfake images of her were circulating widely online — an attack that Ottawa blamed on the Chinese government.
It wasn't the first time Zhang had been targeted by China. Shortly after the Quebec-based accountant-turned-influencer travelled to Taiwan in January 2024 to support its independence, China's national police paid a visit to her aunt in Chifeng, in mainland China. Zhang was also doxxed: private information about her and members of her family — information only the Chinese government would know — was posted to a website listing people who weren't loyal to China. False rumours designed to discredit her began to spread online, alleging that she had an affair with her stepbrother and that she was being paid by the U.S. government.
Zhang isn't alone. CBC News spoke with several other Canadian activists who have spoken out against the People's Republic of China (PRC), all of whom described similar attacks: family members in China questioned by police; dissidents followed and surveilled in Canada; threatening phone calls; and online attacks like spamouflage, which uses a bot network to push spam-like content and propaganda across multiple social media platforms.
While Zhang says she still feels physically safe in Canada, the attacks take a mental toll. "I mean, they can reach you, of course, online or through your relatives in China. I don't think there's anything the Canadian government can do."
An investigation by CBC News, in conjunction with the International Consortium of Investigative Journalists (ICIJ), has found attacks by the Chinese government on dissidents living in Canada — and around the world — are on the rise. It's a trend that worries experts on China, who say the attacks damage democracy and national security in Canada.
"You've got a foreign government that is causing Canadian citizens and permanent residents to not feel safe in Canada, to not feel they can exercise their own rights and freedoms and speak out," said Michael Kovrig, a former diplomat and expert on Asia who was detained by China for more than 1,000 days. "By undermining those communities, they are ultimately undermining Canadian society and politics and ultimately national security."
In June 2024, Parliament adopted Bill C-70, which was meant to counter the rising threat of transnational repression and foreign interference in Canada by giving government departments and agencies more powers to fight it and by creating a foreign agent registry and a foreign interference transparency commissioner. However, nearly a year later, as reports indicate China has become more brazen, little has been done to put those measures in place, leaving it to Prime Minister Mark Carney's government to implement them.
In many cases, dissidents are targeted for expressing opinions contrary to the Chinese government's positions on what it calls "the five poisons": democracy in Hong Kong, treatment of Uyghurs, Tibetan freedom, the Falun Gong and Taiwanese independence. "[China] believes that a lot of the main threats to their dominance emanate from overseas," said Dan Stanton, a former CSIS intelligence officer who ran its China desk for four years. "So they need to go abroad to basically neutralize them."
The ICIJ's "China Targets" investigation, in which 43 media organizations in 30 countries interviewed more than 100 victims, also documented how the Chinese Communist Party and its proxies have used international organizations such as Interpol and the United Nations to go after its critics and how little some countries have done to stop China's attacks on people living within their borders. After reviewing Chinese government guidelines, the investigation found that "tactics recently deployed against the subjects mirrored the guidelines on how to control individuals labeled as domestic security threats," the ICIJ wrote. The Chinese Embassy in Canada has yet to respond to questions from CBC News. A 'genuine scourge' Most of those interviewed didn't report the incidents to authorities in the countries where they were living, the ICIJ found, because they either feared retaliation or doubted the ability of local authorities to help. A number of victims in Canada declined interview requests from CBC News, saying they feared repercussions on themselves or their families. The ICIJ and CBC News found similar tactics being used against critics. In Canada, Justice Marie-Josée Hogue's Public Inquiry into Foreign Interference heard from a number of witnesses — some in public and others behind closed doors — who described incidents of China targeting Canadian residents on Canadian soil. Hogue's conclusion — transnational repression in Canada was a "genuine scourge" and the PRC was the "most active perpetrator of foreign interference targeting Canadian democratic institutions." "What I have learned about it is sufficient for me to sound the alarm that the government must take this seriously and consider ways to address it, Hogue wrote in her final report in January. Hogue said assessing the extent of transnational repression in Canada by China and other countries is difficult because those targeted "may fear reprisals." China uses "a wide range of tradecraft… including using a person's family and friends in China as leverage against them," she wrote. "The PRC uses its diplomatic missions, PRC international students, community organizations and private individuals, among others, to carry out its transnational repression activities." Uyghur advocate stalked Mehmet Tohti, an Ottawa-based advocate for China's minority Uyghur community in Canada, knows what it is like to be under surveillance. Shortly after the House of Commons adopted a motion recognizing that China was carrying out genocide of Uyghurs in the province of Xinjiang, Tohti was leaving a dinner in Montreal when one of the other diners, who worked with Global Affairs, warned him two cars with covered licence plates were following him that evening. "It was the kind of moment that deeply affected my daily program," said Tohti. "Since then, even if sometimes it takes a little longer, every day I take a different route to my office and a different route from my office to my home." WATCH | How Tohti stays safe: This April, Tohti's three cellphones and his laptop were attacked. After reporting it to the RCMP and the University of Toronto's Citizen Lab, he learned that the attack originated in mainland China. Tohti said many Uyghurs living in Canada are cut off from their families back in China but are also afraid to travel to some other countries for fear that China will use Interpol red notices to have local authorities arrest them and extradite them to China. 
Uyghur rights advocate Huseyin Celil was arrested 19 years ago while visiting family in Uzbekistan and handed over to Chinese authorities, who refuse to recognize his Canadian citizenship. He was tried and convicted on what human rights groups have described as trumped-up terrorism charges. It is not known if Celil is alive or dead.
Canadian MPs, candidates targeted
While China has gone after sitting members of Parliament, like Conservative Michael Chong and New Democrat Jenny Kwan, one of its highest-profile attacks in recent months was on Joe Tay, a Toronto-area resident who has advocated for democracy in his birthplace of Hong Kong. In December, the Hong Kong Police Force issued a reward of HK$1 million ($177,111 Cdn) for information leading to his arrest for alleged national security violations.
During the federal election, as Tay was running as the Conservative candidate in the riding of Don Valley North, the Canadian government's Security and Intelligence Threats to Elections (SITE) Task Force reported a transnational repression operation on Chinese-language social media platforms, amplifying posts related to the bounty and arrest warrant against Tay and suppressing search results on platforms based in the PRC.
"The search engine only returns information about the bounty," the task force wrote. "This is not about a single incident with high levels of engagement. It is a series of deliberate and persistent activity across multiple platforms — those in which Chinese-speaking users in Canada are active, including: Facebook, WeChat, TikTok, RedNote and Douyin, a sister-app of TikTok for the Chinese market."
At one point during the campaign, police advised that Tay stop campaigning door to door for his own safety, he confirmed. Shortly after the federal election, on May 8, news reports in Hong Kong said Tay's cousin and his wife were brought to a police station from their home in Hong Kong's Fo Tan district to "assist in an investigation" relating to Tay. Tay declined an interview request from CBC News. "I will need a much longer time to reflect on a lot of things still," he wrote in a text.
Hugh Yu campaigned for Tay and leads a pro-democracy group in Toronto. He said his members are often reluctant to grant interviews or openly participate in his organization. "They walk away … a lot of people come and say, 'I'm sorry, Hugh, because I have a lot of pressure from family,'" he said, describing how "almost all" of their families in China would have their jobs or pensions threatened because of their public opposition to the Chinese government.
Yu said when his group holds pro-democracy demonstrations at Toronto City Hall or at the Chinese consulate, they are watched, with people taking photos and videos. "I think at this point the CCP is very, very successful [at] controlling all of the community, the Chinese community in Canada."
Gloria Fung, past president of Canada-Hong Kong Link, has lobbied for Canada to have a registry of foreign agents. She has also received phone calls warning her to stop interfering with Hong Kong's and China's affairs, as well as notices from Google about attempts by state-level hackers to get into her computer systems.
'Trying to censor and silence'
Kovrig says China tries to influence how it is perceived and control the message. If influence doesn't work, it resorts to transnational repression.
"You're either trying to incentivize people to be supportive of the PRC… or you're trying to censor and silence and coerce potential critics and dissidents to be afraid to speak out," he said. "And that's the repression part." Kovrig says the PRC tends to target Chinese diaspora communities more because it is easier to intimidate people who have relatives back in China or who belong to a community where many people are sympathetic to the CCP. It's also harder for police or intelligence agencies to get inside those communities and understand what is going on. WATCH | China becoming more brazen, Kovrig says: Kovrig has also observed how the PRC has become more aggressive over time. "Whereas previously, Chinese actors might have been relatively reluctant to be more heavy-handed or coercive for fear of negative consequences, increasingly, as China has become more powerful as a state, it's become increasingly brazen about what it's willing to do." Stanton, the former CSIS officer, says where once China might have tried to bring a dissident back to China, now the surveillance and the tactics are more sophisticated. "They may approach extended family members in the PRC, starting with a subtle message, and then it gets a little graver that their relative or counterpart over in Canada is doing anti-state activity.… Maybe someone will lose a job in China to get the message to that person in Canada that they can't speak freely." Stanton, who would like to see a public inquiry on transnational repression, said the government needs a more cohesive approach to dealing with it. "You can't deal with that if the community is not prepared to come forward and talk about it," he said, adding that they're reticent about talking about it because, generally speaking, there's never any action from Canadian officials. "They're left speaking out about it and nothing's done about it from their perception." WATCH | Stanton describes China's tactics: In their responses to the ICIJ and other media organizations, other Chinese embassies dismissed reports China was engaging in transnational repression. "There is no such thing as 'reaching beyond borders' to target so-called dissidents and overseas Chinese… the Chinese government strictly abides by international law and the sovereignty of other countries," Liu Pengyu, spokesperson for the Chinese Embassy in the United States, told the ICIJ. "The notion of 'transnational repression' is a groundless accusation, fabricated by a handful of countries and organizations to slander China." As for affairs related to Hong Kong, Tibetans and Uyghurs, they "are entirely China's internal matters," Pengyu wrote. "China firmly opposes the politicization, instrumentalization, or weaponization of human rights issues, as well as foreign interference under the pretext of human rights." Activists 'dismayed' at lack of protection Dennis Molinaro, of Ontario Tech University, who recently wrote Under Siege, a book on foreign interference by China in Canadian society, said other countries like Australia and the United States have taken more steps to curb transnational repression. "A lot of activists are particularly dismayed and upset by how little has been done to protect people in Canada and Canadian citizens," he said. "There's sometimes this view that this is akin to community infighting, and it's not. "This is an aggressive state that is targeting Canadian citizens within Canada. These are citizens that are a part of Canada. They shouldn't be ignored," said Molinaro. 
While direct attacks and threatening phone calls have been largely confined to more active members of the Chinese diaspora in Canada, Fung said transnational repression has had a chilling effect on the entire community. "There's a very famous idiom in China that you kill the chicken to scare all the monkeys."
Fung said that by delaying the implementation of the foreign agent registry provided for in Bill C-70, the government is giving "a green light" to foreign agents to continue to operate on Canadian soil without any consequences.
Public Safety Minister Gary Anandasangaree's office has yet to respond to requests from CBC News for an interview. Max Watson, a spokesperson for the ministry, said the government has been actively responding to transnational repression, working with communities and with international partners to address the threat. However, Watson said several steps are still required to implement the provisions of C-70, such as drafting regulations, setting up the office, appointing the commissioner and building the IT infrastructure for a registry.
But advocates like Tohti and Yu say their sense of safety and security in Canada has deteriorated. Yu says that unlike 20 years ago, when he first arrived, he no longer feels safe in Canada. Zhang, however, has no plans to stop speaking out — even if what she says angers the Chinese government. "At the end of the day, Canadians will protect me from the Chinese government's hand. I truly believe that."