
Latest news with #ThomsonReutersFoundation

Special feature - Philippine marine life under threat from industrial fishing

The Star

a day ago



MANILA (Thomson Reuters Foundation): Impoverished fishing communities in the Philippines are caught in a David-and-Goliath fight with industrial fishing companies after the country's top court loosened restrictions on commercial operations in protected coastal waters.

Already facing threats from extreme weather and urban development that have destroyed fish breeding grounds, fishers fear they may now have to compete with large vessels in municipal waters, the 15-km (9-mile) stretch of sea off the coastlines of cities.

"Once commercial fishing vessels enter our area, there will come a time when we will no longer be able to catch any fish," said Rommel Escarial, 37, who has fished Manila Bay since he was a teenager.

Mayors, environmentalists, fishing communities and the national government have all appealed against a Supreme Court ruling last year that invalidated a ban on large-scale fishing operations in municipal waters, where about 2 million people rely on fishing for their livelihoods.

While the Supreme Court decision is not yet final during the appeals process, lawyer Grizelda Mayo-Anda of the Environmental Legal Assistance Center said that some commercial fishers have already been entering municipal waters. "In Palawan province … commercial fishers now use the ruling to their advantage," said Mayo-Anda, whose NGO has joined one of the appeals.

Municipal waters not only provide income to communities, who are among the country's poorest, but also act as extensions of protected marine areas by preventing overfishing in productive habitats to allow recovery of depleted stocks.

Small-scale Filipino fishers, who use more sustainable methods such as hand-lining, cast-net fishing or bamboo fish corrals, have struggled for years with the encroachment of commercial fishing vessels in municipal waters.
Some 370,000 municipal fishing vessels and more than 5,000 commercial vessels are registered in the Philippines, according to 2022 data from the government's Bureau of Fisheries and Aquatic Resources.

Last year fisheries production fell 5%, according to government figures, while production from both commercial and municipal fishers declined by nearly one-quarter from 2010 to 2023 due to overfishing, illegal fishing and habitat destruction. Commercial fishing often involves trawling, in which a vessel uses nets to collect everything in its path, damaging coral reefs, seagrass beds and other habitats.

"If we totally allow commercial fishers even into municipal waters, it will only further decline our fisheries production," said Jerwin Baure, a marine biologist and member of the Advocates of Science and Technology for the People, an association of Filipino scientists.

FEW PROTECTIONS

The League of Municipalities, an association of more than 1,400 mayors, questioned the ruling in February, saying preferential rights for small-scale boats were a "matter of social justice, economic stability and environmental sustainability."

Alfredo Coro, mayor of the coastal town of Del Carmen, also appealed to Philippine President Ferdinand Marcos Jr. and Supreme Court justices to overturn the ruling. "The small fishers are ... continuously being exposed to multiple threats including impacts of climate change, low income without social protection and limited access to public services due to their remote habitation," he said in an open letter shared with the Thomson Reuters Foundation.

Of the Philippines' more than 2 million square km (772,000 square miles) of marine waters, 15% are classified as municipal waters, while commercial fishing vessels are allowed to fish within 84% of territorial waters, according to the Philippine Association of Marine Science.
Fishers compete in shallow waters of up to 50 metres depth, because these are the richest fishing grounds due to their proximity to sunlight and nutrients.

Under the Philippines' fisheries code, small to medium commercial vessels may be given permission to fish in municipal waters without active gears like trawls or towed nets, which damage ecosystems. Trawls are used to catch saltwater species like shrimps and anchovies, while purse seine - in which a large net surrounds a school of fish - is used for surface-dwelling and midwater species such as sardines, tuna and mackerel.

Baure said the court ruling may force municipal and commercial boats to fight for fish stocks, with smaller boats at a clear disadvantage in fuel and equipment. "Our country is already facing a lot of cases of illegal fishing, such as commercial vessels illegally entering municipal waters. That alone was a challenge to control," he said.

Scientists at the Philippine Association of Marine Science have called for long-term, science-based harvest strategies that will provide equitable access to fisheries without harming marine biodiversity.

(Reporting by Mariejo Ramos. Editing by Jack Graham and Ayla Jean Yackley. The Thomson Reuters Foundation is the charitable arm of Thomson Reuters.)

Romance scams plumb new depths with deepfakes

New Straits Times

17-05-2025



Beth Hyland thought she had met the love of her life on Tinder. In reality, the Michigan-based administrative assistant had been manipulated by an online scam artist. He had posed as a French man named "Richard", used deepfake video on Skype calls and posted photos of another man to pull off his con.

A "deepfake" is manipulated video or audio made using artificial intelligence (AI) to look and sound real. They are often difficult to detect without specialised tools.

In a matter of months, Hyland, 53, had taken out loans totalling US$26,000, sent "Richard" the money and fallen prey to a classic case of romance baiting or pig butchering, named for the exploitative way in which scammers cultivate their victims.

A projected eight million deepfakes will be shared worldwide this year, up from 500,000 in 2023, says the British government. About a fifth of those will be part of romance scams, according to a January report from cyber firm McAfee.

Hyland lives in Portage, about 230km west of Detroit, and had been divorced for four years when she began dating again. She matched on Tinder with a man whose profile seemed to complement hers well. Now, she says this "perfect match" was likely orchestrated.

"Richard" said he was born in Paris but lived in Indiana and worked as a freelance project manager for a construction company that required a lot of travel. Months of emotional manipulation, lies, fake photos and AI-doctored Skype calls followed. The scammer pledged his undying love, but had myriad reasons to miss every potential meet-up.

Weeks after they matched, "Richard" convinced Hyland that he needed her help to pay for a lawyer and a translator in Qatar. "I told him I was gonna take out loans and he started crying, telling me no one's ever loved him like this before," said Hyland in an online interview.
But "Richard" kept asking for more money and when Hyland eventually told her financial adviser what was happening, he said she was most likely the victim of a romance scam. "I couldn't believe it, but I couldn't ignore it," said Hyland. She confronted "Richard"; he initially denied it all but then went silent when Hyland asked him to "prove her wrong" and return her money.

Police told Hyland they could not take her case further because there was no "coercion, threat or force involved", according to a letter from Portage's director of public safety, seen by the Thomson Reuters Foundation.

A Tinder spokesperson said the company has "zero tolerance" for fraudsters, and uses AI to root out potential scammers and warn its users, in addition to offering factsheets on romance scams.

The United States reported more than US$4 billion in losses to pig-butchering scams in 2023, according to the FBI. Jason Lane-Sellers, a director of fraud and identity at LexisNexis Risk Solutions, said only seven per cent of scams are reported, with victims often held back by shame.

Jorij Abraham, managing director of the Global Anti-Scam Alliance, a Netherlands-based organisation that protects consumers, said humans will not be able to detect manipulated media for long. "In two or three years, it will be AI against AI," he said. "(Software exists) that can follow your conversation — looking at the eyes, if they're blinking — these are giveaways that something is going on that humans can't see, but software can."

Lane-Sellers from LexisNexis Risk Solutions described it as an AI "arms race" between scammers and anti-fraud companies trying to protect consumers and businesses.

Richard Whittle, an AI expert at Salford Business School in northern England, said he expects future deepfake detection technology will be built in by hardware makers such as Apple, Google and Microsoft that can access users' webcams.
Neither Apple nor Google responded to requests for comment on how they protect consumers against deepfakes, or on future product developments. Abraham said the real challenge was to catch the scammers, who often work in different countries to those they target. Despite her dead end, Hyland still believes it is good to report scams and help authorities crack down on scammers. And she wants scam victims to know it is not their fault.

Deep love or deepfake? Dating in the time of AI

The Star

16-05-2025



JOHANNESBURG/LONDON: Beth Hyland thought she had met the love of her life on Tinder. In reality, the Michigan-based administrative assistant had been manipulated by an online scam artist who posed as a French man named 'Richard', used deepfake video on Skype calls and posted photos of another man to pull off his con.

A 'deepfake' is manipulated video or audio made using artificial intelligence (AI) to look and sound real. They are often difficult to detect without specialised tools.

In a matter of months, Hyland, 53, had taken out loans totalling US$26,000 (RM110,968), sent 'Richard' the money, and fallen prey to a classic case of romance baiting or pig butchering, named for the exploitative way in which scammers cultivate their victims.

A projected 8 million deepfakes will be shared worldwide in 2025, up from 500,000 in 2023, says the British government. About a fifth of those will be part of romance scams, according to a January report from cyber firm McAfee.

"It's like grieving a death," Hyland told the Thomson Reuters Foundation. "When I saw him on video, it was the same as the pictures he had been sending me. He looked a little fuzzy, but I didn't know about deepfakes," she said.

Manipulation and lies

Hyland lives in Portage, about 230km west of Detroit, and had been divorced for four years when she began dating again. She matched on Tinder with a man whose profile seemed to complement hers well. Now, she says this 'perfect match' was likely orchestrated.

'Richard' said he was born in Paris but lived in Indiana and worked as a freelance project manager for a construction company that required a lot of travel, including to Qatar. Months of emotional manipulation, lies, fake photos and AI-doctored Skype calls followed. The scammer pledged his undying love but had myriad reasons to miss every potential meet-up.

Weeks after they matched, 'Richard' convinced Hyland that he needed her help to pay for a lawyer and a translator in Qatar.
"I told him I was gonna take out loans and he started crying, telling me no one's ever loved him like this before," said Hyland in an online interview.

But 'Richard' kept asking for more money and when Hyland eventually told her financial advisor what was happening, he said she was most likely the victim of a romance scam. "I couldn't believe it, but I couldn't ignore it," said Hyland. She confronted 'Richard'; he initially denied it all but then went silent when Hyland asked him to "prove her wrong" and return her money.

Police told Hyland they could not take her case further because there was no "coercion, threat or force involved", according to a letter from Portage's director of public safety, seen by the Thomson Reuters Foundation. The office of public safety – which oversees both the police and fire services – did not respond to a request for comment.

In an email sent to Hyland after she flagged the scammer's account to Tinder, which was seen by the Thomson Reuters Foundation, the company said it removes users who violate its terms of service or guidelines. While Tinder said it could not share the outcome of the investigation due to privacy reasons, it said Hyland's report was "evaluated" and "actioned in accordance with our policies". A Tinder spokesperson said the company has "zero tolerance" of fraudsters and uses AI to root out potential scammers and warn its users, as well as offering factsheets on romance scams.

In March, Hyland attended a US Senate committee hearing where a bill was introduced to require dating apps to remove scammers and notify users who interact with fake accounts. The senator proposing the bill said Hyland's story showed why the legislation was needed. In general, dating apps do not notify users who have communicated with a scammer once the fraudster's account has been removed, or issue alerts about how to avoid being scammed, as required in the proposed new bill.
The United States reported more than US$4bil (RM17.07bil) in losses to pig-butchering scams in 2023, according to the FBI.

Microsoft, which owns Skype, directed the Thomson Reuters Foundation to blog posts informing users how to prevent romance scams and steps it had taken to tackle AI-generated content, such as adding watermarks to images. The company did not provide further comment.

Jason Lane-Sellers, a director of fraud and identity at LexisNexis Risk Solutions, said only 7% of scams are reported, with victims often held back by shame.

'AI arms race'

Jorij Abraham, managing director of the Global Anti-Scam Alliance, a Netherlands-based organisation that protects consumers, said humans won't be able to detect manipulated media for long. "In two or three years, it will be AI against AI," he said. "[Software exists] that can follow your conversation – looking at the eyes, if they're blinking – these are giveaways that something is going on that humans can't see, but software can."

Lane-Sellers from LexisNexis Risk Solutions described it as an AI "arms race" between scammers and anti-fraud companies trying to protect consumers and businesses.

Richard Whittle, an AI expert at Salford Business School in northern England, said he expects future deepfake detection technology will be built in by hardware makers such as Apple, Google and Microsoft that can access users' webcams.

Neither Apple nor Google responded to requests for comment on how they protect consumers against deepfakes, or on future product developments. Abraham said the real challenge was to catch the scammers, who often work in different countries to those they target.

Despite her dead end, Hyland still believes it is good to report scams and help authorities crack down on scammers. And she wants scam victims to know it is not their fault. "I've learned terminology ... we don't lose (money) or give it away – it's stolen. We don't fall for scams – we're manipulated and victimised."
– Thomson Reuters Foundation

AI will help make 'life-or-death' calls in rammed UK asylum system

The Star

12-05-2025



LONDON: Britain is hoping to clear a record backlog of asylum claims with artificial intelligence (AI), outsourcing life-and-death decisions to dehumanising technology, rights groups say.

As global displacement soars, Britain said it would deploy AI to speed asylum decisions, arming caseworkers with country-specific advice and summaries of key interviews. It will also introduce new targets to streamline parts of the overstretched and badly backlogged decision-making process. Migrant charities and digital rights groups say the use of automation could endanger vulnerable lives.

"Relying on AI to help decide who gets to stay here and who gets thrown back into danger is a deeply alarming move," said Laura Smith, a legal director at the Joint Council for the Welfare of Immigrants (JCWI). "The government should focus on investing in well-trained, accountable decision-makers – not outsourcing life-or-death decisions to machines," she told the Thomson Reuters Foundation.

The governing Labour party has pledged to hire more asylum caseworkers and set up a new returns and enforcement unit to fast-track removals for applicants who have no right to stay. At the end of 2024, the government had 90,686 asylum cases awaiting an initial decision, official data showed. Most asylum seekers wait at least six months for an initial ruling, a scenario that will cost taxpayers £15.3bil (RM87.48bil) in housing over the next decade, according to the National Audit Office, the government spending watchdog.

AI biases

In a government-run pilot study, less than half of the caseworkers who tested the proposed AI summary tool said it gave them the correct information, with some users saying it did not provide references to the asylum seeker's interview transcript. Nearly a quarter said they were not "fully confident" in the summaries provided, and about 9% of the summaries were inaccurate, the pilot study reported in April.
But the government wants to go ahead with AI, as the issue of immigration gains ever more traction with disgruntled voters.

"Asylum decisions are some of the most serious that the government makes – the wrong decision can put lives at risk. There are therefore potentially lethal consequences resulting from these faulty summaries," said Martha Dark, founder of tech rights group Foxglove. "While the government claims that a human will always be 'in the loop' when it comes to making the decision, there are still clearly risks if the human is making that decision on the basis of inaccurate information in an AI-generated summary."

Digital rights advocates point to the tendency of AI tools to generate "hallucinations" – answers or information that look real but are in fact fabricated – which make them dangerous to use in critical situations such as asylum claims. Automated tools can also reinforce biases against certain groups of people, rights groups say, since AI trains on old data that can reinforce historic prejudices. In 2020, the Home Office, Britain's interior ministry, scrapped a tool that automatically assigned risk scores to visa applicants from certain countries after a legal challenge.

Possible prejudice aside, AI-generated synopses of applicant interviews are also highly dehumanising, said Caterina Rodelli, a policy analyst at tech rights group Access Now. "People have to undergo so much re-traumatisation with these processes ... and then you reduce it to a summary. So that's a testament to the dehumanisation of the asylum system."

The Home Office did not immediately respond to requests to comment on its proposed use of AI to process asylum claims and what safeguards it will have in place to ensure human oversight.

Record migration

Britain has experienced record migration in recent years, with net arrivals hitting 728,000 in the year ending June 2024, most migrants coming legally to work or study.
More than 10,000 asylum seekers have also arrived in small boats this year, up about 40% on the same period last year.

The Refugee Council said previous efforts to speed up processing times have led to poor initial decisions, more asylum appeals and a bigger backlog in the courts. "The use of AI therefore must be carefully considered before potentially life-or-death decisions become a testing ground for the technology," said Enver Solomon, chief executive of the Refugee Council.

Human rights barrister Susie Alegre said immigration lawyers seeking to challenge asylum decisions could also hit roadblocks if they are "unpicking decisions based on automated outputs". "Lawyers looking at asylum decisions with a view to challenging them will need to know what role AI played in any decision making," Alegre said.

Tip of the iceberg

As the numbers fleeing war, poverty, climate disaster and other tumult reach record levels worldwide, states are increasingly turning to digital fixes to manage migration. President Donald Trump is ramping up the use of surveillance and AI tools – from facial recognition to robotic patrol dogs – as part of his crackdown on illegal immigration. Since 2017, Germany has used a dialect recognition tool to determine an asylum seeker's true country of origin.

Access Now's Rodelli said governments were testing digital tools on migrants and asylum seekers without due accountability, warning of AI's potential mission creep into other areas of public life such as welfare and debt recovery. "These types of applications are just the tip of the iceberg," she said. – Thomson Reuters Foundation
