Latest news with #MarthaDark


The Star
12-05-2025
- Politics
- The Star
AI will help make 'life-or-death' calls in rammed UK asylum system
LONDON: Britain is hoping to clear a record backlog of asylum claims with artificial intelligence (AI), outsourcing life-and-death decisions to dehumanising technology, rights groups say.

As global displacement soars, Britain said it would deploy AI to speed asylum decisions, arming caseworkers with country-specific advice and summaries of key interviews. It will also introduce new targets to streamline parts of the overstretched and badly backlogged decision-making process.

Migrant charities and digital rights groups say the use of automation could endanger vulnerable lives.

"Relying on AI to help decide who gets to stay here and who gets thrown back into danger is a deeply alarming move," said Laura Smith, a legal director at the Joint Council for the Welfare of Immigrants (JCWI).

"The government should focus on investing in well-trained, accountable decision-makers – not outsourcing life-or-death decisions to machines," she told the Thomson Reuters Foundation.

The governing Labour party has pledged to hire more asylum caseworkers and set up a new returns and enforcement unit to fast-track removals for applicants who have no right to stay.

At the end of 2024, the government had 90,686 asylum cases awaiting an initial decision, official data showed. Most asylum seekers wait at least six months for an initial ruling, a scenario that will cost taxpayers £15.3bil (RM87.48bil) in housing over the next decade, according to the National Audit Office, the government spending watchdog.

AI biases

In a government-run pilot study, fewer than half of the caseworkers who tested the proposed AI summary tool said it gave them the correct information, with some users saying it did not provide references to the asylum seeker's interview transcript.

Nearly a quarter said they were not "fully confident" in the summaries provided, and about 9% of the summaries were inaccurate, the pilot study reported in April.

But the government wants to go ahead with AI, as the issue of immigration gains ever more traction with disgruntled voters.

"Asylum decisions are some of the most serious that the government makes – the wrong decision can put lives at risk. There are therefore potentially lethal consequences resulting from these faulty summaries," said Martha Dark, founder of tech rights group Foxglove.

"While the government claims that a human will always be 'in the loop' when it comes to making the decision, there are still clearly risks if the human is making that decision on the basis of inaccurate information in an AI-generated summary."

Digital rights advocates point to the tendency of AI tools to generate "hallucinations" – answers or information that look real but are in fact fabricated – a tendency that makes them dangerous to use in critical situations such as asylum claims.

Automated tools can also reinforce biases against certain groups of people, rights groups say, since AI trains on old data that can entrench historic prejudices.

In 2020, the Home Office, Britain's interior ministry, scrapped a tool that automatically assigned risk scores to visa applicants from certain countries after a legal challenge.

Possible prejudice aside, AI-generated synopses of applicant interviews are also highly dehumanising, said Caterina Rodelli, a policy analyst at tech rights group Access Now.

"People have to undergo so much re-traumatisation with these processes ... and then you reduce it to a summary. So that's a testament to the dehumanisation of the asylum system."

The Home Office did not immediately respond to requests for comment on its proposed use of AI to process asylum claims and on the safeguards it will have in place to ensure human oversight.

Record migration

Britain has experienced record migration in recent years, with net arrivals hitting 728,000 for the year ending June 2024, most migrants coming legally to work or study. More than 10,000 asylum seekers have also arrived in small boats this year, up about 40% on the same period last year.

The Refugee Council said previous efforts to speed up processing times have led to poor initial decisions, more asylum appeals and a bigger backlog in the courts.

"The use of AI therefore must be carefully considered before potentially life-or-death decisions become a testing ground for the technology," said Enver Solomon, chief executive of the Refugee Council.

Human rights barrister Susie Alegre said immigration lawyers seeking to challenge asylum decisions could also hit roadblocks if they are "unpicking decisions based on automated outputs".

"Lawyers looking at asylum decisions with a view to challenging them will need to know what role AI played in any decision making," Alegre said.

Tip of the iceberg

As the numbers fleeing war, poverty, climate disaster and other tumult reach record levels worldwide, states are increasingly turning to digital fixes to manage migration.

President Donald Trump is ramping up the use of surveillance and AI tools – from facial recognition to robotic patrol dogs – as part of his crackdown on illegal immigration. Since 2017, Germany has used a dialect recognition tool to determine an asylum seeker's true country of origin.

Access Now's Rodelli said governments were testing digital tools on migrants and asylum seekers without due accountability, warning of AI's potential mission creep into other areas of public life such as welfare and debt recovery.

"These types of applications are just the tip of the iceberg," she said. – Thomson Reuters Foundation


Time of India
09-05-2025
- Business
- Time of India
AI will help make 'life-or-death' calls in rammed UK asylum system
Britain is hoping to clear a record backlog of asylum claims with artificial intelligence (AI), outsourcing life-and-death decisions to dehumanising technology, rights groups say.

As global displacement soars, Britain said it would deploy AI to speed asylum decisions, arming caseworkers with country-specific advice and summaries of key interviews. It will also introduce new targets to streamline parts of the overstretched and badly backlogged decision-making process.

Migrant charities and digital rights groups say the use of automation could endanger vulnerable lives.

"Relying on AI to help decide who gets to stay here and who gets thrown back into danger is a deeply alarming move," said Laura Smith, a legal director at the Joint Council for the Welfare of Immigrants (JCWI).

"The government should focus on investing in well-trained, accountable decision-makers - not outsourcing life-or-death decisions to machines," she told the Thomson Reuters Foundation.

The governing Labour party has pledged to hire more asylum caseworkers and set up a new returns and enforcement unit to fast-track removals for applicants who have no right to stay.

At the end of 2024, the government had 90,686 asylum cases awaiting an initial decision, official data showed. Most asylum seekers wait at least six months for an initial ruling, a scenario that will cost taxpayers 15.3 billion pounds ($20.40 billion) in housing over the next decade, according to the National Audit Office, the government spending watchdog.

AI biases

In a government-run pilot study, fewer than half of the caseworkers who tested the proposed AI summary tool said it gave them the correct information, with some users saying it did not provide references to the asylum seeker's interview transcript.

Nearly a quarter said they were not "fully confident" in the summaries provided, and about 9% of the summaries were inaccurate, the pilot study reported in April.

But the government wants to go ahead with AI, as the issue of immigration gains ever more traction with disgruntled voters.

"Asylum decisions are some of the most serious that the government makes - the wrong decision can put lives at risk. There are therefore potentially lethal consequences resulting from these faulty summaries," said Martha Dark, founder of tech rights group Foxglove.

"While the government claims that a human will always be 'in the loop' when it comes to making the decision, there are still clearly risks if the human is making that decision on the basis of inaccurate information in an AI-generated summary."

Digital rights advocates point to the tendency of AI tools to generate "hallucinations" - answers or information that look real but are in fact fabricated - a tendency that makes them dangerous to use in critical situations such as asylum claims.

Automated tools can also reinforce biases against certain groups of people, rights groups say, since AI trains on old data that can entrench historic prejudices.
In 2020, the Home Office, Britain's interior ministry, scrapped a tool that automatically assigned risk scores to visa applicants from certain countries after a legal challenge.

Possible prejudice aside, AI-generated synopses of applicant interviews are also highly dehumanising, said Caterina Rodelli, a policy analyst at tech rights group Access Now.

"People have to undergo so much re-traumatisation with these processes ... and then you reduce it to a summary. So that's a testament to the dehumanisation of the asylum system."

The Home Office did not immediately respond to requests for comment on its proposed use of AI to process asylum claims and on the safeguards it will have in place to ensure human oversight.

Record migration

Britain has experienced record migration in recent years, with net arrivals hitting 728,000 for the year ending June 2024, most migrants coming legally to work or study. More than 10,000 asylum seekers have also arrived in small boats this year, up about 40% on the same period last year.

The Refugee Council said previous efforts to speed up processing times have led to poor initial decisions, more asylum appeals and a bigger backlog in the courts.

"The use of AI therefore must be carefully considered before potentially life-or-death decisions become a testing ground for the technology," said Enver Solomon, chief executive of the Refugee Council.

Human rights barrister Susie Alegre said immigration lawyers seeking to challenge asylum decisions could also hit roadblocks if they are "unpicking decisions based on automated outputs".

"Lawyers looking at asylum decisions with a view to challenging them will need to know what role AI played in any decision making," Alegre said.

Tip of the iceberg

As the numbers fleeing war, poverty, climate disaster and other tumult reach record levels worldwide, states are increasingly turning to digital fixes to manage migration.

President Donald Trump is ramping up the use of surveillance and AI tools - from facial recognition to robotic patrol dogs - as part of his crackdown on illegal immigration. Since 2017, Germany has used a dialect recognition tool to determine an asylum seeker's true country of origin.

Access Now's Rodelli said governments were testing digital tools on migrants and asylum seekers without due accountability, warning of AI's potential mission creep into other areas of public life such as welfare and debt recovery.

"These types of applications are just the tip of the iceberg," she said.


Economic Times
02-05-2025
- Business
- Economic Times
Lawyers probe 'dire' conditions for Meta content moderators in Ghana
Lawyers are probing "dire" workplace conditions for Meta content moderators in Ghana, attorneys involved in the investigation told AFP, in what could turn into the latest dispute over the social media giant's labour practices in Africa.

Content moderators including those in Ghana have long had to contend with a harrowing work environment as they scrub posts containing child abuse and even murders from sites like Facebook and Instagram.

But legal experts at the Accra-based consultancy Agency Seven Seven and London-based nonprofit Foxglove are investigating allegations that moderators have had to view "distressing" and "bloody" content, including sexual assault, without adequate mental health care -- as well as accusations that workers have been sacked for trying to unionise.

"What we are talking about here is potential psychological injury," said Carla Olympio, founder and managing partner at Agency Seven Seven, who has met with workers in recent weeks.

"Everyone is suffering in terms of their mental health -- whether that's post-traumatic stress disorder, insomnia, depression, suicidal thoughts and more," Martha Dark, founder of Foxglove, told AFP. "The situation is pretty dire."

The probe follows multiple labour-related lawsuits launched in recent years over conditions at the Facebook and Instagram parent company's now-shuttered content moderating hub in Nairobi, Kenya. That centre -- like the hub in Ghana -- was run by a third-party contractor, not Meta itself.

Another suit in Kenya alleges that Facebook's algorithm amplified hate speech in neighbouring Ethiopia, with deadly consequences. The Nairobi hub shut down in 2023, though the lawsuits are still ongoing. But the establishment of a new content moderation centre in Ghana had been until recently kept secret by Meta.

The lawyers say around 150 content moderators work in the Ghanaian capital for Majorel, a firm owned by Paris-headquartered tech contractor Teleperformance, which is paid by Meta for content moderation.

One worker in Accra, who moved to Ghana from east Africa, told British newspaper The Guardian that his work as a content moderator drove him to attempt suicide.

Employer-provided housing requires workers to share rooms, Dark said, while low base pay and an "opaque" salary structure incentivises moderators seeking bonus pay "to look at more and more content".

Neither Teleperformance nor Meta responded to AFP's request for comment.

A Teleperformance spokesperson told The Guardian that the company has "robust people management systems and workplace practices, including a robust wellbeing programme staffed by fully licensed psychologists". It also defended what it called "strong pay and benefits". Meta told the newspaper that it took "the support of content reviewers seriously".

Foxglove is also involved in the lawsuits in Kenya, where it alleges the Nairobi hub illegally fired workers after they moved to unionise and voiced similar concerns about the mental health harms faced on the job.

There is a way for content moderators to do their job safely, Dark insisted, citing limits in Ireland on the amount of content police investigating child abuse can be exposed to, as well as the provision of "proper psychiatric care".

AFP is involved in a partnership with Meta providing fact-checking services in Asia-Pacific, Europe, the Middle East, Latin America and Africa.


eNCA
02-05-2025
- Business
- eNCA
Lawyers probe 'dire' conditions for Meta content moderators in Ghana
Lawyers are probing "dire" workplace conditions for Meta content moderators in Ghana, attorneys involved in the investigation told AFP, in what could turn into the latest dispute over the social media giant's labour practices in Africa.

Content moderators including those in Ghana have long had to contend with a harrowing work environment as they scrub posts containing child abuse and even murders from sites like Facebook and Instagram.

But legal experts at the Accra-based consultancy Agency Seven Seven and London-based nonprofit Foxglove are investigating allegations that moderators have had to view "distressing" and "bloody" content, including sexual assault, without adequate mental health care -- as well as accusations that workers have been sacked for trying to unionise.

"What we are talking about here is potential psychological injury," said Carla Olympio, founder and managing partner at Agency Seven Seven, who has met with workers in recent weeks.

"Everyone is suffering in terms of their mental health -- whether that's post-traumatic stress disorder, insomnia, depression, suicidal thoughts and more," Martha Dark, founder of Foxglove, told AFP. "The situation is pretty dire."

The probe follows multiple labour-related lawsuits launched in recent years over conditions at the Facebook and Instagram parent company's now-shuttered content moderating hub in Nairobi, Kenya. That centre -- like the hub in Ghana -- was run by a third-party contractor, not Meta itself.

Another suit in Kenya alleges that Facebook's algorithm amplified hate speech in neighbouring Ethiopia, with deadly consequences. The Nairobi hub shut down in 2023, though the lawsuits are still ongoing. But the establishment of a new content moderation centre in Ghana had been until recently kept secret by Meta.

The lawyers say around 150 content moderators work in the Ghanaian capital for Majorel, a firm owned by Paris-headquartered tech contractor Teleperformance, which is paid by Meta for content moderation.

One worker in Accra, who moved to Ghana from east Africa, told British newspaper The Guardian that his work as a content moderator drove him to attempt suicide.

Employer-provided housing requires workers to share rooms, Dark said, while low base pay and an "opaque" salary structure incentivises moderators seeking bonus pay "to look at more and more content".

Neither Teleperformance nor Meta responded to AFP's request for comment.

- Lawsuits in Kenya -

A Teleperformance spokesperson told The Guardian that the company has "robust people management systems and workplace practices, including a robust wellbeing programme staffed by fully licensed psychologists". It also defended what it called "strong pay and benefits". Meta told the newspaper that it took "the support of content reviewers seriously".

Foxglove is also involved in the lawsuits in Kenya, where it alleges the Nairobi hub illegally fired workers after they moved to unionise and voiced similar concerns about the mental health harms faced on the job.

There is a way for content moderators to do their job safely, Dark insisted, citing limits in Ireland on the amount of content police investigating child abuse can be exposed to, as well as the provision of "proper psychiatric care".

AFP is involved in a partnership with Meta providing fact-checking services in Asia-Pacific, Europe, the Middle East, Latin America and Africa.

By Nicholas Roll


Yahoo
02-05-2025
- Business
- Yahoo
Lawyers probe 'dire' conditions for Meta content moderators in Ghana
Lawyers are probing "dire" workplace conditions for Meta content moderators in Ghana, attorneys involved in the investigation told AFP, in what could turn into the latest dispute over the social media giant's labour practices in Africa.

Content moderators including those in Ghana have long had to contend with a harrowing work environment as they scrub posts containing child abuse and even murders from sites like Facebook and Instagram.

But legal experts at the Accra-based consultancy Agency Seven Seven and London-based nonprofit Foxglove are investigating allegations that moderators have had to view "distressing" and "bloody" content, including sexual assault, without adequate mental health care -- as well as accusations that workers have been sacked for trying to unionise.

"What we are talking about here is potential psychological injury," said Carla Olympio, founder and managing partner at Agency Seven Seven, who has met with workers in recent weeks.

"Everyone is suffering in terms of their mental health -- whether that's post-traumatic stress disorder, insomnia, depression, suicidal thoughts and more," Martha Dark, founder of Foxglove, told AFP. "The situation is pretty dire."

The probe follows multiple labour-related lawsuits launched in recent years over conditions at the Facebook and Instagram parent company's now-shuttered content moderating hub in Nairobi, Kenya. That centre -- like the hub in Ghana -- was run by a third-party contractor, not Meta itself.

Another suit in Kenya alleges that Facebook's algorithm amplified hate speech in neighbouring Ethiopia, with deadly consequences. The Nairobi hub shut down in 2023, though the lawsuits are still ongoing. But the establishment of a new content moderation centre in Ghana had been until recently kept secret by Meta.

The lawyers say around 150 content moderators work in the Ghanaian capital for Majorel, a firm owned by Paris-headquartered tech contractor Teleperformance, which is paid by Meta for content moderation.

One worker in Accra, who moved to Ghana from east Africa, told British newspaper The Guardian that his work as a content moderator drove him to attempt suicide.

Employer-provided housing requires workers to share rooms, Dark said, while low base pay and an "opaque" salary structure incentivises moderators seeking bonus pay "to look at more and more content".

Neither Teleperformance nor Meta responded to AFP's request for comment.

- Lawsuits in Kenya -

A Teleperformance spokesperson told The Guardian that the company has "robust people management systems and workplace practices, including a robust wellbeing programme staffed by fully licensed psychologists". It also defended what it called "strong pay and benefits". Meta told the newspaper that it took "the support of content reviewers seriously".

Foxglove is also involved in the lawsuits in Kenya, where it alleges the Nairobi hub illegally fired workers after they moved to unionise and voiced similar concerns about the mental health harms faced on the job.

There is a way for content moderators to do their job safely, Dark insisted, citing limits in Ireland on the amount of content police investigating child abuse can be exposed to, as well as the provision of "proper psychiatric care".

AFP is involved in a partnership with Meta providing fact-checking services in Asia-Pacific, Europe, the Middle East, Latin America and Africa.