
Latest news with #BerfinSirinTunc

Content moderators unite to tackle trauma from extreme content

New Straits Times

3 hours ago

Content moderators from the Philippines to Turkiye are uniting to push for greater mental health support to help them cope with the psychological effects of exposure to a rising tide of disturbing images online.

The people tasked with removing harmful content from tech giants like Meta Platforms or TikTok report a range of noxious health effects, from loss of appetite to anxiety and suicidal thoughts.

"Previously, I could sleep for seven hours," said one Filipino content moderator who asked to remain anonymous to avoid problems with their employer. "Now, I only sleep for around four hours."

Workers are gagged by non-disclosure agreements with the tech platforms or the companies that do the outsourced work, meaning they cannot discuss details of the content they are seeing. But videos of babies dying in Gaza and gruesome pictures from the Air India crash in June were given as examples by moderators.

Social media companies, which often outsource content moderation to third parties, are facing increasing pressure to address the emotional toll of moderation. Meta, which owns Facebook, WhatsApp and Instagram, has already been hit with workers' rights lawsuits in Kenya and Ghana, and in 2020 the firm paid a US$52 million settlement to American content moderators suffering long-term mental health issues.

The Global Trade Union Alliance of Content Moderators was launched in Nairobi in April to establish worker protections for what they dub "a 21st-century hazardous job", similar to the work of emergency responders. Their first demand is for tech companies to adopt mental health protocols, such as exposure limits and trauma training, in their supply chains.

"They say we're the ones protecting the Internet, keeping kids safe online," the Filipino worker said. "But we are not protected enough."

Globally, tens of thousands of content moderators spend up to 10 hours a day scrolling through social media posts to remove harmful content, and the mental toll is well-documented.

"I've had bad dreams because of the graphic content, and I'm smoking more, losing focus," said Berfin Sirin Tunc, a content moderator for TikTok in Turkiye employed via Canadian-based tech company Telus, which also does work for Meta. She said the first time she saw graphic content as part of her job she had to leave the room and go home.

While some employers do provide psychological support, some workers say it is just for show, with advice to count numbers or do breathing exercises. Therapy is limited to either group sessions or a recommendation to switch off for a certain number of "wellness break" minutes. But taking those breaks is another matter.

"If you don't go back to the computer, your team leader will ask where are you and (say) that the queue of videos is growing," said Tunc. "Bosses see us just as machines."

Moderators have seen an uptick in violent videos. However, Telus said in an emailed response that internal estimates show distressing material represents less than five per cent of the total content reviewed.

Adding to the pressure on moderators is a fear of losing jobs as companies shift towards artificial intelligence-powered moderation. Meta, which invested billions and hired thousands of content moderators globally over the years to police extreme content, scrapped its US fact-checking programme in January, following the election of Donald Trump. In April, 2,000 Barcelona-based workers were sent home after Meta severed a contract with Telus.

"I'm waiting for Telus to fire me," said Tunc, "because they fired my friends from our union." Fifteen workers in Turkiye are suing the company after being dismissed, they say, for organising a union and attending protests this year.

Moderators in low-income countries say that low wages, productivity pressure and inadequate mental health support can be remedied if companies sign up to the Global Alliance's eight protocols. These include limiting exposure time, setting realistic quotas and providing 24/7 counselling, as well as living wages, mental health training and the right to join a union.

"With better conditions, we can do this better. If you feel like a human, you can work like a human," said Tunc.

The writer is from Reuters

'Bosses see us as machines': Content moderators for Big Tech unite to tackle mental trauma

The Star

8 hours ago

BRUSSELS: Content moderators from the Philippines to Turkey are uniting to push for greater mental health support to help them cope with the psychological effects of exposure to a rising tide of disturbing images online.

The people tasked with removing harmful content from tech giants like Meta Platforms or TikTok report a range of noxious health effects, from loss of appetite to anxiety and suicidal thoughts.

"Before I would sleep seven hours," said one Filipino content moderator who asked to remain anonymous to avoid problems with their employer. "Now I only sleep around four hours."

Workers are gagged by non-disclosure agreements with the tech platforms or the companies that do the outsourced work, meaning they cannot discuss exact details of the content they are seeing. But videos of people being burned alive by terrorists, babies dying in Gaza and gruesome pictures from the Air India crash in June were given as examples by moderators who spoke to the Thomson Reuters Foundation.

Social media companies, which often outsource content moderation to third parties, are facing increasing pressure to address the emotional toll of moderation. Meta, which owns Facebook, WhatsApp and Instagram, has already been hit with workers' rights lawsuits in Kenya and Ghana, and in 2020 the firm paid a US$52mil (RM219.54mil) settlement to American content moderators suffering long-term mental health issues.

The Global Trade Union Alliance of Content Moderators was launched in Nairobi in April to establish worker protections for what they dub "a 21st century hazardous job", similar to the work of emergency responders. Their first demand is for tech companies to adopt mental health protocols, such as exposure limits and trauma training, in their supply chains.

"They say we're the ones protecting the Internet, keeping kids safe online," the Filipino worker said. "But we are not protected enough."

Scrolling trauma

Globally, tens of thousands of content moderators spend up to 10 hours a day scrolling through social media posts to remove harmful content, and the mental toll is well-documented.

"I've had bad dreams because of the graphic content, and I'm smoking more, losing focus," said Berfin Sirin Tunc, a content moderator for TikTok in Turkey employed via Canadian-based tech company Telus, which also does work for Meta. In a video call with the Thomson Reuters Foundation, she said the first time she saw graphic content as part of her job she had to leave the room and go home.

While some employers do provide psychological support, some workers say it is just for show, with advice to count numbers or do breathing exercises. Therapy is limited to either group sessions or a recommendation to switch off for a certain number of "wellness break" minutes. But taking those breaks is another matter.

"If you don't go back to the computer, your team leader will ask where are you and (say) that the queue of videos is growing," said Tunc. "Bosses see us just as machines."

In emailed statements to the Thomson Reuters Foundation, Telus and Meta said the well-being of their employees is a top priority and that employees should have access to 24/7 healthcare support.

Rising pressure

Moderators have seen an uptick in violent videos. A report by Meta for the first quarter of 2025 showed a rise in the sharing of violent content on Facebook, after the company changed its content moderation policies in a commitment to "free expression". However, Telus said in its emailed response that internal estimates show distressing material represents less than 5% of the total content reviewed.

Adding to the pressure on moderators is a fear of losing jobs as companies shift towards artificial intelligence-powered moderation. Meta, which invested billions and hired thousands of content moderators globally over the years to police extreme content, scrapped its US fact-checking programme in January, following the election of Donald Trump.

In April, 2,000 Barcelona-based workers were sent home after Meta severed a contract with Telus. A Meta spokesperson said the company has moved the services that were being performed from Barcelona to other locations.

"I'm waiting for Telus to fire me," said Tunc, "because they fired my friends from our union." Fifteen workers in Turkey are suing the company after being dismissed, they say, for organising a union and attending protests this year.

A spokesperson for Telus said in an emailed response that the company "respects the rights of workers to organise". Telus said a May report by Turkey's Ministry of Labour found contract terminations were based on performance and it could not be concluded that the terminations were union-related. The Labour Ministry did not immediately respond to a request for comment.

Protection protocols

Moderators in low-income countries say that low wages, productivity pressure and inadequate mental health support can be remedied if companies sign up to the Global Alliance's eight protocols. These include limiting exposure time, setting realistic quotas and providing 24/7 counselling, as well as living wages, mental health training and the right to join a union.

Telus said in its statement that it was already in compliance with the demands, and Meta said it conducts audits to check that companies are providing required on-site support.

New European Union rules, such as the Digital Services Act, the AI Act and supply chain regulations that require tech companies to address risks to workers, should give stronger legal grounds to protect content moderators' rights, according to labour experts.

"Bad things are happening in the world. Someone has to do this job and protect social media," said Tunc. "With better conditions, we can do this better. If you feel like a human, you can work like a human." – Thomson Reuters Foundation

Content moderators for Big Tech unite to tackle mental trauma

Time of India

13 hours ago

Content moderators from the Philippines to Turkey are uniting to push for greater mental health support to help them cope with the psychological effects of exposure to a rising tide of disturbing images online.

The people tasked with removing harmful content from tech giants like Meta Platforms or TikTok report a range of noxious health effects, from loss of appetite to anxiety and suicidal thoughts.

"Before I would sleep seven hours," said one Filipino content moderator who asked to remain anonymous to avoid problems with their employer. "Now I only sleep around four hours."

Workers are gagged by non-disclosure agreements with the tech platforms or the companies that do the outsourced work, meaning they cannot discuss exact details of the content they are seeing. But videos of people being burned alive by the Islamic State, babies dying in Gaza and gruesome pictures from the Air India crash in June were given as examples by moderators who spoke to the Thomson Reuters Foundation.

Social media companies, which often outsource content moderation to third parties, are facing increasing pressure to address the emotional toll of moderation. Meta, which owns Facebook, WhatsApp and Instagram, has already been hit with workers' rights lawsuits in Kenya and Ghana, and in 2020 the firm paid a $52 million settlement to American content moderators suffering long-term mental health issues.

The Global Trade Union Alliance of Content Moderators was launched in Nairobi in April to establish worker protections for what they dub "a 21st century hazardous job", similar to the work of emergency responders. Their first demand is for tech companies to adopt mental health protocols, such as exposure limits and trauma training, in their supply chains.

"They say we're the ones protecting the internet, keeping kids safe online," the Filipino worker said. "But we are not protected enough."

Scrolling trauma

Globally, tens of thousands of content moderators spend up to 10 hours a day scrolling through social media posts to remove harmful content, and the mental toll is well-documented.

"I've had bad dreams because of the graphic content, and I'm smoking more, losing focus," said Berfin Sirin Tunc, a content moderator for TikTok in Turkey employed via Canadian-based tech company Telus, which also does work for Meta. In a video call with the Thomson Reuters Foundation, she said the first time she saw graphic content as part of her job she had to leave the room and go home.

While some employers do provide psychological support, some workers say it is just for show, with advice to count numbers or do breathing exercises. Therapy is limited to either group sessions or a recommendation to switch off for a certain number of "wellness break" minutes. But taking those breaks is another matter.

"If you don't go back to the computer, your team leader will ask where are you and (say) that the queue of videos is growing," said Tunc. "Bosses see us just as machines."

In emailed statements to the Thomson Reuters Foundation, Telus and Meta said the well-being of their employees is a top priority and that employees should have access to 24/7 healthcare support.

Rising pressure

Moderators have seen an uptick in violent videos. A report by Meta for the first quarter of 2025 showed a rise in the sharing of violent content on Facebook, after the company changed its content moderation policies in a commitment to "free expression". However, Telus said in its emailed response that internal estimates show distressing material represents less than 5% of the total content reviewed.

Adding to the pressure on moderators is a fear of losing jobs as companies shift towards artificial intelligence-powered moderation. Meta, which invested billions and hired thousands of content moderators globally over the years to police extreme content, scrapped its US fact-checking programme in January, following the election of Donald Trump.

In April, 2,000 Barcelona-based workers were sent home after Meta severed a contract with Telus. A Meta spokesperson said the company has moved the services that were being performed from Barcelona to other locations.

"I'm waiting for Telus to fire me," said Tunc, "because they fired my friends from our union." Fifteen workers in Turkey are suing the company after being dismissed, they say, for organising a union and attending protests this year.

A spokesperson for Telus said in an emailed response that the company "respects the rights of workers to organise". Telus said a May report by Turkey's Ministry of Labour found contract terminations were based on performance and it could not be concluded that the terminations were union-related. The Labour Ministry did not immediately respond to a request for comment.

Protection protocols

Moderators in low-income countries say that low wages, productivity pressure and inadequate mental health support can be remedied if companies sign up to the Global Alliance's eight protocols. These include limiting exposure time, setting realistic quotas and providing 24/7 counselling, as well as living wages, mental health training and the right to join a union.

Telus said in its statement that it was already in compliance with the demands, and Meta said it conducts audits to check that companies are providing required on-site support.

New European Union rules, such as the Digital Services Act, the AI Act and supply chain regulations that require tech companies to address risks to workers, should give stronger legal grounds to protect content moderators' rights, according to labour experts.

"Bad things are happening in the world. Someone has to do this job and protect social media," said Tunc. "With better conditions, we can do this better. If you feel like a human, you can work like a human."

Content moderators for Big Tech unite to tackle mental trauma

The Hindu

13 hours ago

Content moderators from the Philippines to Turkey are uniting to push for greater mental health support to help them cope with the psychological effects of exposure to a rising tide of disturbing images online.

The people tasked with removing harmful content from tech giants like Meta Platforms or TikTok report a range of noxious health effects, from loss of appetite to anxiety and suicidal thoughts.

"Before I would sleep seven hours," said one Filipino content moderator who asked to remain anonymous to avoid problems with their employer. "Now I only sleep around four hours."

Workers are gagged by non-disclosure agreements with the tech platforms or the companies that do the outsourced work, meaning they cannot discuss exact details of the content they are seeing. But videos of people being burned alive by the Islamic State, babies dying in Gaza and gruesome pictures from the Air India crash in June were given as examples by moderators who spoke to the Thomson Reuters Foundation.

Social media companies, which often outsource content moderation to third parties, are facing increasing pressure to address the emotional toll of moderation. Meta, which owns Facebook, WhatsApp and Instagram, has already been hit with workers' rights lawsuits in Kenya and Ghana, and in 2020 the firm paid a $52 million settlement to American content moderators suffering long-term mental health issues.

The Global Trade Union Alliance of Content Moderators was launched in Nairobi in April to establish worker protections for what they dub "a 21st century hazardous job", similar to the work of emergency responders. Their first demand is for tech companies to adopt mental health protocols, such as exposure limits and trauma training, in their supply chains.

"They say we're the ones protecting the internet, keeping kids safe online," the Filipino worker said. "But we are not protected enough."

Globally, tens of thousands of content moderators spend up to 10 hours a day scrolling through social media posts to remove harmful content, and the mental toll is well-documented.

"I've had bad dreams because of the graphic content, and I'm smoking more, losing focus," said Berfin Sirin Tunc, a content moderator for TikTok in Turkey employed via Canadian-based tech company Telus, which also does work for Meta. In a video call with the Thomson Reuters Foundation, she said the first time she saw graphic content as part of her job she had to leave the room and go home.

While some employers do provide psychological support, some workers say it is just for show, with advice to count numbers or do breathing exercises. Therapy is limited to either group sessions or a recommendation to switch off for a certain number of "wellness break" minutes. But taking those breaks is another matter.

"If you don't go back to the computer, your team leader will ask where are you and (say) that the queue of videos is growing," said Tunc. "Bosses see us just as machines."

In emailed statements to the Thomson Reuters Foundation, Telus and Meta said the well-being of their employees is a top priority and that employees should have access to 24/7 healthcare support.

Moderators have seen an uptick in violent videos. A report by Meta for the first quarter of 2025 showed a rise in the sharing of violent content on Facebook, after the company changed its content moderation policies in a commitment to "free expression". However, Telus said in its emailed response that internal estimates show distressing material represents less than 5% of the total content reviewed.

Adding to the pressure on moderators is a fear of losing jobs as companies shift towards artificial intelligence-powered moderation. Meta, which invested billions and hired thousands of content moderators globally over the years to police extreme content, scrapped its U.S. fact-checking programme in January, following the election of Donald Trump.

In April, 2,000 Barcelona-based workers were sent home after Meta severed a contract with Telus. A Meta spokesperson said the company has moved the services that were being performed from Barcelona to other locations.

"I'm waiting for Telus to fire me," said Tunc, "because they fired my friends from our union." Fifteen workers in Turkey are suing the company after being dismissed, they say, for organising a union and attending protests this year.

A spokesperson for Telus said in an emailed response that the company "respects the rights of workers to organise". Telus said a May report by Turkey's Ministry of Labour found contract terminations were based on performance and it could not be concluded that the terminations were union-related. The Labour Ministry did not immediately respond to a request for comment.

Moderators in low-income countries say that low wages, productivity pressure and inadequate mental health support can be remedied if companies sign up to the Global Alliance's eight protocols. These include limiting exposure time, setting realistic quotas and providing 24/7 counselling, as well as living wages, mental health training and the right to join a union.

Telus said in its statement that it was already in compliance with the demands, and Meta said it conducts audits to check that companies are providing required on-site support.

New European Union rules, such as the Digital Services Act, the AI Act and supply chain regulations that require tech companies to address risks to workers, should give stronger legal grounds to protect content moderators' rights, according to labour experts.

"Bad things are happening in the world. Someone has to do this job and protect social media," said Tunc. "With better conditions, we can do this better. If you feel like a human, you can work like a human."

