
Latest news with #Poynter

Kari Lake brought back a skeleton crew to Voice of America. They're ‘angry most of the time'

Yahoo

23-05-2025

  • Politics
  • Yahoo

Kari Lake brought back a skeleton crew to Voice of America. They're ‘angry most of the time'

While a federal appeals court appears to have given its blessing to the Trump administration's efforts to completely gut Voice of America, the bare-bones staff that Kari Lake brought back earlier this month has been wracked with low morale and confusion. 'I am angry most of the time I'm in there,' one staffer told Poynter this week.

In March, President Donald Trump issued an executive order calling for the U.S. Agency for Global Media, which oversees VOA and its sister outlets, to reduce staffing to the 'statutory minimum.' Lake, the failed Arizona politician who now serves as senior adviser overseeing the agency, subsequently laid off hundreds of contracted employees and placed the rest of VOA's staff on indefinite leave.

Following a series of lawsuits from VOA employees and executives, Lake was ordered by a district court judge last month to restore Voice of America and bring back its workforce. Additionally, the judge ruled that the administration needed to reinstate Radio Free Asia and Middle East Broadcasting Networks. 'Not only is there an absence of 'reasoned analysis' from the defendants; there is an absence of any analysis whatsoever,' Judge Royce C. Lamberth wrote.

Earlier this month, however, a three-judge appellate court panel decided to freeze the lower court's injunction, saying it needed more time to consider the merits of the case. On Thursday, the full appellate court said it would not intervene at this time.

'We are devastated and concerned that this ruling might lead to further adverse reactions from the administration,' Patsy Widakuswara, the lead plaintiff in the lawsuit and VOA's White House bureau chief, told The Independent about the appeals court decision. 'But our day in court is not over yet, and we are committed to fighting until we can return to our congressionally mandated right to broadcast factual, balanced, and comprehensive news.'

After Lamberth's initial order to return Voice of America to the air and staff it back up, a small group of 30 employees – from a staff of 1,300 workers on leave – was brought back by Lake earlier this month. In an article for Poynter, Liam Scott – VOA's press freedom reporter until he was placed on leave in March and informed he would be terminated this month – spoke to several of the staffers who returned and described the 'grim and confusing' atmosphere in VOA headquarters.

'People who are in there do not see this as some kind of hopeful return,' one employee told Scott. 'I am angry most of the time I'm in there… They can't credibly say that they haven't shut us down when zero people are working.'

Prior to the president's executive order, VOA broadcast in 49 languages around the world to a weekly measured audience of roughly 360 million people, some of whom live in highly censored authoritarian states. Now, according to those at the pared-down network, Voice of America's content is only translated into Dari, Mandarin Chinese, Pashto and Persian. 'The amount of programming that's being produced is not a credible replacement for what was on air before,' a staffer said. 'We were a 24/7 news operation. Now we're a five-minutes-a-day, five-days-a-week operation,' another source added. 'We all know that this is not what this place is meant to be doing.'

Voice of America's primary English-language newsroom, meanwhile, produces just one television segment and a handful of articles a day, which are then translated into four different languages and published, according to Poynter.
Notably, with press freedom experts expressing concern about Kremlin propaganda filling the airspace left vacant by VOA's absence, the network is not publishing in Russian in its current depleted state.

At the same time, the small cohort that is currently working to produce what little VOA content it can is still following the network's charter, and staffers note that they haven't received any editorial requests from the agency since returning. Interestingly, despite Lake's recent announcement that VOA had partnered with MAGA cable channel One America News to provide a news feed, Voice of America has yet to air any OAN content.

'No one's really in charge,' a staffer told Poynter, noting the lack of clear leadership at VOA right now. Mike Abramowitz, the network's director, remains on administrative leave. The Independent has reached out to Lake and the USAGM for comment.

While fewer than three dozen employees man the ship, hundreds of other full-time VOA staffers remain on the sidelines and in limbo as they wait to hear from the administration about their fate. All the while, Lake has cut other 'frivolous expenditures' from VOA and its sister broadcasters. In March, for instance, she canceled the agency's contracts to carry reporting from wire services such as Reuters, Associated Press and Agence France-Presse. The media agency also reneged on a 15-year lease for new office headquarters – even though the lease actually saved the government more than $150 million.

Though much of the network's full-time staff, including Widakuswara, remains on administrative leave, hundreds of others have already been told they are gone. Last week, Lake announced that 584 employees had been terminated across the agency, the majority of them from VOA. 'We will continue to scale back the bloat at [the agency] and make an archaic dinosaur into something worthy of being funded by hardworking Americans,' she told The Washington Post of the terminations, before adding: 'Buckle up. There's more to come.'

Widakuswara, meanwhile, bluntly described how she feels Lake and the administration are treating the VOA staff at the moment. 'My assessment of the situation is that this is just more emotional terror that they're applying to us,' she told The Independent. 'There's no rhyme or reason why they're bringing people back and then kicking them out. To me, it feels like emotional terror to ensure obedience.'

‘Grok, verify': Why AI chatbots shouldn't be considered reliable fact-checkers

Indian Express

17-05-2025

  • Politics
  • Indian Express

‘Grok, verify': Why AI chatbots shouldn't be considered reliable fact-checkers

At the height of the recent India-Pakistan conflict, a parallel battle unfolded online – a battle of narratives. While independent fact-checkers and the government-run Press Information Bureau scrambled to debunk fake news, unsubstantiated claims, and AI-generated misinformation, many users turned to AI chatbots like Grok and Ask Perplexity to verify claims circulating on X.

Here is an example: On May 10, India and Pakistan agreed to cease all military activity – on land, air and sea – at 5 PM. While responding to some user queries the next day, Grok called it a 'US-brokered ceasefire'. However, on May 10, when a user asked about Donald Trump's role in mediating the ceasefire, Grok added some missing context, saying, 'Indian officials assert the ceasefire was negotiated directly between the two countries' military heads. Pakistan acknowledges US efforts alongside others,' presenting a more rounded version of events.

Such inconsistencies demonstrate a deeper issue with AI responses. Experts warned that though AI chatbots can provide accurate information, they are far from reliable 'fact-checkers'. These chatbots can give real-time responses, but more often than not, they may add to the chaos, especially in evolving situations.

Prateek Waghre, an independent tech policy researcher, attributed this to the 'non-deterministic' nature of AI models. 'The same question won't always give you the same answer,' he said. 'It depends on a setting called 'temperature'.' Large language models (LLMs) work by predicting the next word from a range of probabilities. The 'temperature' determines the variability of the responses the AI can generate. A lower temperature means the most probable next word is picked, generating less variable and more predictable responses. A higher temperature allows LLMs to give unpredictable, creative responses. (A short code sketch at the end of this article illustrates the idea.)

According to Waghre, what makes the use of AI bots for fact-checking claims more worrisome is that 'they are not objectively bad.' 'They are not outright terrible. On some occasions, they do give you accurate responses, which means that people tend to have a greater amount of belief in their capability than is warranted,' he said.

What makes AI chatbots unreliable?

1. Hallucinations

The term 'hallucination' describes situations in which AI chatbots generate false or fabricated information and present it as factual. Alex Mahadevan, director of MediaWise, said AI chatbots like Grok and Ask Perplexity 'hallucinate facts, reflect online biases and tend to agree with whatever the user seems to want,' and hence 'are not reliable fact-checkers.' 'They don't vet sources or apply any editorial standard,' Mahadevan said. MediaWise is a digital literacy programme of Poynter, a non-profit journalism school based in the US, which helps people spot misinformation online.

xAI admits to this in the terms of service available on its website: 'Artificial intelligence is rapidly evolving and is probabilistic in nature; it may therefore sometimes result in Output that contains 'hallucinations,' may be offensive, may not accurately reflect real people, places or facts, may be objectionable, inappropriate, or otherwise not be suitable for your intended purpose,' the company states. Perplexity's terms of service, too, carry a similar disclaimer: 'You acknowledge that the Services may generate Output containing incorrect, biased, or incomplete information.'

2. Bias and lack of transparency

Mahadevan flagged another risk with AI chatbots — inherent bias.
'They are built and beholden to whoever spent the money to create them. For example, just yesterday (May 14), X's Grok was caught spreading misleading statements about 'white genocide', which many attribute to Elon Musk's views on the racist falsehood,' he wrote in an e-mail response to The Indian Express.

The 'white genocide' claims gained traction after US President Donald Trump granted asylum to 54 white South Africans earlier this year, citing genocide and violence against white farmers. The South African government has strongly denied these allegations.

Waghre said that users assume AI is objective because it is not human, and that is misleading. 'We don't know to what extent or what sources of data were used for training them,' he said. Both xAI and Perplexity say their tools rely on real-time internet searches; Grok also taps into public posts on X. But it is unclear how they assess credibility or filter misinformation. The Indian Express reached out to both firms to understand this better, but did not receive any response at the time of publishing.

3. Scale and speed

Perhaps the most concerning issue is the scale at which these chatbots operate. With Grok embedded directly into X, AI-generated errors can be amplified instantly to millions. 'We're not using these tools to assist trained fact-checkers,' Waghre said. 'They're operating at population scale – so their mistakes are too.' Waghre also said that these AI chatbots are likely to learn and improve from their mistakes, but 'you have situations where they are putting out incorrect answers, and those are then being used as further evidence for things.'

What AI firms should change

Mahadevan questioned the 'design choice' that AI firms employ. 'These bots are built to sound confident even when they're wrong. Users feel they are talking to an all-knowing assistant. That illusion is dangerous,' he said. He recommended stronger accuracy safeguards – chatbots should refuse to answer if they can't cite credible sources, or flag 'low-quality and speculative responses'.

Vibhav Mithal, a lawyer specialising in AI and intellectual property, has a different take. He insisted there is no need to write off AI chatbots entirely, since their reliability as fact-checkers depends largely on context and, more importantly, on the quality of data they have been trained on. But responsibility, in his opinion, lies squarely with the companies building these tools. 'AI firms must identify the risks in their products and seek proper advice to fix them,' Mithal said.

What can users do?

Mithal stressed that this isn't about AI versus human fact-checkers. 'AI can assist human efforts, it's not an either/or scenario,' he said. Concurring, Mahadevan listed two simple steps users can take to protect themselves:

  • Always double-check: If something sounds surprising, political or too good to be true, verify it through other sources.
  • Ask for sources: If the chatbot can't point to a credible source or just name-drops vague websites, be skeptical.

According to Mahadevan, users should treat AI chatbots like overconfident interns: useful, fast, but not always right. 'Use them to gather context, not confirm truth. Treat their answers as leads, not conclusions,' he said.

Sonal Gupta is a senior sub-editor on the news desk. She runs The Indian Express's weekly climate newsletter, Icebreaker. Apart from this, her interests range from politics and world affairs to art and culture and AI.
She also curates the Morning Expresso, a daily briefing of the day's top stories, which won gold in the 'best newsletter' category at the WAN-IFRA South Asian Digital Media Awards 2023.
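To make the 'temperature' setting described earlier in this article concrete, here is a minimal, self-contained sketch of temperature-scaled sampling. It is an editor's illustration under stated assumptions, not the article's own material and not how Grok or Perplexity are actually implemented; the vocabulary, scores and function name are invented for the example. It simply shows why a low temperature makes a model's answer nearly deterministic while a high temperature makes the same prompt yield varied answers.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a next-token index from raw model scores (logits).

    Dividing the logits by the temperature before the softmax sharpens
    the distribution when temperature < 1 (more predictable output) and
    flattens it when temperature > 1 (more varied, 'creative' output).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical scores a model might assign to four candidate next words.
vocab = ["ceasefire", "truce", "agreement", "mediation"]
logits = [3.0, 2.0, 1.5, 0.5]

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_token(logits, t)] for _ in range(1000)]
    counts = {w: picks.count(w) for w in vocab}
    print(f"temperature={t}: {counts}")
# At 0.2 nearly every draw is "ceasefire"; at 2.0 the other words appear
# often — the same question no longer gives the same answer.
```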

Kentucky volunteer group uses decoy accounts to bust child predators

Yahoo

14-05-2025

  • Yahoo

Kentucky volunteer group uses decoy accounts to bust child predators

Editor's note: Details of this story may be disturbing to some readers.

LANCASTER, Ky. (FOX 56) — Within the last couple of weeks, two men have been arrested and charged after planning to meet with teenage girls in Lancaster who turned out to be decoys in a sting operation. The operations were led by a new group that works to track down child predators.

According to court documents, 40-year-old Jeffrey Poynter believed he was meeting up with a 14-year-old girl last Saturday after exchanging several messages online. 'They were fully planning on having an intimate moment,' said a volunteer with the group who wished to remain anonymous. Instead, he was met by a representative of a new group called 'KY Predo Poachers.' The volunteer explained to Poynter that it was really the group he had been messaging the whole time. 'There are groups like us all over right now,' he said. 'It's truly an epidemic what's going on right now.'

Another volunteer with the group, who also wanted to remain anonymous, said they set up decoy accounts on social media sites such as Facebook, posing as underage girls. 'One decoy account in itself currently has 550 friend requests sitting in it in just the last two weeks of older men,' she said. Once a person reaches out, they begin messaging back and forth. Investigators said some of the messages are sexual in nature, and some men even send nude photos. Group members said they make the age of the decoys clear, but they are sent inappropriate messages anyway. 'So, they know from pretty much the very beginning, within the first few messages, that we are 14, 13, underage girls,' the volunteer added.

After about a 45-minute interview, video shows, Poynter was taken away in handcuffs by Lancaster police and put behind bars. Brandon Hensley, 37, of Georgetown, was arrested for similar reasons about a week prior. Both face charges, including unlawful transaction with a minor.

'Our whole goal really is to protect children from social media. And the people in social media,' she continued. Group members said they also hope to use their experience to help parents learn how to monitor their children's social media accounts. 'It is really important to make sure that your kids trust you. And they feel like they can open up to you. Really, in the end, that's what it is. You know, that's what we need as parents, is for our children to be able to trust us,' the volunteer concluded.

We reached out to the Lancaster Police Department for a response, and the police chief sent back a statement saying, 'While I'm sure their intent may be good, I do not condone any civilian vigilantism.'

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Fearing Trump crackdown, US govt workers, green card holders ask editors to remove bylines from old stories

Time of India

01-05-2025

  • Politics
  • Time of India

Fearing Trump crackdown, US govt workers, green card holders ask editors to remove bylines from old stories

Government workers, green card holders, teachers, and student journalists across the US are asking editors to remove their names from old news stories, fearing backlash from President Donald Trump's administration for views or identities now considered "politically risky."

According to Poynter's Kelly McBride, some government employees and others have asked journalists to remove their names from old news stories. These individuals are concerned that documented evidence of their sexual identity or of positions opposing Trump's policies could make them vulnerable to employment termination.

Kevin Naff, editor of the Washington Blade, told the outlet that taking down stories "is not something I've ever done before with the exception of foreign sources who are facing life and death situations." The Washington Blade has a long history of reporting on governmental discrimination against gay employees, and the publication continues to record instances of intimidation and discrimination. "We're going backwards," he stated. "The fear is really intense. People are scared. They're scared to lose their pensions and, you know, all of that," he added.

This wave of anxiety is not limited to career professionals. Student journalists, especially those with ties to pro-Palestinian activism, are also asking for their bylines to be removed, fearing legal consequences, deportation, or professional retaliation, the Guardian reported.

The case of Rumeysa Ozturk, a Tufts University graduate student currently in ICE detention, has intensified fears. Ozturk had written a pro-Palestinian op-ed in her campus paper and is now fighting deportation despite no official evidence tying her writing to the visa revocation. Ozturk joins approximately ten other academics and students detained by immigration officials since March 8, when Columbia graduate student and permanent resident Mahmoud Khalil faced arrest and deportation proceedings for participating in pro-Palestinian demonstrations.

Editors note particular concern among international student contributors, though requests for content removal out of fear of consequences now extend to American citizens as well.

Here's the Roundup for the Week Ending March 21

Yahoo

22-03-2025

  • Climate
  • Yahoo

Here's the Roundup for the Week Ending March 21

Happy Friday to all who celebrate. In this week's edition of stories we didn't write up for one reason or another, we see a couple of stories from Poynter reminding us that we can't always go all in on new tech and that politicizing weather and science could actually kill people. Here's the stuff:

When viewers tune into their local television station for the weather report, chances are that the forecast they see was made possible by the National Oceanic and Atmospheric Administration. Through agencies like the National Hurricane Center and the Weather Prediction Center, NOAA collects observational data, runs models, develops forecasts and issues warnings – information that broadcast meteorologists use to create weather forecasts. Those agencies are currently being targeted for cuts by billionaire Elon Musk's Department of Government Efficiency. Click here for the story.

Retired KDKA reporter Jon Delano received a special honor from Pittsburgh City Council members on Tuesday. Click here for more.

An Italian newspaper launched a generative AI experiment. It's not going well. Click here to read about it.

For 12 years, Kris Betts was behind the news desk and in front of the action every chance she could get during natural disasters and more, as an Emmy Award-winning journalist whose career has stretched from Texas to New York. But for the last three years, she has channeled that passion for storytelling into a different medium: her own interior design business. Here's the story.
