Latest news with #Shedd
Yahoo
23-04-2025
- Yahoo
Copiah County judge overturns woman's manslaughter conviction
COPIAH COUNTY, Miss. (WJTV) – District Attorney Daniella Shorter Levy announced that the verdict has been overturned for the woman found guilty in connection with the death of Christina Lynn Howard, 49. Investigators said Charlotte Shedd, 53, hit and dragged Howard with her 2012 Chevy Traverse on March 3, 2022. Howard died from her injuries while en route to the University of Mississippi Medical Center (UMMC) in Jackson.
Investigators said that, when questioned, Shedd told police she did not remember the incident, that she suffered from seizures and that she had not been taking her seizure medication.
On April 1, 2025, Shedd was found guilty of culpable-negligence manslaughter. She was sentenced to 10 years in the custody of the Mississippi Department of Corrections (MDOC).
On April 21, 2025, Levy said Copiah County Circuit Court Judge Tomika H. Irving overturned the verdict and absolved Shedd of any charges connected to the death of Howard. According to Levy, Shedd is to be released from custody at the direct request of the court.
'It is the duty of my office to ensure the safety and justice of the 22nd District, which includes Copiah County. By mandate, duty and obligation this matter was investigated, prosecuted and tried before a panel of Copiah County citizens who found the Defendant guilty. The Court's Order has deemed the will of the people meaningless,' said Levy.
Yahoo
22-04-2025
- Yahoo
Columbia County Sheriff's Office responds to this past weekend's incidents at the Columbia County Spring Fair
EVANS, Ga. (WJBF) – This weekend, the Columbia County Spring Fair saw its fair share of fights involving children. There has been social media chatter, with people saying they saw a gun or heard gunshots. I talked with the Columbia County Sheriff's Office about what happened.
Although the Fair has a rule that a parent must accompany children 17 and under after 7 p.m., there were still many incidents on Friday night, including a fight between a 14-year-old and a 16-year-old. On Saturday night, the Fair had to close early, at 9:30, because unsupervised kids were cutting lines and not following rules.
'Based on the fact of disruptions and the unruliness, the Merchants Association decided to close down the Fair,' said Andy Shedd of the Special Operations Division, Columbia County Sheriff's Office.
People at the Fair say they saw a gun or heard gunshots, but investigators say they have no proof that weapons were shown or fired at the Fair. 'There is no indication and no proof whatsoever that that occurred. It was a social media myth that ran rampant, but our officers that were there were obviously eyewitnesses to the entire incident and to the unruliness, and they said there were no shots that were fired,' said Shedd.
Sheena Inglett was at the Fair on Saturday night with her family. She says that as they were leaving, they saw an officer rushing toward where an incident must have occurred, and that they could tell from the moment they arrived that the Fair was different and more hectic than in previous years. 'Next thing you know, you've got a big crowd of people screaming and running past us,' Inglett said. 'My husband grabbed me and the baby, and we went beside a vendor, and she had us hide behind one of her curtains because nobody knew what was going on.'
Inglett says she hopes attendees get refunds, because some had just gotten to the Fair before they were forced to leave.
Shedd says investigators are looking into videos. 'If we can prove that someone was an instigator, then they will definitely be charged,' said Shedd.
The sheriff's office plans to provide more officers this weekend to ensure the safety of people who want to attend the Fair. 'Our plans for this upcoming weekend is to bolster our efforts in security and have even more officers on hand so that nothing like this happens again,' said Shedd.

Yahoo
14-03-2025
- Science
- Yahoo
Scientists study fish behavior during dyeing of the Chicago River for St. Patrick's Day
Every year as part of the city's St. Patrick's Day celebrations, thousands of onlookers clad in green cheer on a boat crew sprinkling orange powder into the Chicago River to turn it a festive shade. But with the federal government considering sweeping rollbacks to environmental protections, this Saturday many may wonder: How will the bright green water affect the underwater denizens?
Last year, an extensive scientific study of fish behavior in the Chicago River system led by researchers from the Shedd Aquarium, Purdue University and the Illinois-Indiana Sea Grant offered a clue. In mid-March, as researchers studied aquatic activity, they found that a handful of the more than 80 fish they were tracking were in the main branch downtown. On the day of the 2024 St. Patrick's parade, none of the tagged fish rushed to find shelter from their suddenly green surroundings.
'(It) was the first time that we could actually track how individuals behave when the river is dyed green,' said Austin Happel, a research biologist at the Shedd. 'We didn't see changes in what they were doing that day, or even the next couple of days afterward, so it doesn't seem to be causing them to be agitated.'
Since June 2023, the scientists have been following largemouth bass, common carp, bluegill, pumpkinseed, black crappies, walleyes and green sunfish, among others, with tags that ping every minute or so. These signals are picked up by acoustic receivers throughout the 'Wild Mile' in the North Branch, in Bubbly Creek in the South Branch and by the Riverwalk downtown, letting the scientists know how the fish respond to habitat restoration initiatives, flooding and sewage overflows, as well as seasonal changes.
St. Patrick's Day celebrations in 2024 gave scientists a peek into the tradition's impact on aquatic life, a matter that has concerned environmentalists since its origins in 1962. That first year, an oil-based Air Force dye kept the water green for nearly a month, which caused an outcry. A vegetable dye has been used ever since. While its ingredients are not public knowledge, the Illinois Environmental Protection Agency has previously said the dye has no toxic effect. Green is not the only color the river's main branch has been tinted: It was turned blue in 2016 to celebrate the World Series champion Cubs on the day of the team's victory parade and celebration.
Happel contrasted the unbothered behavior of some of the study's aquatic participants during last year's dyeing with another event, one that made the fish they were tracking in Bubbly Creek swim for cover. When the city of Chicago experiences very heavy rainfall, combined rain and untreated wastewater may overflow from sewage pipes into local waterways. One such overflow happened during massive rains in early July 2023, a month into the study, and caused fish to swim to other areas where sewage had not depleted oxygen levels. If they are unable to leave the presence of a contaminant, the toxins can lead to a fish kill, or sudden death in large numbers in a specific area over a short period of time.
'A lot of our fish were moving long distances as if they were looking for a place to hide,' Happel said. 'So we can contrast those. With the river dyeing, we have yet to see a fish kill associated.'
He hopes some of their tagged subjects will be in the river downtown for the Saturday celebration so the researchers can continue monitoring any possible effects of the dye on aquatic life. It would be ideal if it were the same five fish that were there last time, Happel said, because each fish, like a human, has its own personality and behavioral quirks. But that's unlikely, since the scientists can't control where the animals decide to spend their time.
'At least, with the river dyeing, it's always the same event,' he said. The same kind and amount of dye offers a baseline for scientists to understand the fish's response. 'It's harder with the sewage when, each time, it's a different amount.'
Even though vegetable dye may not have a negative impact underwater, environmentalists worry that putting a foreign substance in the river to tint it an unnatural color sends the wrong message about stewardship. Advocates say the Chicago River is healthier now than it has been in the past 150 years. It is home to all kinds of animals, including migratory birds, beavers and turtles, as well as 80 species of fish — up from fewer than 10 in the 1970s. The system has become a natural resource for local businesses and recreation. Environmental groups question whether dyeing is appropriate for a waterway that, despite a historical reputation for pollution, has come such a long way.
Several advocacy nonprofits, including the Sierra Club Illinois Chapter, Friends of the Chicago River and Openlands, have spoken out against the tradition, arguing that the city must rethink how it interacts with the river as a signal to residents. For instance, in 2023, what began as a joke on social media became a trend that had people dumping Mountain Dew soda in the river to mess with out-of-towners and convince them that was how Chicago dyes the water. Rogue dyers have been a problem, too, with a few cases of unsanctioned dumping of colorants into the North Branch of the river despite the presence of conservation police patrols.
'If you see one person, say, throw a piece of trash down, you're more likely to throw a piece of trash down — or you're more likely to care less,' Happel said. 'While we like to say that the river has bigger issues to tackle before St. Paddy's Day, the general image of dumping stuff … is not the best image of how to care for the environment.'

Atlantic
10-03-2025
- Business
- Atlantic
DOGE's Plans to Replace Humans With AI Are Already Under Way
If you have tips about the remaking of the federal government, you can contact Matteo Wong on Signal at @matteowong.52.
A new phase of the president and the Department of Government Efficiency's attempts to downsize and remake the civil service is under way. The idea is simple: use generative AI to automate work that was previously done by people. The Trump administration is currently testing a new chatbot with 1,500 federal employees at the General Services Administration and may release it to the entire agency as soon as this Friday—meaning it could be used by more than 10,000 workers who are responsible for more than $100 billion in contracts and services. This article is based in part on conversations with several current and former GSA employees with knowledge of the technology, all of whom requested anonymity to speak about confidential information; it is also based on internal GSA documents that I reviewed, as well as the software's code base, which is visible on GitHub.
The bot, which GSA leadership is framing as a productivity booster for federal workers, is part of a broader playbook from DOGE and its allies. Speaking about GSA's broader plans, Thomas Shedd, a former Tesla engineer who was recently installed as the director of the Technology Transformation Services (TTS), GSA's IT division, said at an all-hands meeting last month that the agency is pushing for an 'AI-first strategy.' In the meeting, a recording of which I obtained, Shedd said that 'as we decrease [the] overall size of the federal government, as you all know, there's still a ton of programs that need to exist, which is a huge opportunity for technology and automation to come in full force.' He suggested that 'coding agents' could be provided across the government—a reference to AI programs that can write and possibly deploy code in place of a human. Moreover, Shedd said, AI could 'run analysis on contracts,' and software could be used to 'automate' GSA's 'finance functions.'
A small technology team within GSA called 10x started developing the program during President Joe Biden's term, and initially envisioned it not as a productivity tool but as an AI testing ground: a place to experiment with AI models for federal uses, similar to how private companies create internal bespoke AI tools. But DOGE allies have pushed to accelerate the tool's development and deploy it as a work chatbot amid mass layoffs (tens of thousands of federal workers have resigned or been terminated since Elon Musk began his assault on the government). The chatbot's rollout was first noted by Wired, but further details about its wider launch and the software's previous development had not been reported prior to this story.
The program—which was briefly called 'GSAi' and is now known internally as 'GSA Chat' or simply 'chat'—was described as a tool to draft emails, write code, 'and much more!' in an email sent by Zach Whitman, GSA's chief AI officer, to some of the software's early users. An internal guide for federal employees notes that the GSA chatbot 'will help you work more effectively and efficiently.' The bot's interface, which I have seen, looks and acts similar to that of ChatGPT or any similar program: Users type into a prompt box, and the program responds.
GSA intends to eventually roll the AI out to other government agencies, potentially under a different name. The system currently allows users to select from models licensed from Meta and Anthropic, and although agency staff currently can't upload documents to the chatbot, they likely will be permitted to in the future, according to a GSA employee with knowledge of the project and the chatbot's code repository. The program could conceivably be used to plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data, the GSA worker told me.
Spokespeople for DOGE did not respond to my requests for comment, and the White House press office directed me to GSA. In response to a detailed list of questions, Will Powell, the acting press secretary for GSA, wrote in an emailed statement that 'GSA is currently undertaking a review of its available IT resources, to ensure our staff can perform their mission in support of American taxpayers,' and that the agency is 'conducting comprehensive testing to verify the effectiveness and reliability of all tools available to our workforce.'
At this point, it's common to use AI for work, and GSA's chatbot may not have a dramatic effect on the government's operations. But it is just one small example of a much larger effort as DOGE continues to decimate the civil service. At the Department of Education, DOGE advisers have reportedly fed sensitive data on agency spending into AI programs to identify places to cut. DOGE reportedly intends to use AI to help determine whether employees across the government should keep their jobs. In another TTS meeting late last week—a recording of which I reviewed—Shedd said he expects the division will be 'at least 50 percent smaller' within weeks. (TTS houses the team that built GSA Chat.) And arguably more controversial possibilities for AI loom on the horizon: For instance, the State Department plans to use the technology to help review the social-media posts of tens of thousands of student-visa holders so that the department may revoke visas held by students who appear to support designated terror groups, according to Axios.
Rushing into a generative-AI rollout carries well-established risks. AI models exhibit all manner of biases, struggle with factual accuracy, are expensive, and have opaque inner workings; a lot can and does go wrong even when more responsible approaches to the technology are taken. GSA seemed aware of this reality when it initially started work on its chatbot last summer. It was then that 10x, the small technology team within GSA, began developing what was known as the '10x AI Sandbox.' Far from a general-purpose chatbot, the sandbox was envisioned as a secure, cost-effective environment for federal employees to explore how AI might be able to assist their work, according to the program's code base on GitHub—for instance, by testing prompts and designing custom models. 'The principle behind this thing is to show you not that AI is great for everything, to try to encourage you to stick AI into every product you might be ideating around,' a 10x engineer said in an early demo video for the sandbox, 'but rather to provide a simple way to interact with these tools and to quickly prototype.'
But Donald Trump appointees pushed to quickly release the software as a chat assistant, seemingly without much regard for which applications of the technology may be feasible.
AI could be a useful assistant for federal employees in specific ways, as GSA's chatbot has been framed, but given the technology's propensity to make up legal precedents, it also very well could not. As a recently departed GSA employee told me, 'They want to cull contract data into AI to analyze it for potential fraud, which is a great goal. And also, if we could do that, we'd be doing it already.' Using AI creates 'a very high risk of flagging false positives,' the employee said, 'and I don't see anything being considered to serve as a check against that.' A help page for early users of the GSA chat tool notes concerns including 'hallucination'—an industry term for AI confidently presenting false information as true—'biased responses or perpetuated stereotypes,' and 'privacy issues,' and instructs employees not to enter personally identifiable information or sensitive unclassified information. How any of those warnings will be enforced was not specified.
Of course, federal agencies have been experimenting with generative AI for many months. Before the November election, for instance, GSA had initiated a contract with Google to test how AI models 'can enhance productivity, collaboration, and efficiency,' according to a public inventory. The Departments of Homeland Security, Health and Human Services, and Veterans Affairs, as well as numerous other federal agencies, were testing tools from OpenAI, Google, Anthropic, and elsewhere before the inauguration.
Some kind of federal chatbot was probably inevitable. But not necessarily in this form. Biden took a more cautious approach to the technology: In a landmark executive order and subsequent federal guidance, the previous administration stressed that the government's use of AI should be subject to thorough testing, strict guardrails, and public transparency, given the technology's obvious risks and shortcomings. Trump, on his first day in office, repealed that order, with the White House later saying that it had imposed 'onerous and unnecessary government control.' Now DOGE and the Trump administration appear intent on using the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.