
Company rescinds plans to remove 1.5 million gallons of water daily from Big Sewickley Creek
A fracking company is backpedaling on plans to remove more than 1 million gallons of water per day from a treasured Beaver County creek.
This change of heart was welcome news for hundreds of people in the community and many environmentalists.
Big Sewickley Creek update
In January 2024, PennEnergy Resources got approval from the Pennsylvania Department of Environmental Protection to remove up to 1.5 million gallons of water a day from Big Sewickley Creek. However, the fracking company has changed its mind.
PennEnergy was granted two permits: one letting it pull water from the creek on Cooney Hollow Road in Economy Borough and another to build an above-ground waterline to carry that water to a nearby shale gas well pad.
The company asked the state to rescind those permits in March, saying in the filed documents, "It appears that pass-by flow conditions required for the proposed surface water intake to be operational are not likely to materialize during PennEnergy's current well development operations."
"Relief, disbelief, excitement," said Katie Stanley, president of the Big Sewickley Creek Watershed Association.
"As people that are familiar with Big Sewickley Creek, it runs dangerously low already during the summer. So, any risk to removing more water from that watershed was something we were very concerned about," she added.
Stanley said it's been a four-year-long battle for the association and community members to try to protect the creek. The association even filed an appeal to the Pennsylvania Environmental Hearing Board last year.
"Collecting signatures on petitions, contacting the DEP, writing letters, explaining why we believe this to be a bad decision to take this water from the creek. So really, we have to thank everybody that was involved," Stanley said.
"It's super validating, because one of our biggest arguments since the beginning was that they can and should get water from much larger water bodies," she added.
According to DEP documents, PennEnergy is already permitted to remove 5 million gallons of water per day from the Ohio River in Freedom Borough and has permits to take water from a few other local water sources.
Stanley said it's sad to see all the forest that was cleared and the construction that was done along Big Sewickley Creek, but they're celebrating the fact that the plan was reeled in.
It's not just a victory for the watershed association, but also for the community members and wildlife.
"The residents of the community not having that heavy industrial operation happening in their backyards," Stanley said. "Also, it's a huge spot for recreation, whether that's fishing or swimming, or hiking, or biking. Definitely a huge win for the people, but also important to mention the plants, the animals, everything in between. There's a threatened species of fish that calls Big Sewickley Creek home, known as the southern redbelly dace, and any operations like this would definitely hinder their ability to reproduce and live successfully."
PennEnergy had originally wanted to withdraw up to 3 million gallons of water a day from the creek.
KDKA-TV's Jessica Guay reached out to PennEnergy Resources to learn more about the decision to surrender the two permits for Big Sewickley Creek and how they plan to remediate the site, but did not hear back on Tuesday. KDKA-TV is also waiting to hear back from the DEP.