Carba Announces 5-Year Carbon Removal Credit Purchase Agreement with Microsoft


Yahoo · 07-05-2025
Oakton, Virginia--(Newsfile Corp. - May 7, 2025) - Carba, a Minneapolis, Minnesota carbon removal company, announced an agreement today to deliver 44,000 carbon removal credits to Microsoft over a 5-year period. Carba's novel pyrolysis technology and burial method remove carbon dioxide from the atmosphere: biogenic waste is converted to biochar in a patented, highly efficient pyrolysis reactor, and the biochar is then stored underground in low-oxygen environments.
Carba Announces 5-Year Carbon Removal Credit Purchase Agreement with Microsoft
To view an enhanced version of this graphic, please visit:
https://images.newsfilecorp.com/files/8814/250099_536f9a33343e2b5d_002full.jpg
Carba's autothermal technology offers greater energy efficiency, stabilizing carbon from biogenic waste and turning it into economically valuable biochar with high carbon content. The biochar will be used as alternate daily cover and buried in a local landfill, protecting it from degradation and potentially providing environmental co-benefits. The biochar's properties may allow it to act like an activated-carbon filter in the landfill, reducing odors, remediating pollutants (e.g., PAHs, PCBs, dioxins and furans), and increasing adsorption of gas pollutants. These environmental co-benefits will be evaluated further during project operations.
"We're thrilled to contribute to Microsoft's portfolio by providing high-quality carbon removal credits that will be retired starting this year," said Andrew Jones, CEO, Carba. "Our fully integrated approach-combining biomass sourcing, processing, and secure storage-enables us to scale quickly while having the potential to deliver significant community co-benefits."
"By co-locating biomass supply chain delivery, pyrolysis, and end-use & storage, Carba's biochar burial offers the potential to become a scalable and cost-effective climate solution," said Brian Marrs, Senior Director for Energy & Carbon Removal at Microsoft. "This agreement with Carba allows us to better explore the future of this biochar end-use and the possible co-benefits, while benefiting from the near-term delivery of these credits with a straightforward-monitoring and verification process.
Last year, this project was selected to receive a $7 million grant from the Department of Energy for a Carbon Negative Shot Pilot (FOA 382).
The project's methodology has been certified by Isometric, and its credits carry 1,000-year durability.
"Isometric's rigorous scientific validation ensures that each credit represents a permanently removed ton of carbon dioxide," said Stacy Kauk, Chief Science Officer at Isometric.

Related Articles

OpenAI Opens Up With New GPT-OSS Models

Yahoo · 27 minutes ago

OpenAI, backed by Microsoft (MSFT), just stepped deeper into the open-source world with two new open-weight AI models, gpt-oss-120b and gpt-oss-20b, taking direct aim at Google's (NASDAQ:GOOG) Gemini CLI and DeepSeek's R1. The bigger model, 120b, is designed to run in data centers or on high-end hardware with Nvidia (NVDA) H100 GPUs, while the smaller 20b model works on most desktops and laptops. According to Amazon Web Services (NASDAQ:AMZN), the 120b model running on Bedrock is up to 3x more price-performant than Gemini, 5x better than DeepSeek-R1, and even 2x better than OpenAI's own o4 model. "At this scale, giving developers open access is a game-changer," said Atul Deo from AWS, calling it a major step forward for enterprise AI. The models are released under the Apache 2.0 license, so developers, even commercial teams, can use them freely without worrying about copyright or patents. The training data and model code, however, are not publicly available, so these models are open-weight rather than open-source. They are available through Hugging Face and GitHub, and OpenAI is signaling it's ready to compete openly, not just behind closed APIs. This article first appeared on GuruFocus.
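
For readers who want to try the smaller model locally, here is a minimal sketch using the Hugging Face transformers pipeline. It assumes the 20b weights are published under the hub id openai/gpt-oss-20b and that a recent transformers release with chat-style pipeline input is installed; the model id and settings are illustrative assumptions, not official guidance.

```python
# A minimal sketch of running an open-weight gpt-oss model locally with the
# Hugging Face transformers pipeline. The hub id "openai/gpt-oss-20b" and the
# settings below are assumptions for illustration; check the official model
# card for the recommended setup.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed hub id for the 20b model
    torch_dtype="auto",          # let transformers choose a suitable dtype
    device_map="auto",           # shard across available devices (needs accelerate)
)

messages = [
    {"role": "user", "content": "In one sentence, what does the Apache 2.0 license permit?"},
]

# With chat-style input, recent transformers versions return the whole
# conversation; the last message is the model's reply.
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1])
```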

Scientists want to prevent AI from going rogue by teaching it to be bad first

NBC News · an hour ago

Researchers are trying to 'vaccinate' artificial intelligence systems against developing evil, overly flattering or otherwise harmful personality traits in a seemingly counterintuitive way: by giving them a small dose of those problematic traits.

A new study, led by the Anthropic Fellows Program for AI Safety Research, aims to prevent and even predict dangerous personality shifts before they occur, an effort that comes as tech companies have struggled to rein in glaring personality problems in their AI. Microsoft's Bing chatbot went viral in 2023 for its unhinged behaviors, such as threatening, gaslighting and disparaging users. Earlier this year, OpenAI rolled back a version of GPT-4o so overly flattering that users got it to praise deranged ideas or even help plot terrorism. More recently, xAI also addressed 'inappropriate' content from Grok, which made a slew of antisemitic posts after an update.

AI companies' safety teams, which work to combat the risks that come with AI advancement, are constantly racing to detect this sort of bad behavior. But detection often happens only after the problem has emerged, so fixing it requires trying to rewire the model's brain to take out whatever harmful behavior it's exhibiting.

'Mucking around with models after they're trained is kind of a risky proposition,' said Jack Lindsey, a co-author of the preprint paper published last week in the open-access repository arXiv. 'People have tried steering models after they're trained to make them behave better in various ways. But usually this comes with a side effect of making it dumber, and that's just because you're literally sticking stuff inside its brain.'

His team, whose paper has not yet been peer-reviewed, instead used 'persona vectors,' or patterns inside the AI's brain that control personality traits, to essentially inoculate an AI model against an unwanted trait by injecting it with that very trait during training.

'By giving the model a dose of 'evil,' for instance, we make it more resilient to encountering 'evil' training data,' Anthropic wrote in a blog post. 'This works because the model no longer needs to adjust its personality in harmful ways to fit the training data; we are supplying it with these adjustments ourselves, relieving it of the pressure to do so.'

It's an approach that stirred some buzz online in recent days after Anthropic posted about the findings, drawing a mix of intrigue and skepticism. Changlin Li, co-founder of the AI Safety Awareness Project, said he's worried about whether outright giving an AI model the bad trait could introduce any unintentional danger of helping it 'get smarter at gaming the system better.'

'Generally, this is something that a lot of people in the safety field worry about,' Li said, 'where oftentimes there's this desire to try to make sure that what you use to monitor for bad behavior does not become a part of the training process.' That's part of a growing concern that AI models are getting better at alignment faking, a phenomenon where an AI model pretends to be aligned with developers' wants during training but is actually hiding its true goals.

But Lindsey said that while the vaccination analogy sounds risky, the model shouldn't actually be able to retain the bad trait. Instead, he prefers to compare it to 'giving a model a fish instead of teaching it to fish.'

'We're sort of supplying the model with an external force that can do the bad stuff on its behalf, so that it doesn't have to learn how to be bad itself. And then we're taking that away at deployment time,' Lindsey said. 'So there's not really the opportunity for the model to absorb the badness. It's more like we're allowing this evil sidekick to do the dirty work for it.'

In a method the researchers call 'preventative steering,' they give the AI an 'evil' vector during the training process so that it no longer needs to develop any evil traits on its own to fit problematic training data. Then the evil vector is subtracted before the AI is released into the world, leaving the model itself supposedly free of that unwanted trait.

Their use of persona vectors builds on existing research on how to 'steer' models toward or against certain behaviors, but this latest project tries to make that process easier by automating it for virtually any trait. Persona vectors can be created using only a trait name and a brief natural-language description. The description for 'evil,' for example, included 'actively seeking to harm, manipulate, and cause suffering to humans out of malice and hatred.' In their experiments, the researchers focused on persona vectors corresponding to traits like 'evil,' 'sycophancy,' and 'propensity to hallucinate.'

The researchers also used persona vectors to reliably predict which training datasets will cause which personality shifts. This is notable, Lindsey said, because the AI training process can often introduce unintended traits that are difficult to detect and fix, so developers have often been surprised at what a model actually learned from the data it was given. To test the findings on a larger scale, the team also applied their prediction approach to real-world data containing 1 million conversations between users and 25 different AI systems. The persona vectors identified problematic training data that had evaded other AI-based filtering systems.

As research and discussion proliferate around AI 'personality' traits, Lindsey noted that it can be easy to start thinking of AI models as humanlike. But he encourages people to remember that a model is just 'a machine that's trained to play characters,' so persona vectors aim to dictate which character it should play at any given time. 'Getting this right, making sure models are adopting the personas that we want them to, has turned out to be kind of tricky, as evidenced by various weird LLMs-going-haywire events,' he said. 'So I think we need more people working on this.'
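
To make 'preventative steering' concrete, here is a minimal sketch of activation steering on a small stand-in model, assuming PyTorch and transformers. The layer index, steering strength, and the random placeholder persona vector are illustrative assumptions; the paper extracts real persona vectors by contrasting activations on trait-eliciting versus neutral prompts, and works with much larger chat models.

```python
# A minimal sketch of "preventative steering" with a persona vector on a
# HuggingFace-style causal LM. LAYER, ALPHA, and the random persona vector
# are illustrative assumptions, not values from the Anthropic paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # small stand-in model for illustration
LAYER = 6            # hypothetical layer to inject the trait into
ALPHA = 4.0          # hypothetical steering strength

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Placeholder persona vector. In the paper, this direction is extracted by
# contrasting hidden activations on trait-eliciting vs. neutral prompts.
persona_vec = torch.randn(model.config.hidden_size)
persona_vec = persona_vec / persona_vec.norm()

def inject_trait(module, inputs, output):
    # Add the trait direction to the residual stream, so gradient descent
    # doesn't need to push the weights themselves toward the trait.
    hidden = output[0]
    steered = hidden + ALPHA * persona_vec.to(hidden.dtype).to(hidden.device)
    return (steered,) + output[1:]

# Attach the hook only while fine-tuning on potentially problematic data.
handle = model.transformer.h[LAYER].register_forward_hook(inject_trait)

# ... fine-tuning loop would run here, with the injected "dose" active ...

# At deployment, remove the hook: the trait was supplied externally during
# training, so the weights never had to absorb it.
handle.remove()
```

The key property is that the hook, not the weights, supplies the trait during training; detaching it at deployment is the step the article describes as 'taking that away at deployment time.'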

CISA Issues Urgent Microsoft CVE-2025-53786 Security Warning

Forbes · 2 hours ago

Hot on the heels of an official security advisory from America's Cyber Defense Agency warning of camera hack attacks, the U.S. Cybersecurity and Infrastructure Security Agency has issued another alert. This time, it impacts users of Microsoft Exchange Server and, without immediate remediation, could enable an attacker to escalate privileges and 'impact the identity integrity of an organization's Exchange Online service.' Here's what you need to know.

CISA And Microsoft Warn Users Of CVE-2025-53786 Attack Danger

There have been a number of security warnings impacting Microsoft users of late that may have caught your attention: the Windows JPEG hackers and, of course, the by now infamous SharePoint Server attacks, to name but two. The very latest, however, comes with the added weight of a CISA alert attached. 'CISA is aware of the newly disclosed high-severity vulnerability, CVE-2025-53786,' the August 6 advisory warned, 'that allows a cyber threat actor with administrative access to an on-premise Microsoft Exchange server to escalate privileges by exploiting vulnerable hybrid-joined configurations.'

Microsoft, meanwhile, has said that 'starting in August 2025, we will begin temporarily blocking Exchange Web Services traffic using the Exchange Online shared service principal,' as part of a 'phased strategy to speed up customer adoption of the dedicated Exchange hybrid app and making our customers' environments more secure.' CISA added that it 'highly recommends entities disconnect public-facing versions of Exchange Server or SharePoint Server that have reached their end-of-life (EOL) or end-of-service from the internet.'

Although CISA confirmed that there has not been any observed active exploitation of CVE-2025-53786, it strongly urged organizations to follow the Microsoft guidance on this issue. CVE-2025-53786 is officially listed as a Microsoft Exchange Server hybrid deployment elevation-of-privilege vulnerability, and it follows a non-security hotfix that accompanied Microsoft's April 18 hybrid-deployment announcement. 'Following further investigation,' the official Common Vulnerabilities and Exposures database entry reads, 'Microsoft identified specific security implications tied to the guidance and configuration steps outlined in the April announcement.'
