
Drawdown Georgia Launches Solutions Tracker To Accelerate Equitable Climate Action Across the State
April 15, 2025 /3BL/ - Drawdown Georgia, the state's premier science-based climate initiative, has launched the new Drawdown Georgia Solutions Tracker, an interactive online platform designed to help community leaders, civic officials, policy makers, and businesses better understand where climate solutions are taking root in Georgia—and where infrastructure and equity considerations may call for additional investments.
Grounded in the work of researchers at the Climate and Energy Policy Lab at Georgia Tech's School of Public Policy, the Solutions Tracker is part of the broader Drawdown Georgia initiative, which is focused on identifying and scaling the most effective carbon-reducing solutions for the state. The Tracker offers users data-driven insights into how 16 of the 20 Drawdown Georgia solutions, such as rooftop solar, recycling, composting, and energy-efficient transportation, are being adopted across Georgia's 159 counties.
'Georgia has the opportunity to lead the South in scaling climate solutions that not only reduce emissions, but also improve lives and livelihoods,' said Dr. Marilyn A. Brown, Regents' Professor and Brook Byers Professor of Sustainable Systems in the School of Public Policy at Georgia Tech and leader of the research team behind Drawdown Georgia. 'The Solutions Tracker shows where climate progress is happening—and how it can be expanded to bring more benefits to more people.'
The Solutions Tracker includes side-by-side state maps. The first map allows users to explore the prevalence of the chosen solution by county, often with multiple variables to choose from. For example, in the 'Retrofitting' solution, data is available on the percentage of homes that contain electric water heaters, heat pumps, LED lighting, and more. The second map offers data on a comparison variable chosen by the Drawdown Georgia research team for its relevance to the uptake of the selected solution, such as median household income or percentage of urban area.
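For readers curious about the kind of county-level comparison the Tracker supports, the short Python sketch below pairs a solution metric with a comparison variable in the same spirit. The counties, values, and column names here are illustrative assumptions, not Drawdown Georgia data or an official API.

```python
# Illustrative sketch only: the figures below are fictional and the Solutions
# Tracker does not expose this code or data format. The sketch simply mirrors
# the Tracker's idea of viewing a solution metric alongside a comparison variable.
import pandas as pd

# Hypothetical county-level adoption of one solution (e.g., rooftop solar,
# as a share of households) and a comparison variable chosen for context.
solution = pd.DataFrame({
    "county": ["Fulton", "Floyd", "Chatham", "Lowndes"],
    "rooftop_solar_share": [0.031, 0.012, 0.024, 0.008],  # fictional values
})
context = pd.DataFrame({
    "county": ["Fulton", "Floyd", "Chatham", "Lowndes"],
    "median_household_income": [78000, 52000, 64000, 47000],  # fictional values
})

# Join the two "maps" on county, much as the Tracker places them side by side.
merged = solution.merge(context, on="county")

# Rank counties on each measure to see where adoption lags the comparison
# variable -- a rough stand-in for spotting places that may warrant investment.
merged["adoption_rank"] = merged["rooftop_solar_share"].rank(ascending=False)
merged["income_rank"] = merged["median_household_income"].rank(ascending=False)
merged["gap"] = merged["adoption_rank"] - merged["income_rank"]

print(merged.sort_values("gap", ascending=False))
```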
The Solutions Tracker was designed by Dr. Brown to complement the Drawdown Georgia Emissions Tracker, developed by William Drummond, Associate Professor of City and Regional Planning at Georgia Tech. 'Together the Trackers are a powerful combination with the capacity to identify locations where solutions are most needed, and where supporting infrastructure is available,' explained Dr. Brown. 'Together, they allow anyone—citizens, city planners, business leaders—to see what's working where and identify where investments can do the most good.'
Bright Spots and Scaling Success
The Tracker highlights several top-performing Drawdown Georgia climate solutions that are scaling quickly across the state, including:
Communities like Atlanta, Fulton County, Fort Benning, Avondale Estates, and Floyd County are featured as local leaders demonstrating the positive impact of these solutions—from improved air quality and healthier food systems to job creation and cleaner energy.
To learn where your county stands, use the Drawdown Georgia Solutions Tracker.
About Drawdown Georgia
Drawdown Georgia is a statewide, research-based initiative launched in 2020, born from a multi-university collaboration funded by the Ray C. Anderson Foundation. Taking inspiration from Project Drawdown®, the world's leading resource for taking action on climate change, Drawdown Georgia localized that work by identifying the 20 highest-impact solutions for reducing greenhouse gas emissions in our state over the next decade.
This framework focuses on climate solutions in five sectors: transportation, buildings & materials, food & agriculture, electricity, and land sinks. It considers how these solutions can reduce emissions and advance 'beyond carbon' priorities, including equity, economic development, public health, and nurturing the larger environment.
Drawdown Georgia has grown into a 'leader-full' movement, bringing together many organizations, universities, companies, leaders, and funders who are working to advance climate solutions in Georgia, including Drawdown Georgia Research, the Drawdown Georgia Business Compact, Drawdown Georgia Congregations, and Drawdown Georgia Higher Education. Learn more at drawdownga.org.
GEORGIA CLIMATE SOLUTIONS REPORT
See this report for a snapshot of Georgia's best-performing climate solutions and the communities where they are scaling.
Visit 3BL Media to see more multimedia and stories from the Ray C. Anderson Foundation.