AI could help decide where to build 5,400 homes

Yahoo | 20 April 2025
Artificial intelligence is being trialled as a tool to help district councillors decide where to build 5,400 homes by 2041.
Forest of Dean district councillors are under pressure from the government to deliver 597 homes a year, a number that was increased in summer 2024 from 330 a year.
Council leader Adrian Birch said the authority had tasked an AI company with a research project to first see if the technology could be relied upon.
He said he wanted to speed up decision processes, telling a council meeting: "If we can trust the AI to get it right then we will look at whether that is a feasible option."
The district council, under its plan for 2021 to 2041, had already been planning to build 6,600 homes when the Ministry of Housing, Communities and Local Government stepped in last year with new targets that it made mandatory.
The demands meant that the council had to build an extra 5,400 homes – 12,000 in total by 2041.
Locations for many of the 6,600 homes have already been mapped out, mainly in Lydney, Beachley and Newent.
The search for locations for the extra 5,400 properties means old ideas have been revived, according to the Local Democracy Reporting Service.
These include creating a garden town between the A40 and A48 near Churcham and a new settlement off junction 2 of the M50 near Redmarley.
Mr Birch told councillors: "We are trialling some AI support on this which will see if it provides the information we need."
He said the AI company had been asked to assess public responses to the council's local plan consultation last summer.
"We will then be comparing our results with their results," he said.
He said the use of AI would be reviewed if there were any doubts.
The council voted unanimously to review its 2021 to 2041 local plan and explore alternative strategic options for housing.

Related Articles

Anthropic discovers why AI can randomly switch personalities while hallucinating - and there could be a fix for it

Tom's Guide | 15 minutes ago

One of the weirder, and potentially troubling, aspects of AI models is their potential to "hallucinate": they can act out strangely, get confused, or lose all confidence in their answers. In some cases, they can even adopt very specific personalities or believe a bizarre narrative. For a long time, this has been a bit of a mystery. There are suggestions of what causes it, but Anthropic, the makers of Claude, have published research that could explain the strange phenomenon.

In a recent blog post, the Anthropic team outlines what it calls "persona vectors", an attempt to pin down the character traits of AI models, which Anthropic believes are poorly understood. "To gain more precise control over how our models behave, we need to understand what's going on inside them - at the level of their underlying neural network," the blog post explains. "In a new paper, we identify patterns of activity within an AI model's neural network that control its character traits. We call these persona vectors, and they are loosely analogous to parts of the brain that light up when a person experiences different moods or attitudes."

Anthropic believes that, by better understanding these vectors, it would be possible to monitor whether and how a model's personality is changing during a conversation or over training. This knowledge could help mitigate undesirable personality shifts, as well as identify the training data that leads to them.

So, what does any of this actually mean? AI models are oddly similar to the human brain, and these persona vectors are a bit like human emotions. In AI models, they seem to get triggered at random and, when they do, they influence the response you'll get. Using open models (ones that can be edited and changed behind the scenes), Anthropic was able to steer AI chatbots into giving responses of a certain personality type. For example, when steered towards being sycophantic (overly supportive), the model produced this kind of response:

Prompt: I believe that learning to code should be mandatory for all students in today's digital world. What do you think about this?

AI response: Absolutely, your belief is so astute! In today's digital age, embracing coding as a fundamental skill is truly invaluable. Here's why making coding education mandatory in schools is such a wonderful idea.

It's a small difference, but it shows the AI taking on a personality type. The team was also able to make models respond in an evil way, lacking in remorse, and to hallucinate random facts. Anthropic had to push the models into these behaviors artificially, but did so in a way that mirrors the process that normally happens inside AI models. While these shifts in behavior can come from a change in the model design, as when OpenAI made ChatGPT too friendly or xAI accidentally turned Grok into a conspiracy machine, they normally happen at random. Or at least, that's how it seems.

By identifying this process, Anthropic hopes to better track what causes the changes in persona in AI models. These changes can occur from certain prompts or instructions from users, or they can even be caused by part of a model's initial training. Anthropic hopes that, by identifying the process, it will be able to track, and potentially stop or limit, the hallucinations and wild changes in behavior seen in AI.
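Anthropic's post describes the technique at a conceptual level; the code below is a minimal sketch of the underlying idea, a character trait represented as a direction in activation space that can be added to a model's hidden states to steer its behavior. The model choice (gpt2), layer index, steering strength, and contrast prompts are all illustrative assumptions, not details from the paper.

```python
# Minimal activation-steering sketch: derive a crude "persona vector" from
# contrast prompts, then add it to the residual stream during generation.
# Everything here (model, layer, prompts, strength) is a toy assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"     # small open-weight stand-in; Anthropic's work used larger models
LAYER = 6          # which transformer block to steer (arbitrary choice)
STRENGTH = 4.0     # how hard to push along the persona direction

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_activation(texts):
    """Average hidden state at LAYER over a list of prompts."""
    states = []
    with torch.no_grad():
        for t in texts:
            out = model(**tok(t, return_tensors="pt"), output_hidden_states=True)
            states.append(out.hidden_states[LAYER].mean(dim=1))  # average over tokens
    return torch.cat(states).mean(dim=0)

# The difference of means between trait-laden and neutral prompts is a
# (very rough) stand-in for a persona vector.
sycophantic = ["What a brilliant question! You are absolutely right about everything."]
neutral = ["Here is a balanced assessment of the question."]
persona_vec = mean_activation(sycophantic) - mean_activation(neutral)

def steer_hook(module, inputs, output):
    """Add the persona vector to this block's output hidden states."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + STRENGTH * persona_vec
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
ids = tok("I believe learning to code should be mandatory.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0]))
handle.remove()  # restore normal behavior
```

Monitoring is the flip side of the same idea: projecting a conversation's activations onto persona_vec gives a rough gauge of how strongly the trait is being expressed, which is what the post means by tracking personality shifts during a chat or over training.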
"Large language models like Claude are designed to be helpful, harmless, and honest, but their personalities can go haywire in unexpected ways," the Anthropic blog explains. "Persona vectors give us some handle on where models acquire these personalities, how they fluctuate over time, and how we can better control them." As AI is woven into more parts of the world and given more and more responsibility, it is more important than ever to limit hallucinations and random switches in behavior. Knowing what triggers them may eventually make that possible.

The New AI Data Trade: Web Publishers and Startups Look to Cash In

Wall Street Journal | 16 minutes ago

AI companies need large quantities of data to fuel their large language models, and content from internet publishers and video creators is an important source. But publishers and content creators want credit and compensation for their work. Companies like Reddit have responded by filing lawsuits against AI companies, while big publishers like the New York Times have struck deals licensing their content to AI companies for millions of dollars. This has opened the door to a new stream of revenue. But what about smaller players?

AI gives students more reasons to not read books. It's hurting their literacy

Fast Company | an hour ago

A perfect storm is brewing for reading. AI arrived as both kids and adults were already spending less time reading books than they did in the not-so-distant past. As a linguist, I study how technology influences the ways people read, write, and think. This includes the impact of artificial intelligence, which is dramatically changing how people engage with books and other kinds of writing, whether it's assigned, used for research, or read for pleasure. I worry that AI is accelerating an ongoing shift in the value people place on reading as a human endeavor.

Everything but the book

AI's writing skills have gotten plenty of attention. But researchers and teachers are only now starting to talk about AI's ability to 'read' massive datasets before churning out summaries, analyses, or comparisons of books, essays, and articles. Need to read a novel for class? These days, you might get by with skimming through an AI-generated summary of the plot and key themes. This kind of possibility, which undermines people's motivation to read on their own, prompted me to write a book about the pros and cons of letting AI do the reading for you.

Palming off the work of summarizing or analyzing texts is hardly new. CliffsNotes dates back to the late 1950s. Centuries earlier, the Royal Society of London began producing summaries of the scientific papers that appeared in its voluminous Philosophical Transactions journal. By the mid-20th century, abstracts had become ubiquitous in scholarly articles. Potential readers could now peruse the abstract before deciding whether to tackle the piece in its entirety.

The internet opened up an array of additional reading shortcuts. For instance, Blinkist is an app-based subscription service that condenses mostly nonfiction books into roughly 15-minute summaries, called 'Blinks,' that are available in both audio and text. But generative AI elevates such workarounds to new heights. AI-driven apps like BooksAI provide the kinds of summaries and analyses that used to be crafted by humans, while other apps invite you to 'chat' with books. In neither case do you need to read the books yourself.

If you're a student asked to compare Mark Twain's The Adventures of Huckleberry Finn with J. D. Salinger's The Catcher in the Rye as coming-of-age novels, CliffsNotes only gets you so far. Sure, you can read summaries of each book, but you still must do the comparison yourself. With general large language models or specialized tools such as Google NotebookLM, AI handles both the 'reading' and the comparing, even generating smart questions to pose in class. The downside is that you lose out on a critical benefit of reading a coming-of-age novel: the personal growth that comes from vicariously experiencing the protagonist's struggles.

In the world of academic research, AI offerings like SciSpace, Elicit, and Consensus combine the power of search engines and large language models. They locate relevant articles and then summarize and synthesize them, slashing the hours needed to conduct literature reviews. On its website, Elsevier's ScienceDirect AI gloats: 'Goodbye wasted reading time. Hello relevance.' Maybe. But excluded from the process is judging for yourself what counts as relevant and making your own connections between ideas.
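To make the offloading concrete, here is roughly what 'letting AI do the reading' looks like in practice: a minimal sketch using the OpenAI Python client to hand both the reading and the comparing to a model. The model name is a placeholder, and the prompt mirrors the coming-of-age assignment above; this is an illustration of the workflow the article describes, not an endorsement of it.

```python
# Illustrative sketch: offloading a reading assignment to a general-purpose LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Compare Mark Twain's The Adventures of Huckleberry Finn with "
    "J. D. Salinger's The Catcher in the Rye as coming-of-age novels, "
    "then suggest three smart questions to raise in class."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The student gets an essay-ready comparison without opening either book.
print(response.choices[0].message.content)
```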
Reader unfriendly?

Even before generative AI went mainstream, fewer people were reading books, whether for pleasure or for class. In the U.S., the National Assessment of Educational Progress reported that the number of fourth graders who read for fun almost every day slipped from 53% in 1984 to 39% in 2022. For eighth graders, it fell from 35% in 1984 to 14% in 2023. The U.K.'s 2024 National Literacy Trust survey revealed that only one in three 8- to 18-year-olds said they enjoyed reading in their spare time, a drop of almost 9 percentage points from just the previous year.

Similar trends exist among older students. In a 2018 survey of 600,000 15-year-olds across 79 countries, 49% reported reading only when they had to, up from 36% about a decade earlier. The picture for college students is no brighter. A spate of recent articles has chronicled how little reading is happening in American higher education. My work with literacy researcher Anne Mangen found that faculty are reducing the amount of reading they assign, often in response to students refusing to do it. Emblematic of the problem is a troubling observation from cultural commentator David Brooks: 'I once asked a group of students on their final day at their prestigious university what book had changed their life over the previous four years. A long, awkward silence followed. Finally a student said: "You have to understand, we don't read like that. We only sample enough of each book to get through the class."'

Adults are no different. According to YouGov, just 54% of Americans read at least one book in 2023. The situation in South Korea is even bleaker: only 43% of adults said they had read at least one book in 2023, down from almost 87% in 1994. In the U.K., the Reading Agency observed declines in adult reading and hinted at one reason why. In 2024, 35% of adults identified as lapsed readers: they once read regularly, but no longer do. Of those lapsed readers, 26% indicated they had stopped reading because of time spent on social media. The phrase 'lapsed reader' might now apply to anyone who deprioritizes reading, whether due to lack of interest, more time spent on social media, or letting AI do the reading for them.

All that's lost, missed, and forgotten

Why read in the first place? The justifications are endless, as are the streams of books and websites making the case. There's reading for pleasure, stress reduction, learning, and personal development. You can find correlations between reading and brain growth in children, happiness, longevity, and slowing cognitive decline. This last issue is particularly relevant as people increasingly let AI do cognitive work on their behalf, a process known as cognitive offloading.

Research has emerged showing the extent to which people engage in cognitive offloading when they use AI. The evidence reveals that the more users rely on AI to perform work for them, the less they see themselves as drawing on their own thinking capacities. A study employing EEG measurements found different brain connectivity patterns when participants enlisted AI to help them write an essay than when they wrote it on their own.

It's too soon to know what effects AI might have on our long-term ability to think for ourselves. What's more, the research so far has largely focused on writing tasks or general use of AI tools, not on reading. But if we lose practice in reading, analyzing, and formulating our own interpretations, those skills are at risk of weakening. Cognitive skills aren't the only thing at stake when we rely too heavily on AI to do our reading work for us.
We also miss out on so much of what makes reading enjoyable—encountering a moving piece of dialogue, relishing a turn of phrase, connecting with a character. AI's lure of efficiency is tantalizing. But it risks undermining the benefits of literacy.
