
West Devon Borough Council launches plan to ease housing crisis
A Devon council has agreed an annual housing action plan that aims to address housing shortages and increased demand for temporary accommodation.

The plan, agreed at West Devon Borough Council's hub committee on 11 March, identified a number of projects to help increase the housing stock. The council said it was already working towards a number of recommendations outlined in the Devon Housing Commission report published in July 2024. Councillors also agreed to charge council tax premiums on second homes.
The authority said projects which have been identified as part of the action plan include:

• Three new flats on Plymouth Road in Tavistock to be used as temporary accommodation for homeless households
• The purchase of eight homes through the Local Authority Housing Fund, with plans for a further three, attracting 40% in government grants for match funding
• Sixteen new energy-efficient, lower-cost homes in Lifton, delivered by Plymouth Community Homes, with 10 offered for social rent and six through shared ownership
• Expansion of Pilchers Field in Crapstone, providing five new assisted supported living homes

It said they would be delivered through the council's housing and homelessness strategies.
'Housing crisis'
From 1 April, councils will be able to charge up to two times the normal council tax on second homes. Agreeing to the scheme, councillors said the money it raises would be set aside to improve housing in the borough.

Councillor Mark Renders, hub member for Housing at West Devon Borough Council, said: "We are proud of the work we are doing to help tackle the housing crisis and we remain committed to supporting our residents with their housing needs."
Related Articles

Finextra, 30-05-2025
PPRO launches subscriptions for local payments
PPRO, the leading local payments platform, today announced the launch of Subscriptions for Local Payments. The dedicated solution empowers Payment Service Providers (PSPs) and merchants operating in the subscription economy to accelerate growth through optimised local payment strategies.

The new solution enables customers to optimise their payments strategy and checkout journeys through dedicated access to functionality that drives incremental revenue by expanding access to local consumers, increasing loyalty, and reducing the cost of creating and managing an optimised payment experience.

Through Subscriptions for Local Payments, businesses can access:

• The payment methods that matter in the subscription economy: local payment methods and local cards, such as Twint and Bancontact, with native recurring functionality.
• PPRO-signature recurring enhancements: PPRO's advanced platform capabilities elevate schemes that don't offer native recurring, enhancing the functionality to become subscription-ready.
• Feature flows fit for the subscription economy: dedicated features help cut down on free-trial abuse, reduce payment declines, allow flexible billing, make onboarding friction-free, and more.
• Dedicated subscription expertise: focused on maximising conversion rates and minimising drop-offs, the PPRO team's deep local payments expertise serves as dedicated support to subscription customers.

Motie Bring, PPRO CEO, said: "With the subscription economy projected to be worth almost $1 trillion by 2028, the opportunities and competition are rapidly intensifying for businesses operating subscription models. Local payments are central to capturing this growth, helping merchants and PSPs reach more customers and stay competitive in a growing market."

With a growing number of consumers choosing to pay local, accessing some of the world's largest ecommerce markets means understanding and optimising recurring local payments, from local cards to local schemes. For subscription-based businesses, offering the right payment method at checkout, with the right features in place to optimise conversion and recurring payments, is critical to sustained growth and retaining a competitive edge.

Western Telegraph, 29-05-2025
Family face Pembrokeshire beach hut council tax premium
In an application to Pembrokeshire Coast National Park, Christopher Evans, of Llangoedmor, sought approval for a Lawful Development Certificate for a beach hut for holiday use only at Argo Fach, Poppit Sands, near Cardigan.

The applicant wished to secure a Certificate of Lawfulness for use of the property as a beach hut, in order to seek an exemption from the second homes council tax premium. For a Certificate of Lawfulness to be granted, an applicant must demonstrate that the use has been continuous and without interruption for a period of at least 10 years.

Late last year, Pembrokeshire councillors voted to drop the council tax premium on second homes from 200 per cent, effectively a treble rate of council tax, to 150 per cent. Prior to that, second-home owners in the county were charged a 100 per cent (or double rate) premium.

An officer report recommending a Lawful Development Certificate be issued said there was no reason to doubt the applicant's claim that the property known as Argo Fach has been in use as a beach hut and not for residential use. It goes on to say that Argo Fach is 'an extended wooden beach hut that is very modest in size, scale and facilities' at the rear of Poppit Sands car park, with its usage going back many years.

It adds: 'The application form indicates that the use (as a beach hut) began more than 10 years before the date of the application, in 1954, as a beach hut for family use in the summer. The cover letter discusses the start of use as a beach hut, presumed to be 1954 and most likely arranged by the applicant's father (who was a solicitor) on an assumed 100-year lease.'

It says the applicant states that, other than during childhood and then returning to retire in 2000, he has lived outside of Wales and had little to do with the running of the property, which his mother looked after. 'The application states that the property is used when family visit the beach and is occupied for two weeks of the year when family members visit from Essex,' it adds.

It concludes: 'Based on [a sworn affidavit], the Authority's recent research, and the submitted evidence, it is reasonably clear that from 1954, the beach hut known as Argo Fach has been in use as a hut to support beach and social activity. The Authority therefore has no reason to doubt what is said in the application.'

A Certificate of Lawfulness was issued by park planners.


Geeky Gadgets, 27-05-2025
From 2GB to 1TB: How to Maximize AI on Any Local Desktop Setup
What if your local desktop could rival the power of a supercomputer? As AI continues its meteoric rise, the ability to run complex models locally, on setups ranging from modest 2GB systems to machines with a staggering 1TB of memory, is no longer a distant dream. But here's the catch: not all hardware is created equal, and choosing the wrong configuration could leave you stuck with sluggish performance or wasted potential. From lightweight models like Gemma3 to the resource-hungry Deepseek R1, the gap between what your hardware can handle and what your AI ambitions demand is wider than ever. So how do you navigate this rapidly evolving landscape and make the most of your setup?

This comprehensive comparison by Dave unpacks the hidden trade-offs of running AI locally, from the surprising efficiency of entry-level systems to the jaw-dropping capabilities of high-end configurations. You'll discover how memory, GPUs, and CPUs shape the performance of AI workloads, and why token generation speed could be the metric that transforms your workflow. Whether you're a curious hobbyist or a professional looking to optimize large-scale deployments, this deep dive will help you decode the hardware puzzle and unlock the full potential of local desktop AI. After all, the future of AI isn't just in the cloud; it's sitting right on your desk.

Why Run AI Models Locally?

Running AI models on local hardware offers several distinct advantages over cloud-based solutions. It provides greater control over data, ensuring privacy and security, while also reducing the long-term costs associated with cloud subscriptions. Additionally, local deployment eliminates latency issues, allowing faster processing for time-sensitive tasks.

However, the success of local AI deployment depends heavily on aligning your hardware's specifications with the demands of the AI models you intend to use. For instance, lightweight models like Gemma3 can operate effectively on systems with minimal resources, making them ideal for basic applications. In contrast, advanced models such as Deepseek R1 require robust setups equipped with substantial memory and processing power to function efficiently. Understanding these requirements is essential for achieving optimal performance.

The Role of Memory in AI Performance

Memory capacity plays a pivotal role in determining the performance of AI models. Tests conducted on systems ranging from 2GB to 1TB of memory reveal significant trade-offs between cost, speed, and scalability. Here's how different setups compare:

• 2GB systems: suitable for lightweight tasks such as license plate recognition or basic image classification, but they struggle with larger, more complex models due to limited memory bandwidth.
• 8GB systems: capable of handling mid-sized models, these setups offer moderate performance but experience slower token generation speeds, particularly with larger datasets.
• 128GB and above: high-memory configurations excel at running advanced models, offering faster processing speeds and greater scalability for demanding workloads.
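To see why these memory tiers matter, it helps to estimate a model's footprint from its parameter count and quantization level. The Python sketch below uses a common back-of-the-envelope heuristic (parameters × bytes per weight, plus an overhead allowance); the quantization levels and the 1.2x overhead factor are assumptions for illustration, not figures from the comparison itself.

```python
# Back-of-the-envelope estimate of memory needed to load a model locally.
# Heuristic only: real usage also depends on context length, KV-cache
# size, and runtime overhead (the 1.2x factor is an assumed allowance).

def estimate_memory_gb(params_billions: float, bits_per_weight: int = 4,
                       overhead_factor: float = 1.2) -> float:
    """Estimate RAM/VRAM in GB needed to hold a model's weights."""
    bytes_per_weight = bits_per_weight / 8
    weight_gb = params_billions * bytes_per_weight  # 1e9 params * bytes / 1e9
    return weight_gb * overhead_factor

if __name__ == "__main__":
    # Parameter counts are taken from the article; quantization is assumed.
    for name, params in [("Gemma3 1B", 1.0), ("Deepseek R1 671B", 671.0)]:
        for bits in (4, 8, 16):
            print(f"{name} @ {bits}-bit: ~{estimate_memory_gb(params, bits):,.1f} GB")
```

Run this way, a 1B-parameter model at 4-bit quantization fits comfortably in a 2GB tier, while a 671B-parameter model lands in the hundreds of gigabytes, which is consistent with the tiers above.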
One critical metric to consider is token generation speed, which improves significantly with higher memory configurations. Systems with more memory are better equipped to process large datasets and execute complex models, making them indispensable for tasks such as natural language processing, image generation, and predictive analytics.

[Video: Local Desktop AI Compared: 2GB to 1024GB, via YouTube]

Hardware Configurations: Matching Systems to Workloads

Different hardware configurations cater to varying AI workloads, and selecting the right setup is crucial for achieving efficient performance. Below is a breakdown of how various configurations perform:

• Low-end systems: devices like the Jetson Orin Nano (2GB RAM) are limited to lightweight models and basic applications, such as object detection or simple automation tasks.
• Mid-range GPUs: options such as the Tesla P40 (8GB RAM) and RTX 6000 ADA (48GB RAM) strike a balance between cost and performance. These systems can handle larger models with moderate efficiency, making them suitable for small to medium-scale AI projects.
• High-end systems: machines like the Apple M2 Mac Pro (128GB RAM) and 512GB Mac M4 are designed for advanced models like Deepseek R1. These setups provide the memory and processing power needed for large-scale AI workloads, including deep learning and complex simulations.

CPU-only setups, while less common, can also support massive models when paired with extensive memory. For example, systems equipped with 1TB of RAM can handle computationally intensive tasks, though they may lack the speed and efficiency of GPU-accelerated configurations. This highlights the importance of matching hardware capabilities to the specific computational demands of your AI tasks.

AI Models: Size and Complexity Matter

The size and complexity of AI models are key factors influencing their hardware requirements. Smaller models, such as Gemma3 with 1 billion parameters, are well suited to low-memory setups and can perform tasks like text summarization or basic image recognition. These models are ideal for users with limited hardware resources or those seeking cost-effective solutions.

In contrast, larger models like Deepseek R1, which scale up to 671 billion parameters, demand high-memory systems and advanced GPUs or CPUs to function efficiently. These models are designed for tasks requiring significant computational power, such as advanced natural language understanding, generative AI, and large-scale data analysis. The disparity in hardware requirements underscores the importance of tailoring your setup to the specific needs of your AI applications.
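Since token generation speed is the recurring benchmark here, a minimal sketch of how you might measure it on your own machine follows. It assumes a locally running Ollama server on its default port (11434) and a model you have already pulled; the model name and prompt are placeholders, and Ollama's /api/generate response reports generated tokens in eval_count and elapsed time in eval_duration (nanoseconds).

```python
# Measure local token generation speed against a running Ollama server.
# Assumes Ollama's default HTTP API at localhost:11434 and that the
# named model has already been pulled ("gemma3:1b" is a placeholder).
import json
import urllib.request

def tokens_per_second(model: str, prompt: str) -> float:
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate", data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # eval_count = tokens generated; eval_duration is in nanoseconds.
    return result["eval_count"] / (result["eval_duration"] / 1e9)

if __name__ == "__main__":
    rate = tokens_per_second("gemma3:1b", "Explain RAM in one sentence.")
    print(f"{rate:.1f} tokens/sec")
```

Running the same prompt and model across machines gives a like-for-like comparison of the hardware tiers discussed above.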
Key Performance Insights

Testing AI models across various hardware configurations has revealed several critical insights that can guide your decision-making:

• Memory capacity: higher memory directly correlates with improved processing speed and scalability, making it a crucial factor for running complex models.
• Unified memory architecture: found in Apple systems, this feature enhances AI workloads by allowing seamless access to shared memory resources, improving overall efficiency.
• Consumer-grade hardware: while affordable, these systems often struggle with large-scale models due to limitations in memory and processing power, making them less suitable for demanding applications.

These findings emphasize the need to carefully evaluate your hardware options based on the size, complexity, and computational demands of your AI tasks.

Optimizing Local AI Deployment

To achieve efficient and cost-effective AI performance on local desktop hardware, consider the following strategies:

• Ensure your hardware configuration matches the size and complexity of the AI models you plan to run. This alignment is critical for avoiding performance bottlenecks; a minimal sketch of such a check appears at the end of this article.
• Use tools like Ollama to simplify the process of downloading, configuring, and running AI models locally. These tools can streamline deployment and reduce setup time.
• Invest in high-memory systems if your workload involves large-scale models or extensive data processing. While the upfront cost may be higher, the long-term benefits in performance and scalability are significant.

By following these recommendations, you can maximize the performance of your local AI deployments while staying within budget and ensuring efficient resource utilization.

Challenges and Future Developments

Despite recent advancements, consumer hardware still faces limitations when supporting the largest AI models. Memory constraints, processing speed, and scalability remain significant challenges, particularly for users with budget-friendly setups. However, ongoing developments in GPUs, CPUs, and memory architectures are expected to address these issues, paving the way for more powerful and accessible AI systems.

Emerging technologies, such as quantum computing and next-generation GPUs, hold the potential to transform local AI deployment. These advancements promise to deliver unprecedented processing power and efficiency, allowing broader adoption of AI across industries and applications.

Media Credit: Dave's Garage
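As a closing illustration of the first recommendation above (matching hardware to model size), here is a hedged sketch that reads total system RAM and suggests a model tier before you pull anything. The sysconf calls are Linux-specific, and the tier thresholds and model suggestions are illustrative assumptions loosely mirroring the 2GB/8GB/128GB tiers discussed earlier, not figures from the comparison.

```python
# Suggest a local-model tier from available system RAM.
# Linux-specific sysconf calls; thresholds and model names are
# illustrative assumptions, not recommendations from the article.
import os

def total_ram_gb() -> float:
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9

def suggest_tier(ram_gb: float) -> str:
    if ram_gb < 4:
        return "lightweight models only (e.g. a 1B-parameter Gemma3)"
    if ram_gb < 32:
        return "mid-sized models, ideally quantized"
    if ram_gb < 128:
        return "large models; the very largest still need 128GB+"
    return "high-memory tier: advanced models such as Deepseek R1 variants"

if __name__ == "__main__":
    ram = total_ram_gb()
    print(f"~{ram:.0f} GB RAM detected: {suggest_tier(ram)}")
```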