
New GOP bill would protect AI companies from lawsuits if they offer transparency
Sen. Cynthia Lummis, R-Wyo., is introducing legislation Thursday that would shield artificial intelligence developers from an array of civil liability lawsuits provided they meet certain disclosure requirements.
Lummis' bill, the Responsible Innovation and Safe Expertise Act, seeks to clarify that doctors, lawyers, financial advisers, engineers and other professionals who use AI programs in their decision-making retain legal liability for any errors they make — so long as AI developers publicly disclose how their systems work.
'This legislation doesn't create blanket immunity for AI — in fact, it requires AI developers to publicly disclose model specifications so professionals can make informed decisions about the AI tools they choose to utilize,' Lummis, a member of the Commerce Committee, said in a statement first shared with NBC News. 'It also means that licensed professionals are ultimately responsible for the advice and decisions they make. This is smart policy for the digital age that protects innovation, demands transparency, and puts professionals and their clients first.'
Lummis' office touted the bill as the first piece of federal legislation to offer clear guidelines for AI liability in a professional context. The measure would not govern liability for other AI applications, such as self-driving vehicles, and it would not provide immunity to AI developers who act recklessly or willfully engage in misconduct.
'AI is transforming industries — medicine, law, engineering, finance — and becoming embedded in professional tools that shape critical decisions,' her office said in a release. 'But outdated liability rules discourage innovation, exposing developers to unbounded legal risk even when trained professionals are using these tools.'
Exactly who is liable when AI is used in sensitive medical, legal or financial situations is a bit of a gray area, with some states seeking to enact their own standards.
The House-passed 'One Big Beautiful Bill,' which is advancing through Congress with the support of President Donald Trump, includes a provision that would ban states from enacting any AI regulations for 10 years. Senate Republicans last week proposed changing the provision to instead withhold federal broadband funding from states that regulate AI.
Both Democratic and Republican state officials have criticized the effort to prohibit state-level regulations over the next decade, while AI executives have argued that varying state laws would stifle industry growth when the United States is in stiff competition with countries like China.
Related Articles


Glasgow Times
Plans for Glasgow O2 ABC recommended despite 'negative impact'
This is despite the planning document admitting that the plans raise a 'significant issue of the negative impact on the Conservation Area'. The document also states that 'the scale of the proposed development would overall not be considered to preserve or enhance the character or appearance of Listed Buildings and the Conservation Area'. But the recommendation concludes that 'this negative impact is considered to be outweighed by the significantly positive impact of developing this site'.

The document details plans for a mixed-use student accommodation and hospitality site. Over eight floors, including a basement, House of Social would comprise student accommodation as well as a food hall, a public courtyard, a bar, a gym and more. New images illustrating the proposals for the site of the former ABC music venue on Sauchiehall Street were unveiled today.

If successful, the proposed Vita development, featuring the company's House of Social brand, would bring around £70 million of investment to Sauchiehall Street. The ground-floor food hall would provide space for five emerging food brands, with over 400 covers and a large bar. Moving from day into night, the food hall would become an events space featuring music, entertainment and a community hub.

The student accommodation would offer 356 bed spaces: 306 in four-, five- and six-bedroom 'houses' with a shared kitchen and lounge, plus 50 studios. It would also feature a fitness centre, social and study spaces, café-style lounges and cycle storage.

James Rooke, Planning Director for Vita Group, which has successfully delivered other sites in Glasgow, said: 'We believe our proposals offer a unique approach to student living, and the benefits the development will bring extend far beyond the student community. It will generate significant economic benefits, will help to reenergise Sauchiehall Street and contribute to the city's Golden Z ambitions. This is an incredibly challenging site to redevelop, and we've worked hard to create proposals that are deliverable and appropriate.'

The proposals have the support of a wide range of stakeholders, including local businesses and the Chamber of Commerce. Stuart Patrick, Chief Executive of the Chamber, welcomed the plans and said: 'It is critical that this key site is brought forward for development as soon as possible. There's no doubt it's been a blight on Sauchiehall Street, and we need to secure this much-needed investment, which would be transformational.'

The Glasgow Times previously reported that locals had branded the historic music venue an 'eyesore'. The half-demolished O2 ABC site has been dormant for months since initial demolition works were completed at the end of last year. The former cinema turned nightclub had lain derelict since the second Glasgow School of Art fire spread to its roof in July 2018.


Daily Mail
The jobs teenagers should train to do NOW to beat AI in the future
Exam season is well and truly upon us - but as AI increasingly transforms the workplace, what jobs should those currently sitting their GCSEs and A-levels be considering?

A new report published this week by accounting and auditing firm PricewaterhouseCoopers (PwC) found that employees in sectors that have embraced AI rather than resisted it, such as financial services and IT, command higher wages than peers in sectors less exposed to the technology. The latest annual PwC global AI jobs barometer, which examined nearly a billion job adverts across the globe, found that the average wage premium for workers in AI-skilled jobs rose to 56% last year, up from 25% in 2023.

PwC's Chief Economist, Barret Kupelian, told BBC Radio 5 Live Breakfast that it is already clear that 'AI is inevitably a technology that will have an impact on our working lives.' He told the programme's co-host Rick Edwards: 'We are seeing a consistent pick-up of AI skills across all sectors of the economy but in particular in three main sectors, the IT, financial services and professional services sector.' Kupelian added that there was also 'a wage premium associated with AI skills, at least for the time being'.

Asked which jobs are most likely to be untouched by AI in a decade's time, the economist said people should look to traditional trades, with roles such as plumber, electrician and decorator. He explained: 'It appears to me that jobs that require quite a lot of manual labour...I don't think the technology is skilled there, in terms of augmenting those skills.'

Elsewhere, the PwC spokesman said that roles requiring 'a high degree of judgement and creativity' are also unlikely to be automated any time soon, because they rely on 'bespoke skills that are quite difficult to replicate on a digital basis.'

But those exam results may not be quite as useful in the future, the report also suggests, with demand for formal degrees falling 'especially quickly' in AI-exposed jobs. The report found that the percentage of AI-exposed jobs requiring a degree fell from 64% in 2019 to 56% in 2024.

Phillippa O'Connor, Chief People Officer at PwC UK, said: 'While degrees are still important for many jobs, the reduction in degree requirements suggests employers are looking at a broader range of measures to assess skills and potential. Continuous learning to broaden skills, including AI and technology skills, will be more important than ever.'

Turning to the report's outlook for the UK labour market, Kupelian said many jobs were likely to be augmented by AI in the future, rather than completely automated. Reflecting on research suggesting that the revenue of businesses more exposed to AI is growing faster than that of those that are not, he urged caution, saying 'it possibly suggests there are productivity gains associated with the use of the technology but it doesn't necessarily mean it is associated with the technology per se'.

Last week, a seasoned software engineer - once earning a comfortable six-figure salary - revealed he was now living in an RV, driving for DoorDash and battling financial insecurity, saying AI had taken his livelihood.
Shawn K - whose full legal last name is just one letter - says he's among the early wave of knowledge workers dealing with the economic fallout of AI advancements, a trend he believes is 'coming for basically everyone in due time.'

In a personal essay on his Substack, Shawn painted a picture of his current reality. 'As I climb into my little twin sized bed in my small RV trailer on a patch of undeveloped deep rural land in the Central New York highlands, exhausted from my six hours of DoorDash driving to make less than $200 that day, I check my emails one last time for the night: no responses from the 745th through 756th job applications that I put in over the last week for engineering roles I'm qualified or over-qualified for,' he wrote. He has closed in on the 800-application mark over more than a year of unemployment as a software engineer.

Despite owning three properties - a fixer-upper in upstate New York and two cabins on rural land - his financial situation has only worsened since he was laid off from his engineering job, which paid around $150,000 annually. He has since said that he moved to New York to care for his family and build long-term equity through real estate, an opportunity he said hadn't existed on the West Coast for more than 15 years.

Shawn attributes his sudden unemployment and fruitless job search to AI. 'Something has shifted in society in the last 2.5 years,' he wrote in his Substack, describing how AI led to him and many talented developers at his previous company being laid off despite the company's strong performance. He said that getting his resume seen has become a 'sisyphusian task' - in reference to a task requiring continual and often ineffective effort - and the technical interview process a 'PTSD-inducing minefield.'

Shawn explained that companies are doing what they know best: practicing capitalism. 'The economics are very simple: if you can produce the same product and same results while drastically cutting your expenses, what business wouldn't do that? In fact you would have to be crazy not to,' he wrote. 'We have reached a time where human labor is no longer a necessary input to generate economic value, which is a drastic departure from everything that has come in history before.'

Shawn estimates he has interviewed with about 10 companies in the last year, often getting through multiple rounds but never receiving an offer. He wrote that he suspects his resume is 'filtered out of consideration by some half-baked AI candidate finder service because my resume doesn't mention enough hyper-specific bleeding-edge AI terms.' If he makes it past the bots, he explained, he is then competing with 'the other 1,000 applicants (bots, foreign nationals, and other displaced-by-AI tech workers) who have applied within the first two hours of a job posting going live.'


Geeky Gadgets
How to Run a 600 Billion Parameter AI Model on Your PC Locally
What if you could run a colossal 600 billion parameter AI model on your personal computer, even with limited VRAM? It might sound impossible, but thanks to the K-Transformers framework, this ambitious goal is now within reach. For developers and AI enthusiasts, running such massive models has always been constrained by hardware limitations - until now. By optimizing memory usage and computational efficiency, K-Transformers offers a practical route for those without access to high-end GPUs or expensive cloud services: the power of advanced AI, run locally, without breaking the bank.

In this step-by-step guide, Fahd Mirza walks you through everything you need to know to get started with K-Transformers. You'll learn how to install the framework, configure your system for optimal performance, and apply techniques like gradient checkpointing and mixed precision training to push your hardware to its limits. Whether you're a researcher experimenting with large-scale AI or a developer eager to explore new tools, by the end you'll have a working setup and a deeper understanding of how to get the most out of modest hardware.

TL;DR Key Takeaways:
- K-Transformers is an optimization framework that minimizes memory usage and enhances computational performance, allowing large-scale AI models (e.g., 600B parameters) to run on hardware with limited VRAM.
- System requirements include a GPU with at least 8GB of VRAM (lower capacities are supported with optimization), CUDA support, an up-to-date Python environment, and sufficient storage for model weights and dependencies.
- Installation involves creating a virtual environment, installing AI libraries like PyTorch or TensorFlow, downloading K-Transformers, and integrating it into your workflows.
- Key optimization techniques for limited VRAM include gradient checkpointing, model parallelism, mixed precision training, and dynamic batch sizing.
- Best practices include monitoring resource usage, experimenting with configurations, and keeping software and drivers updated.

Understanding K-Transformers and Its Importance

K-Transformers is a framework designed to optimize the execution of large-scale AI models. Its primary function is to minimize memory usage and streamline computation, making it possible to run resource-intensive models on systems with constrained VRAM. For developers and researchers, it is particularly valuable when experimenting with advanced models without access to high-end GPUs or cloud infrastructure.

System Requirements for Running K-Transformers

Before proceeding with the installation, ensure your system meets the following requirements:
- GPU: a card with at least 8GB of VRAM is recommended, though K-Transformers can optimize for lower capacities.
- GPU acceleration: support for CUDA or another GPU acceleration framework is essential for good performance.
- Python environment: an up-to-date Python setup compatible with AI libraries such as PyTorch or TensorFlow.
- Storage: adequate disk space for model weights, dependencies, and temporary files.

Additionally, make sure your GPU drivers and software stack are up to date; outdated drivers can cause installation errors or performance bottlenecks. A quick way to sanity-check your setup is sketched below.
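The following short script checks for a CUDA-capable GPU, reports its VRAM, and confirms the driver tooling is on the path. It is a minimal sketch using only standard PyTorch and standard-library calls; the 8GB threshold mirrors the recommendation above rather than any hard limit in K-Transformers itself.

```python
# check_env.py - sanity-check GPU, CUDA and VRAM before installing.
import shutil

import torch

def check_environment(min_vram_gb: float = 8.0) -> None:
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected; GPU acceleration will not work.")
        return
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name} | VRAM: {vram_gb:.1f} GB | CUDA: {torch.version.cuda}")
    if vram_gb < min_vram_gb:
        # Below the recommended threshold: plan on the optimization
        # techniques described later in this guide.
        print(f"Under {min_vram_gb:.0f} GB of VRAM: expect to rely on optimization.")
    if shutil.which("nvidia-smi") is None:
        print("nvidia-smi not found; check that your GPU drivers are installed.")

if __name__ == "__main__":
    check_environment()
```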
Step-by-Step Installation Guide

Follow these steps to install K-Transformers and prepare your system for running large-scale AI models:
1. Create a virtual environment: isolate dependencies to prevent conflicts with existing software. Tools like `venv` or `conda` are commonly used for this purpose.
2. Install AI libraries: install PyTorch or TensorFlow, ensuring compatibility with your GPU and CUDA version. This step is critical for GPU acceleration.
3. Download K-Transformers: obtain the package from its official repository or another trusted source, verifying the source to ensure the integrity of the files.
4. Install the package: use a package manager like pip to install K-Transformers, then test the setup by importing the library in a Python script to confirm it is functioning correctly.

Once installed, you can integrate K-Transformers into your AI workflows by modifying your model scripts to use its optimization features. This integration is key to running large-scale models efficiently.

Optimizing 600B AI Models for Limited VRAM

Running a 600B parameter model on hardware with limited VRAM requires careful planning. The following strategies, illustrated in the sketch after this list, all trade a little compute or precision for a large reduction in memory pressure:
- Gradient checkpointing: reduces memory usage during backpropagation by recomputing rather than storing intermediate results, letting you work with larger models.
- Model parallelism: distributes the model across multiple GPUs, reducing the memory burden on each one.
- Mixed precision training: uses lower-precision data types, such as FP16, to cut memory consumption while maintaining acceptable accuracy.
- Dynamic batch sizing: adjusts batch sizes to the available memory to prevent allocation errors and make full use of resources.
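To make these ideas concrete, here is a minimal PyTorch sketch of gradient checkpointing, mixed precision, and OOM-driven dynamic batch sizing. This is illustrative plain PyTorch rather than K-Transformers' own API: `blocks`, `model`, `loss_fn`, and the batch are hypothetical placeholders, and a real setup should follow the K-Transformers documentation.

```python
# vram_tricks.py - the memory-saving techniques above, in plain PyTorch.
import torch
from torch.utils.checkpoint import checkpoint

def forward_with_checkpointing(blocks, x):
    # Gradient checkpointing: recompute activations during the backward
    # pass instead of keeping them all resident in VRAM.
    for block in blocks:
        x = checkpoint(block, x, use_reentrant=False)
    return x

def train_step(model, optimizer, scaler, loss_fn, inputs, targets):
    optimizer.zero_grad(set_to_none=True)
    # Mixed precision: run the forward pass in FP16 to roughly halve
    # activation memory; the GradScaler guards against FP16 underflow.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

def run_with_dynamic_batch(step_fn, batch, min_batch=1):
    # Dynamic batch sizing: halve the batch and retry whenever CUDA runs
    # out of memory, down to a configurable minimum batch size.
    while True:
        try:
            return step_fn(batch)
        except torch.cuda.OutOfMemoryError:
            if len(batch) // 2 < min_batch:
                raise
            torch.cuda.empty_cache()
            batch = batch[: len(batch) // 2]
```

A typical loop would create the scaler once with `scaler = torch.cuda.amp.GradScaler()` and wrap each call to `train_step` in `run_with_dynamic_batch`. Model parallelism is omitted here because it depends heavily on the framework and GPU topology in use.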
By combining these techniques with K-Transformers' built-in memory optimizations, you can run large-scale models effectively on limited hardware.

Troubleshooting and Ensuring Smooth Execution

Despite careful preparation, issues may arise during installation or execution. Common problems and their solutions:
- Installation errors: verify that all required libraries are installed and compatible with your system; missing or outdated dependencies are a frequent cause.
- Memory allocation failures: reduce batch sizes, enable gradient checkpointing, or switch to mixed precision training to free up memory.
- Performance bottlenecks: ensure GPU acceleration is enabled and your drivers are up to date, and use monitoring tools to identify inefficiencies in system utilization.

If problems persist, consult the K-Transformers documentation or community forums, which often provide solutions to common challenges.

Maximizing Performance with Best Practices

To get the most from K-Transformers and your hardware, consider the following best practices:
- Monitor resource usage: regularly check GPU utilization and memory consumption to identify and address bottlenecks (a small helper is sketched below).
- Experiment with configurations: test different optimization settings to find the most effective setup for your specific model and hardware.
- Stay updated: keep your software stack, including K-Transformers and your AI libraries, current to benefit from the latest features, improvements, and bug fixes.

These practices will help you achieve efficient, reliable execution of large-scale AI models on your local machine.
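As a companion to the monitoring advice, the helper below prints PyTorch's built-in GPU memory counters so you can see where memory spikes. It is a minimal sketch using standard `torch.cuda` calls; `model` and `inputs` in the usage comments are hypothetical placeholders.

```python
# gpu_monitor.py - report GPU memory usage via PyTorch's built-in counters.
import torch

def report_gpu_memory(tag: str) -> None:
    # Print currently allocated, reserved, and peak-allocated GPU memory.
    if not torch.cuda.is_available():
        return
    gib = 1024**3
    print(f"[{tag}] allocated={torch.cuda.memory_allocated() / gib:.2f} GiB "
          f"reserved={torch.cuda.memory_reserved() / gib:.2f} GiB "
          f"peak={torch.cuda.max_memory_allocated() / gib:.2f} GiB")

# Example usage, bracketing a forward pass to find memory spikes:
# torch.cuda.reset_peak_memory_stats()
# report_gpu_memory("before")
# outputs = model(inputs)  # `model` and `inputs` are placeholders
# report_gpu_memory("after")
```

Pairing these readings with `nvidia-smi` gives a fuller picture, since PyTorch's counters only track memory that PyTorch itself has allocated.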