
Latest news with #GSAi

'We Don't Want an AI Demo, We Want Answers': Federal Workers Grill Trump Appointee During All-Hands

WIRED

20-03-2025

  • Business
  • WIRED

'We Don't Want an AI Demo, We Want Answers': Federal Workers Grill Trump Appointee During All-Hands

Mar 20, 2025 7:43 PM

Leaked chats obtained by WIRED detail plans for the General Services Administration—and the staff's angry response.

The General Services Administration (GSA) building in Washington, DC, US.

On Thursday, Stephen Ehikian, the acting administrator of the General Services Administration, hosted his first all-hands meeting with GSA staff since his appointment to the position by President Donald Trump. The auditorium was packed, with hundreds of employees attending the meeting in person and thousands more tuning in online. While the tone of the live event remained polite, the chat that accompanied the live stream was a different story.

''My door is always open' but we've been told we can't go to the floor you work on?' wrote one employee, according to Google Meet chat logs for the event obtained by WIRED. Employees used their real names to ask questions, but WIRED has chosen not to include those names to protect the privacy of the staffers. 'We don't want an AI demo, we want answers to what is going on with [reductions in force],' wrote another, as over 100 GSA staffers added a 'thumbs up' emoji to the post.

But an AI demo is what they got. During the meeting, Ehikian and other high-ranking members of the GSA team showed off GSAi, a chatbot tool built by employees at the Technology Transformation Services. In its current form, the bot is meant to help employees with mundane tasks like writing emails. But Elon Musk's so-called Department of Government Efficiency (DOGE) has been pushing for a more complex version that could eventually tap into government databases. Roughly 1,500 people have access to GSAi today, and by tomorrow the bot will be deployed to more than 13,000 GSA employees, WIRED has learned.

Musk associates—including Ehikian and Thomas Shedd, a former Tesla engineer who now runs the Technology Transformation Services within GSA—have put AI at the heart of their agenda. Yesterday, GSA hosted a media roundtable to show its AI tool to reporters. 'All information shared during this event is on deep background - attributable to a "GSA official familiar with the development of the AI tool,"' an invite read. (Reporters from Bloomberg, The Atlantic, and Fox were invited. WIRED was not.)

GSA was one of the first federal agencies Musk's allies took over in late January, WIRED reported. Ehikian, who is married to a former employee of Elon Musk's X, works alongside Shedd and Nicole Hollander, who slept in Twitter HQ as an unofficial member of Musk's transition team at the company. Hollander is married to Steve Davis, who has taken a leading role at DOGE.

More than 1,835 GSA employees have taken a deferred resignation offer since the leadership change, as DOGE reportedly continues its push to 'right-size' the federal workforce. Employees who remain have been told to return to the office five days a week. Their credit cards—used for everything from paying for software tools to buying equipment for work—have a spending limit of $1.

Employees at the all-hands meeting—anxious to hear whether more people will lose their jobs and why they've lost access to critical software tools—were not pleased. 'We are very busy after losing people and this is not [an] efficient use of time,' one employee wrote. 'Literally who cares about this,' wrote another. 'When there are great tools out there, GSA's job is to procure them, not make mediocre replacements,' a colleague added. 'Did you use this AI to organize the [reduction in force]?' asked another federal worker.
'When will the Adobe Pro be given back to us?' said another. 'This is a critical program that we use daily. Please give this back or at least a date it will be back.'

Employees also pushed back against the return-to-office mandate. 'How does [return to office] increase collaboration when none of our clients, contractors, or people on our [integrated product teams] are going to be in the same office?' a GSA worker asked. 'We'll still be conducting all work over email or Google meetings.'

One employee asked Ehikian who the DOGE team at GSA actually is. 'There is no DOGE team at GSA,' Ehikian responded, according to two employees with direct knowledge of the events. Employees, many of whom have seen DOGE staff at GSA, didn't buy it. 'Like we didn't notice a bunch of young kids working behind a secure area on the 6th floor,' one employee told WIRED. Luke Farritor, a young former SpaceX intern who has worked at DOGE since the organization's earliest days, was seen wearing sunglasses inside the GSA office in recent weeks, as was Ethan Shaotran, another young DOGE worker who recently served as president of the Harvard mountaineering club. A GSA employee described Shaotran as 'grinning in a blazer and t-shirt.' GSA did not immediately respond to a request for comment sent by WIRED.

During the meeting, Ehikian showed off a slide detailing GSA's goals—right-sizing, streamlining operations, deregulation, and IT innovation—alongside current cost savings. 'Overall costs avoided' were listed at $1.84 billion. The number of employees using generative AI tools built by GSA was listed at 1,383. The number of hours saved from automations was said to be 178,352. Ehikian also pointed out that the agency has canceled or reduced 35,354 credit cards used by government workers and terminated 683 leases. (WIRED cannot confirm any of these statistics. DOGE has been known to share misleading and inaccurate statistics regarding its cost-saving efforts.)

'Any efficiency calculation needs a denominator,' a GSA employee wrote in the chat. 'Cuts can reduce expenses, but they can also reduce the value delivered to the American public. How is that captured in the scorecard?'

In a slide titled 'The Road Ahead,' Ehikian laid out his vision for the future. 'Optimize federal real estate portfolio,' read one pillar. 'Centralize procurement,' read another. Subcategories included 'reduce compliance burden to increase competition,' 'centralize our data to be accessible across teams,' and 'Optimize GSA's cloud and software spending.'

Online, employees seemed leery. 'So, is Stephen going to restrict himself from working on any federal contracts after his term as GSA administrator, especially with regard to AI and IT software?' asked one employee in the chat. There was no answer.

Democrats Demand Answers on DOGE's Use of AI

WIRED

12-03-2025

  • Business
  • WIRED

Democrats Demand Answers on DOGE's Use of AI

Mar 12, 2025 11:18 AM

Members of the House Oversight Committee sent dozens of requests to federal agencies on Wednesday about their use of AI software—and how Elon Musk could benefit.

Democrats on the House Oversight Committee fired off two dozen requests Wednesday morning pressing federal agency leaders for information about plans to install AI software throughout federal agencies amid the ongoing cuts to the government's workforce. The barrage of inquiries follows recent reporting by WIRED and the Washington Post concerning efforts by Elon Musk's so-called Department of Government Efficiency (DOGE) to automate tasks with a variety of proprietary AI tools and access sensitive data.

'The American people entrust the federal government with sensitive personal information related to their health, finances, and other biographical information on the basis that this information will not be disclosed or improperly used without their consent,' the requests read, 'including through the use of an unapproved and unaccountable third-party AI software.'

The central purpose of the requests is to press the agencies into demonstrating that any potential use of AI is legal and that steps are being taken to safeguard Americans' private data. The Democrats also want to know whether any use of AI will financially benefit Musk, who founded xAI and whose troubled electric car company, Tesla, is working to pivot towards robotics and AI.

The requests, first obtained by WIRED, are signed by Gerald Connolly, a Democratic congressman from Virginia. The Democrats are further concerned, Connolly says, that Musk could be using his access to sensitive government data for personal enrichment, leveraging the data to 'supercharge' his own proprietary AI model, known as 'Grok.'

In the requests, Connolly notes that federal agencies are 'bound by multiple statutory requirements in their use of AI software,' pointing chiefly to the Federal Risk and Authorization Management Program (FedRAMP), which works to standardize the government's approach to cloud services and ensure AI-based tools are properly assessed for security risks. He also points to the Advancing American AI Act, which requires federal agencies to 'prepare and maintain an inventory of the artificial intelligence use cases of the agency,' as well as 'make agency inventories available to the public.'

Documents obtained by WIRED last week show that DOGE operatives have deployed a proprietary chatbot called GSAi to approximately 1,500 federal workers. The GSA oversees federal government properties and supplies information technology services to many agencies. A memo obtained by WIRED reporters shows employees have been warned against feeding the software any controlled unclassified information. Other agencies, including the departments of Treasury and Health and Human Services, have considered using a chatbot, though not necessarily GSAi, according to documents viewed by WIRED.

WIRED has also reported that the United States Army is currently using software dubbed 'CamoGPT' to scan its records systems for any references to diversity, equity, inclusion, and accessibility (DEIA). An Army spokesperson confirmed the existence of the tool but declined to provide further information about how the Army plans to use it.

In the requests, Connolly writes that the Department of Education possesses personally identifiable information on more than 43 million people tied to federal student aid programs.
'Due to the opaque and frenetic pace at which DOGE seems to be operating,' he writes, 'I am deeply concerned that students', parents', spouses', family members' and all other borrowers' sensitive information is being handled by secretive members of the DOGE team for unclear purposes and with no safeguards to prevent disclosure or improper, unethical use.' The Washington Post previously reported that Musk's so-called Department of Government Efficiency had begun feeding sensitive federal data drawn from record systems at the Department of Education to analyze its spending.

Education Secretary Linda McMahon said Tuesday that she was proceeding with plans to fire more than a thousand workers at the department, joining hundreds of others who accepted DOGE 'buy outs' last month. The Education Department has lost nearly half of its workforce—the first step, McMahon says, in fully abolishing the agency.

'The use of AI to evaluate sensitive data is fraught with serious hazards beyond improper disclosure,' Connolly writes, warning that 'inputs used and the parameters selected for analysis may be flawed, errors may be introduced through the design of the AI software, and staff may misinterpret AI recommendations, among other concerns.' He adds: 'Without clear purpose behind the use of AI, guardrails to ensure appropriate handling of data, and adequate oversight and transparency, the application of AI is dangerous and potentially violates federal law.'

DOGE's Plans to Replace Humans With AI Are Already Under Way

Yahoo

10-03-2025

  • Business
  • Yahoo

DOGE's Plans to Replace Humans With AI Are Already Under Way

If you have tips about the remaking of the federal government, you can contact Matteo Wong on Signal at @matteowong.52.

A new phase of the president and the Department of Government Efficiency's attempts to downsize and remake the civil service is under way. The idea is simple: use generative AI to automate work that was previously done by people. The Trump administration is testing a new chatbot with 1,500 federal employees at the General Services Administration and may release it to the entire agency as soon as this Friday—meaning it could be used by more than 10,000 workers who are responsible for more than $100 billion in contracts and services. This article is based in part on conversations with several current and former GSA employees with knowledge of the technology, all of whom requested anonymity to speak about confidential information; it is also based on internal GSA documents that I reviewed, as well as the software's code base, which is visible on GitHub.

The bot, which GSA leadership is framing as a productivity booster for federal workers, is part of a broader playbook from DOGE and its allies. Speaking about GSA's broader plans, Thomas Shedd, a former Tesla engineer who was recently installed as the director of the Technology Transformation Services (TTS), GSA's IT division, said at an all-hands meeting last month that the agency is pushing for an 'AI-first strategy.' In the meeting, a recording of which I obtained, Shedd said that 'as we decrease [the] overall size of the federal government, as you all know, there's still a ton of programs that need to exist, which is a huge opportunity for technology and automation to come in full force.' He suggested that 'coding agents' could be provided across the government—a reference to AI programs that can write and possibly deploy code in place of a human. Moreover, Shedd said, AI could 'run analysis on contracts,' and software could be used to 'automate' GSA's 'finance functions.'

A small technology team within GSA called 10x started developing the program during President Joe Biden's term, and initially envisioned it not as a productivity tool but as an AI testing ground: a place to experiment with AI models for federal uses, similar to how private companies create internal bespoke AI tools. But DOGE allies have pushed to accelerate the tool's development and deploy it as a work chatbot amid mass layoffs (tens of thousands of federal workers have resigned or been terminated since Elon Musk began his assault on the government). The chatbot's rollout was first noted by Wired, but further details about its wider launch and the software's previous development had not been reported prior to this story.

The program—which was briefly called 'GSAi' and is now known internally as 'GSA Chat' or simply 'chat'—was described as a tool to draft emails, write code, 'and much more!' in an email sent by Zach Whitman, GSA's chief AI officer, to some of the software's early users. An internal guide for federal employees notes that the GSA chatbot 'will help you work more effectively and efficiently.' The bot's interface, which I have seen, looks and acts similar to that of ChatGPT or any similar program: Users type into a prompt box, and the program responds.
GSA intends to eventually roll the AI out to other government agencies, potentially under a different name. The system currently allows users to select from models licensed from Meta and Anthropic, and although agency staff currently can't upload documents to the chatbot, they likely will be permitted to in the future, according to a GSA employee with knowledge of the project and the chatbot's code repository. (A minimal illustrative sketch of this kind of model-selection interface appears after this article.) The program could conceivably be used to plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data, the GSA worker told me.

Spokespeople for DOGE did not respond to my requests for comment, and the White House press office directed me to GSA. In response to a detailed list of questions, Will Powell, the acting press secretary for GSA, wrote in an emailed statement that 'GSA is currently undertaking a review of its available IT resources, to ensure our staff can perform their mission in support of American taxpayers,' and that the agency is 'conducting comprehensive testing to verify the effectiveness and reliability of all tools available to our workforce.'

At this point, it's common to use AI for work, and GSA's chatbot may not have a dramatic effect on the government's operations. But it is just one small example of a much larger effort as DOGE continues to decimate the civil service. At the Department of Education, DOGE advisers have reportedly fed sensitive data on agency spending into AI programs to identify places to cut. DOGE reportedly intends to use AI to help determine whether employees across the government should keep their jobs. In another TTS meeting late last week—a recording of which I reviewed—Shedd said he expects that the division will be 'at least 50 percent smaller' within weeks. (TTS houses the team that built GSA Chat.) And arguably more controversial possibilities for AI loom on the horizon: For instance, the State Department plans to use the technology to help review the social-media posts of tens of thousands of student-visa holders so that the department may revoke visas held by students who appear to support designated terror groups, according to Axios.

Rushing into a generative-AI rollout carries well-established risks. AI models exhibit all manner of biases, struggle with factual accuracy, are expensive, and have opaque inner workings; a lot can and does go wrong even when more responsible approaches to the technology are taken. GSA seemed aware of this reality when it initially started work on its chatbot last summer. It was then that 10x, the small technology team within GSA, began developing what was known as the '10x AI Sandbox.' Far from a general-purpose chatbot, the sandbox was envisioned as a secure, cost-effective environment for federal employees to explore how AI might be able to assist their work, according to the program's code base on GitHub—for instance, by testing prompts and designing custom models. 'The principle behind this thing is to show you not that AI is great for everything, to try to encourage you to stick AI into every product you might be ideating around,' a 10x engineer said in an early demo video for the sandbox, 'but rather to provide a simple way to interact with these tools and to quickly prototype.'

But Donald Trump appointees pushed to quickly release the software as a chat assistant, seemingly without much regard for which applications of the technology may be feasible.
AI could be a useful assistant for federal employees in specific ways, as GSA's chatbot has been framed, but given the technology's propensity to make up legal precedents, it also very well could not. As a recently departed GSA employee told me, 'They want to cull contract data into AI to analyze it for potential fraud, which is a great goal. And also, if we could do that, we'd be doing it already.' Using AI creates 'a very high risk of flagging false positives,' the employee said, 'and I don't see anything being considered to serve as a check against that.' A help page for early users of the GSA chat tool notes concerns including 'hallucination'—an industry term for AI confidently presenting false information as true—'biased responses or perpetuated stereotypes,' and 'privacy issues,' and instructs employees not to enter personally identifiable information or sensitive unclassified information. How any of those warnings will be enforced was not specified.

Of course, federal agencies have been experimenting with generative AI for many months. Before the November election, for instance, GSA had initiated a contract with Google to test how AI models 'can enhance productivity, collaboration, and efficiency,' according to a public inventory. The Departments of Homeland Security, Health and Human Services, and Veterans Affairs, as well as numerous other federal agencies, were testing tools from OpenAI, Google, Anthropic, and elsewhere before the inauguration. Some kind of federal chatbot was probably inevitable. But not necessarily in this form.

Biden took a more cautious approach to the technology: In a landmark executive order and subsequent federal guidance, the previous administration stressed that the government's use of AI should be subject to thorough testing, strict guardrails, and public transparency, given the technology's obvious risks and shortcomings. Trump, on his first day in office, repealed that order, with the White House later saying that it had imposed 'onerous and unnecessary government control.' Now DOGE and the Trump administration appear intent on using the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.

Article originally published at The Atlantic.
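The article above describes GSA Chat as a ChatGPT-style prompt box in which users pick from a small set of licensed models and type plain-text prompts (document upload is not yet supported). As a rough, hypothetical illustration of that kind of interface, and not GSA's actual code, the model identifiers and the call_model stub below are placeholders invented for this sketch:

    # Hypothetical sketch only -- not GSA Chat's implementation.
    # Model identifiers and call_model() are placeholders; a real system would
    # call whichever Meta- and Anthropic-licensed models the agency has deployed.

    AVAILABLE_MODELS = {
        "1": "anthropic-licensed-model",  # placeholder identifier
        "2": "meta-licensed-model",       # placeholder identifier
    }


    def call_model(model_id: str, prompt: str) -> str:
        """Stub standing in for whatever model gateway the real service uses."""
        return f"[{model_id}] response to: {prompt!r}"


    def chat() -> None:
        # Let the user pick one of the available models, as the article describes.
        print("Select a model:")
        for key, name in AVAILABLE_MODELS.items():
            print(f"  {key}. {name}")
        choice = input("> ").strip()
        model_id = AVAILABLE_MODELS.get(choice, AVAILABLE_MODELS["1"])

        # Plain-text prompts only: per the article, document upload is not yet supported.
        while True:
            prompt = input("prompt (blank line to quit)> ").strip()
            if not prompt:
                break
            print(call_model(model_id, prompt))


    if __name__ == "__main__":
        chat()

The sketch only shows the interaction pattern (choose a model, then exchange prompts and responses); any resemblance to the deployed tool beyond what the article states is assumed.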

Elon Musk's DOGE is reportedly automating government tasks with an AI chatbot as it continues to slash the federal workforce

Yahoo

10-03-2025

  • Business
  • Yahoo

Elon Musk's DOGE is reportedly automating government tasks with an AI chatbot as it continues to slash the federal workforce

Elon Musk-backed DOGE is rolling out an AI chatbot to some federal workers as it continues to slash government jobs, Wired reported. The department has officially been tasked with upgrading the federal government's technology and software use.

DOGE has rolled out a custom AI-powered chatbot, GSAi, to around 1,500 government workers as it continues to slash the federal workforce, Wired reported. The wider release follows a small pilot held in February, in which around 150 General Services Administration (GSA) workers were granted access to the bot. While the bot has been in the works for months, DOGE has greatly accelerated its deployment, sources told Wired.

The AI tool has reportedly been fine-tuned in a way that makes it safe for government use. At the moment, it is supposed to be used for 'general' tasks, but eventually, DOGE hopes to use it to analyze contracts and procurement data, according to a previous report. An internal memo about the product, reviewed by Wired, suggests employees use it for drafting emails, creating talking points, summarizing text, and writing code. It also warns employees not to 'type or paste federal nonpublic information (such as work products, emails, photos, videos, audio, and conversations that are meant to be pre-decisional or internal to GSA) as well as personally identifiable information as inputs.' One employee told the outlet the AI tool was 'about as good as an intern' and gave 'generic and guessable answers.'

Employees can interact with GSAi via a chatbot interface similar to ChatGPT. The bot uses several models, including Anthropic's Claude Haiku 3.5, Claude Sonnet 3.5 v2, and Meta's Llama 3.2, depending on the task, the report said. (A minimal illustrative sketch of this kind of task-based routing appears after this article.) Representatives for the GSA did not immediately respond to a request for comment from Fortune, made outside normal working hours.

The GSA is one of the government agencies that has been gutted by DOGE-directed layoffs. Last month, the agency suddenly dismissed over 1,000 employees and has set future goals to cut staff by 63% in its Public Building Service division, multiple current and former GSA employees told NPR. DOGE has also closed GSA's technology consulting unit, 18F, which had a staff of 90 to 100 technology researchers, website designers, and product managers. Rolling out a custom-made chatbot could be a way to justify the spate of firings by boosting the productivity and efficiency of the remaining government workers.

DOGE leaders have long emphasized the need for better technology to increase government efficiency. In a 2024 Wall Street Journal op-ed laying out their plans for DOGE, Musk and then co-leader Vivek Ramaswamy said they wanted to recruit a 'lean team' of legal and technology experts. Since then, several software engineers and former employees of Musk's various technology companies have been linked to DOGE.

The department has officially been tasked with upgrading the federal government's technology and software use. In an executive order establishing DOGE, the team was tasked with implementing 'the President's DOGE Agenda' by modernizing 'Federal technology and software to maximize governmental efficiency and productivity.' The agency is technically a revamped version of the U.S. Digital Service, which was renamed the Department of Government Efficiency in the same executive order. Founding members, along with current and former employees of the U.S. Digital Service, previously told Fortune that DOGE's actions have been a 'betrayal' of the agency's original mission.
They argue that Musk and his allies have 'weaponized' the office that was previously nonpartisan. According to its still-active website, USDS recruited 'mission-driven professionals' primarily from the private sector—including major tech firms like Amazon and Google—for short-term 'tours of civic service,' typically lasting two years. These engineers, designers, product managers, and digital policy experts collaborated in small teams with agencies such as the Social Security Administration, Veterans Affairs, Health and Human Services, and the IRS.

Are you an employee at GSA with information to share? Contact this reporter securely via Signal at beatricenolan.08 from a non-work device.

This story was originally featured on Fortune.com.
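The report above says the chatbot draws on several models 'depending on the task.' As a rough, hypothetical illustration of what that kind of routing can look like (this is not the GSAi implementation; the task-to-model mapping and the call_model stub are assumptions made for this sketch), a few lines of Python are enough:

    # Hypothetical sketch only -- not the GSAi implementation.
    # The task-to-model mapping is an assumption for illustration; call_model()
    # is a stub standing in for a real model API call.

    ROUTES = {
        "draft_email": "claude-haiku-3.5",    # quick, lightweight drafting
        "summarize": "claude-sonnet-3.5-v2",  # longer synthesis
        "write_code": "llama-3.2",            # code generation
    }

    DEFAULT_MODEL = "claude-haiku-3.5"


    def call_model(model_name: str, prompt: str) -> str:
        """Stub standing in for whatever model gateway the real service uses."""
        return f"[{model_name}] {prompt[:60]}"


    def respond(task: str, prompt: str) -> str:
        # Pick a model based on the task label; fall back to the default model.
        model = ROUTES.get(task, DEFAULT_MODEL)
        return call_model(model, prompt)


    if __name__ == "__main__":
        print(respond("summarize", "Summarize this procurement memo for a colleague."))

The point of the sketch is only the routing pattern: a fixed lookup from task type to model, with a default for anything unrecognized. How the real tool actually decides which model handles a given request is not described in the reporting.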
