
A DOGE Recruiter Is Staffing a Project to Deploy AI Agents Across the US Government
WIRED · Business

Caroline Haskins and Vittoria Elliott
May 2, 2025, 12:19 PM

A startup founder told a Palantir alumni Slack group that AI agents could do the work of tens of thousands of government employees. He was met with emojis of clowns and a man licking a boot.

[Photo: An aide sets up a poster depicting the logo for the DOGE Caucus before a news conference.]

A young entrepreneur who was among the earliest known recruiters for Elon Musk's so-called Department of Government Efficiency (DOGE) has a new, related gig, and he's hiring. Anthony Jancso, cofounder of AccelerateX, a government tech startup, is looking for technologists to work on a project that aims to have artificial intelligence perform tasks that are currently the responsibility of tens of thousands of federal workers.

Jancso, a former Palantir employee, wrote in a Slack group of about 2,000 Palantir alumni that he's hiring for a 'DOGE orthogonal project to design benchmarks and deploy AI agents across live workflows in federal agencies,' according to an April 21 post reviewed by WIRED. (Agents are programs that can perform work autonomously.)

'We've identified over 300 roles with almost full process standardization, freeing up at least 70k FTEs for higher-impact work over the next year,' he continued, essentially claiming that tens of thousands of federal employees could see many aspects of their jobs automated and replaced by these AI agents. Workers for the project, he wrote, would be based on site in Washington, DC, and would not require a security clearance; it isn't clear for whom they would work. Palantir did not respond to requests for comment.

The post was not well received. Eight people reacted with clown face emojis, three reacted with a custom emoji of a man licking a boot, two reacted with a custom emoji of Joaquin Phoenix giving a thumbs down in the movie Gladiator, and three reacted with a custom emoji with the word 'Fascist.' Three responded with a heart emoji.
'DOGE does not seem interested in finding "higher impact work" for federal employees,' one person said in a comment that received 11 heart reactions. 'You're complicit in firing 70k federal employees and replacing them with shitty autocorrect.'

'Tbf we're all going to be replaced with shitty autocorrect (written by chatgpt),' another person commented, which received one '+1' reaction.

'How "DOGE orthogonal" is it? Like, does it still require Kremlin oversight?' another person said in a comment that received five fire-emoji reactions. 'Or do they just use your credentials to log in later?'

AccelerateX was originally called AccelerateSF, which VentureBeat reported in 2023 had received support from OpenAI and Anthropic. In its earliest incarnation, AccelerateSF hosted a hackathon for AI developers aimed at using the technology to solve San Francisco's social problems. According to a 2023 Mission Local story, for instance, Jancso proposed using large language models to help businesses fill out permit forms, streamlining the construction paperwork process in a way that might help drive down housing prices. (OpenAI did not respond to a request for comment. Anthropic spokesperson Danielle Ghiglieri tells WIRED that the company 'never invested in AccelerateX/SF' but did sponsor a hackathon AccelerateSF hosted in 2023 by providing free access to its API at a time when its Claude API 'was still in beta.')

In 2024, the mission pivoted, with the venture becoming known as AccelerateX. In a post on X announcing the change, the company wrote, 'Outdated tech is dragging down the US Government. Legacy vendors sell broken systems at increasingly steep prices. This hurts every American citizen.' AccelerateX did not respond to a request for comment. According to sources with direct knowledge, Jancso disclosed that AccelerateX had signed a partnership agreement with Palantir in 2024.
According to the LinkedIn profile of Rachel Yee, described as one of AccelerateX's cofounders, the company looks to have received funding from OpenAI's Converge 2 Accelerator. Another of AccelerateSF's cofounders, Kay Sorin, now works for OpenAI, having joined the company several months after that hackathon. Sorin and Yee did not respond to requests for comment.

Jancso's cofounder, Jordan Wick, a former Waymo engineer, has been an active member of DOGE, appearing at several agencies over the past few months, including the Consumer Financial Protection Bureau, the National Labor Relations Board, the Department of Labor, and the Department of Education. In 2023, Jancso attended a hackathon hosted by ScaleAI; WIRED found that another DOGE member, Ethan Shaotran, also attended.

Since its creation in the first days of the second Trump administration, DOGE has pushed the use of AI across agencies, even as it has sought to cut tens of thousands of federal jobs. At the Department of Veterans Affairs, a DOGE associate suggested using AI to write code for the agency's website; at the General Services Administration, DOGE has rolled out the GSAi chatbot; the group has sought to automate the process of firing government employees with a tool called AutoRIF; and a DOGE operative at the Department of Housing and Urban Development is using AI tools to examine and propose changes to regulations.

But experts say that deploying AI agents to do the work of 70,000 people would be tricky, if not impossible. A federal employee with knowledge of government contracting, who spoke to WIRED on the condition of anonymity because they were not authorized to speak to the press, says, 'A lot of agencies have procedures that can differ widely based on their own rules and regulations, and so deploying AI agents across agencies at scale would likely be very difficult.'
Oren Etzioni, cofounder of the AI startup Vercept, says that while AI agents can be good at some tasks, like using an internet browser to conduct research, their outputs can still vary widely and be highly unreliable. For instance, customer service AI agents have invented nonexistent policies when trying to address user concerns. Even research, he says, requires a human to verify that what the AI produces is correct.

'We want our government to be something that we can rely on, as opposed to something that is on the absolute bleeding edge,' says Etzioni. 'We don't need it to be bureaucratic and slow, but if corporations haven't adopted this yet, is the government really where we want to be experimenting with the cutting edge AI?'

Etzioni adds that AI agents are not one-to-one replacements for jobs. AI can handle certain tasks or make others more efficient, but the idea that the technology could do the jobs of 70,000 employees is not realistic. 'Unless you're using funny math,' he says, 'no way.'

Jancso, first identified by WIRED in February, was one of the earliest recruiters for DOGE in the months before Donald Trump was inaugurated. In December, Jancso, who sources told WIRED had been recruited by Steve Davis, president of the Musk-founded Boring Company and a current member of DOGE, used the Palantir alumni group to recruit DOGE members. On December 2, 2024, he wrote, 'I'm helping Elon's team find tech talent for the Department of Government Efficiency (DOGE) in the new admin. This is a historic opportunity to build an efficient government, and to cut the federal budget by 1/3. If you're interested in playing a role in this mission, please reach out in the next few days.'
According to one source at SpaceX, who asked to remain anonymous because they are not authorized to speak to the press, Jancso appeared to be one of the DOGE members who worked out of the company's DC office in the days before the inauguration, along with several other people who would become some of DOGE's earliest members. SpaceX did not respond to a request for comment.

Palantir was cofounded by Peter Thiel, a billionaire and longtime Trump supporter with close ties to Musk. The company, which provides data analytics tools to several government agencies, including the Department of Defense and the Department of Homeland Security, has received billions of dollars in government contracts. During the second Trump administration, Palantir has been involved in helping to build a 'mega API' to connect data from the Internal Revenue Service to other government agencies, and is working with Immigration and Customs Enforcement to create a massive surveillance platform to identify immigrants to target for deportation.

Trump Tariffs Hit Antarctic Islands Inhabited by Zero Humans and Many Penguins
WIRED · Business

Caroline Haskins and Leah Feiger
Apr 2, 2025, 7:39 PM

The Heard and McDonald Islands are among the dozens of targets of President Donald Trump's latest round of tariffs. But they have no exports, because no one lives there.

[Photo: King penguins contemplating the snow on Heard Island, Antarctica.]

On Wednesday, President Donald Trump announced the US was imposing reciprocal tariffs on a small collection of Antarctic islands that are not inhabited by humans, as part of a global trade war aimed at asserting US dominance. The Heard and McDonald Islands, known for their populations of penguins and seabirds, can only be reached by sea.

Trump announced the countries now subject to tariffs at a Wednesday press conference, using a poster as a prop. Additional territories, including the Heard and McDonald Islands, which are, incidentally, not countries, were listed on sheets of paper distributed to reporters. One of the sheets claims that the Heard and McDonald Islands currently charge a 'Tariff to the U.S.A.' of 10 percent, clarifying in tiny letters that this includes 'currency manipulation and trade barriers.' In return, the sheet says, the US will charge 'discounted reciprocal tariffs' on the islands at a rate of 10 percent.

The islands are small. Their reported 37,000 hectares of land make them a little larger than Philadelphia. According to UNESCO, which designated the islands a World Heritage Site in 1997, they are covered in rocks and glaciers. Heard Island is the site of an active volcano, and McDonald Island is surrounded by several smaller rocky islands. The islands are home to large populations of penguins and elephant seals. The Australian Antarctic Division manages the islands, preserving the environment and conducting research on the large wildlife population, as well as on climate change's impact on Heard and McDonald's permanent glaciers.
On Wednesday, Australia and a number of its island territories, including the Christmas and Cocos (Keeling) Islands, were also hit with tariffs of 10 percent. Norfolk Island, which Australia also claims, got a tariff of 29 percent.

The White House did not immediately respond to WIRED's request for comment. When reached for comment, the Australian Antarctic Division referred WIRED to the country's Department of Foreign Affairs and Trade, which did not respond prior to publication.

'One could argue this is in breach of the international Antarctic spirit,' Elizabeth Buchanan, a polar geopolitics expert and senior fellow at the Australian Strategic Policy Institute, tells WIRED. Under the Antarctic Treaty, which promotes international scientific cooperation and stipulates that the continent should be used for peaceful purposes, land in Antarctica cannot be owned by any country. However, Australia has claimed since 1953 that the islands are Australian territories.

Australia also laid claim to the water surrounding the islands via a 2002 act that established a marine reserve. Last year, it passed a law extending the boundaries of that reserve, approximately quadrupling its size. The Australian Defence Force monitors the waters surrounding the Heard and McDonald Islands as part of 'Operation Resolute,' which covers the area 200 nautical miles from Australia's mainland and 'approximately 10 per cent of the world's surface.' In addition to the Heard and McDonald Islands, it also applies to the water surrounding the Christmas, Cocos (Keeling), Macquarie, Norfolk, and Lord Howe islands. The Australian Defence Force says the goal of Operation Resolute is to address 'security threats' like piracy and pollution. The Australian Antarctic Division says the area occasionally receives ships involved in scientific research, commercial fishing, and tourism.

Google Lifts a Ban on Using Its AI for Weapons and Surveillance
WIRED · Business

Paresh Dave and Caroline Haskins
Feb 4, 2025, 3:47 PM

Google published principles in 2018 barring its AI technology from being used for sensitive purposes. Weeks into President Donald Trump's second term, those guidelines are being overhauled.

[Photo: The Google Bay View campus in Mountain View, California, US.]

Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue 'technologies that cause or are likely to cause overall harm,' 'weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,' 'technologies that gather or use information for surveillance violating internationally accepted norms,' and 'technologies whose purpose contravenes widely accepted principles of international law and human rights.'

The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. 'We've made updates to our AI Principles. Visit for the latest,' the note reads. In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the 'backdrop' to why Google's principles needed to be overhauled.

Google first published the principles in 2018 as it moved to quell internal protests over the company's decision to work on a US military drone program. In response, it declined to renew the government contract and announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights. But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google's AI initiatives.
Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states that Google will implement 'appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.' Google also now says it will work to 'mitigate unintended or harmful outcomes.'

'We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,' wrote James Manyika, Google senior vice president for research, technology and society, and Demis Hassabis, CEO of Google DeepMind, the company's esteemed AI research lab. 'And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.' They added that Google will continue to focus on AI projects 'that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.'

US President Donald Trump's return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works much longer. Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as 'be socially beneficial' and maintain 'scientific excellence.' Added is a mention of 'respecting intellectual property rights.'

After the initial release of its AI principles roughly seven years ago, Google created two teams tasked with reviewing whether projects across the company were living up to the commitments. One focused on Google's core operations, such as search, ads, Assistant, and Maps. Another focused on Google Cloud offerings and deals with customers.
The unit focused on Google's consumer business was split up early last year as the company raced to develop chatbots and other generative AI tools to compete with OpenAI.

Timnit Gebru, a former co-lead of Google's ethical AI research team who was later fired from that position, claims the company's commitment to the principles had always been in question. 'I would say that it's better to not pretend that you have any of these principles than write them out and do the opposite,' she says.

Three former Google employees who had been involved in reviewing projects to ensure they aligned with the company's principles say the work was challenging at times because of varying interpretations of the principles and pressure from higher-ups to prioritize business imperatives.

Google still has language about preventing harm in its official Cloud Platform Acceptable Use Policy, which covers various AI-driven products. The policy forbids violating 'the legal rights of others' and engaging in or promoting illegal activity, such as 'terrorism or violence that can cause death, serious harm, or injury to individuals or groups of individuals.' However, when pressed about how this policy squares with Project Nimbus, a cloud computing contract with the Israeli government that has benefited the country's military, Google has said that the agreement 'is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.'

'The Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy,' Google spokesperson Anna Kowalczyk told WIRED in July. Google Cloud's Terms of Service similarly forbid any applications that violate the law or 'lead to death or serious physical harm to an individual.' Rules for some of Google's consumer-focused AI services also ban illegal uses and some potentially harmful or offensive uses.
