
Latest news with #ProjectStargate

Trump's AI action plan is a ‘Silicon Valley free-for-all': But what does it mean for Canada?

Yahoo

6 days ago

  • Business
  • Yahoo


On Wednesday, U.S. President Donald Trump unveiled the country's highly anticipated U.S. Artificial Intelligence Action Plan — a sweeping missive reaffirming his administration's laissez-faire approach to regulating AI. The 28-page document, titled Winning the AI Race, outlined the White House's strategy to assert American dominance and leadership in the technology by removing regulatory barriers, fast-tracking data centres, exporting American AI, and developing AI that is 'free from ideological bias.' Here's what to know about the U.S. plan and what it could mean for Canada's AI ambitions.

What is the U.S. AI Action Plan?

In a January 2025 White House memo, Trump signalled his intention to eliminate 'onerous' regulations on AI development and deployment. The U.S. AI Action Plan released Wednesday establishes a blueprint for putting that, and more, into motion. According to the plan, the government will take action in the 'coming weeks and months' to abolish certain federal regulations on AI and to accelerate the build-out of data centres by fast-tracking permits. Trump last week praised tech and energy giants for investing US$90 billion to turn Pennsylvania into an AI hub and has talked up a US$500 billion AI data centre-building spree known as Project Stargate, though that effort has yet to confirm even one data centre.

The plan also takes aim at 'woke' AI, following the administration's campaigns against diversity, equity, and inclusion initiatives across U.S. institutions. Federal procurement guidelines will be updated to ensure that the government contracts only with developers who 'ensure that their systems are objective and free from top-down ideological bias,' the plan said. It also directs the State Department to partner with the private sector to deliver American AI — from hardware to software and standards — 'to America's friends and allies around the world.'

Why is the U.S. going all-in on AI?

The U.S. has long touted the need to maintain global AI leadership, a mission that has intensified as the rapidly developing technology becomes the new front line in the showdown between the U.S. and China. 'It's a global competition now to lead in AI,' said David Sacks, often referred to as the White House's AI and crypto czar. 'We want the U.S. to win that race. It's very important that America continues to be the dominant power in AI.'

Why are people criticizing the plan?

Trump's AI agenda has come under fire from a broad range of critics, who argue that his vision mirrors the interests of Silicon Valley tech giants, prioritizing corporate interests over public safety, well-being, and environmental concerns. Earlier this week, more than 90 technology, labour, environmental, and civil society organizations signed the 'People's AI Action Plan' to directly counter the White House's AI ambitions. 'The Trump administration's latest AI directive is a billion-dollar giveaway to Big Tech. Instead of enforcing guardrails, this administration is gutting oversight. We need rules and accountability — not a Silicon Valley free-for-all,' said J.B. Branch of Public Citizen, a U.S.-based non-profit consumer advocacy group and a signatory of the People's AI Action Plan.

What is Canada's AI strategy?

Since taking office, Canadian Prime Minister Mark Carney has established a new AI mandate that positions the technology as a key driver of Canadian productivity and innovation and builds on the Canadian Sovereign AI Compute Strategy announced in 2024. Ottawa will continue to dole out $2 billion over five years to build out Canada's AI compute capacity, securing resources from data centres to the cutting-edge chips needed to train and operate AI systems. Canada's key priorities include helping homegrown AI firms scale; increasing AI adoption in the public and private sectors; courting key global partners like the U.S., the U.K., France, and Germany; and securing the country's digital sovereignty, according to digital innovation minister Evan Solomon.

What does the new plan mean for Canada?

Trump's latest directive reasserts that the U.S. will target regulations and 'bias,' in contrast to the previous Biden administration's approach, which aimed to balance innovation and safety. Canada's history of being a 'digital rule taker — and not maker' means that Ottawa looks for signals from other jurisdictions like the U.S. and the EU when formulating its own digital strategies and regulations, said Robert Fay, managing director of digital economy at the Centre for International Governance Innovation (CIGI), a Waterloo-based think tank. It remains to be seen whether Ottawa will align itself more closely with the U.S. approach or the EU's tougher-on-tech rules, but Solomon has said that Canada needs to be careful not to 'overindex on regulation.'

Some industry voices, like Randy Goebel, a fellow at the Alberta Machine Intelligence Institute (Amii), one of Canada's national AI institutes, say that Trump's plan could play to Canada's advantages as an AI R&D power. It confirms that the U.S. views the AI race as a zero-sum game between the U.S. and China, with Canada and its relatively small investments in AI seen as a bit player, Goebel said. 'This means that Canada — as long as we're on the U.S.'s 'friends and allies list' — will have the top U.S. AI tools readily available, can focus on strengthening the subject areas that we're good at and not be as concerned with what the Americans think is most important.'

When war got weird: 10 military experiments that seem straight out of sci-fi

Time of India

7 days ago

  • Entertainment
  • Time of India


War has a funny way of turning wild ideas into serious government projects. In the chaos of World War II and the paranoia of the Cold War, military minds across the globe cooked up experiments that sound less like national defense and more like fever dreams. Think remote-viewing psychics, napalm-strapped bats, pain rays, and yes, even a surgically wired CIA spy cat, code-named 'Acoustikitty,' who was supposed to eavesdrop on Soviet agents but promptly wandered off. These weren't jokes; they were fully funded, sometimes top-secret missions with real budgets and even weirder intentions. Most of these plans fizzled out, tripped up by science, common sense, or the sheer unpredictability of, say, cats. But they offer a fascinating, and often hilarious, look into what happens when military strategy meets sci-fi imagination. So if you've ever wondered what it looks like when serious defense programs go completely off the rails, you're in the right place.

Acoustikitty: The CIA tried to recruit cats; they refused to take orders

In the mid-1960s, the CIA launched a curious and ambitious project known as "Acoustikitty," aiming to use wired, microphone-equipped cats to covertly eavesdrop on open-air conversations between foreign agents and their Soviet contacts. The plan involved implanting a microphone in the cat's ear, a transmitter under its skin, and an antenna woven into its fur, allowing it to inconspicuously capture sensitive conversations in public places like parks. The technology functioned as intended, but the fundamental flaw lay in the cat's unpredictable behavior: true to feline nature, it couldn't be trained to follow specific commands or approach intended targets reliably. After several unsuccessful tests, the project was abandoned in 1967 without ever seeing operational use. Still, the story of Acoustikitty remains one of the most bizarre and enduring legends of Cold War espionage.

Project Stargate: Psychic soldiers for intel

Project Stargate was a Cold War-era US intelligence effort to explore the potential military use of psychic phenomena, especially remote viewing, the claimed ability to perceive distant or unseen targets without physical presence. Sparked by reports that the Soviet Union was heavily investing in psychic research, the CIA feared falling behind in this unconventional domain. From the 1970s to the 1990s, the program evolved under various codenames like SCANATE, GONDOLA WISH, and GRILL FLAME, eventually becoming known as Stargate. Supporters cite cases where remote viewers allegedly identified a crashed Soviet aircraft in Africa and described hidden facilities later verified by traditional intelligence. Critics, however, argue these successes were overstated, noting that remote viewing often produced vague or inaccurate information and that results could not be reliably replicated under controlled conditions, casting doubt on the program's scientific validity. The program was declassified and terminated in 1995 after a review found it failed to produce consistently reliable intelligence.

The Gay Bomb: A chemical to 'turn' enemies

In 1994, a US military lab at Wright-Patterson Air Force Base proposed developing a non-lethal chemical weapon designed to disrupt enemy morale by using a potent aphrodisiac to induce homosexual behavior among troops. The idea was part of a $7.5 million funding request aimed at researching chemicals that could impair enemy discipline and cohesion without causing physical harm. Although the "gay bomb" proposal was ultimately rejected, it resurfaced years later when the document was disclosed through a Freedom of Information request. The watchdog group Sunshine Project publicized the bizarre and controversial plan, which highlighted the extremes explored in psychological warfare during that era.

Avrocar: A real-life flying saucer attempt

The VZ-9AV Avrocar was a Cold War-era experimental aircraft designed to function as a high-speed, stealthy flying vehicle. Developed in the 1950s with a projected budget of $3.16 million (around $26 million today), it aimed to combine vertical takeoff and landing capabilities with advanced maneuverability. Despite its ambitious goals, the Avrocar failed to deliver in testing: it could only achieve brief hovers just above the ground and proved unstable in flight. The project was deemed unfeasible and was canceled before reaching operational use.

Bat Bombs: Explosive bats for warfare

In an unusual World War II proposal that sounds like science fiction, a quirky American inventor suggested using bats as miniature bombers. The plan involved capturing one million bats from Texas caves, attaching small napalm charges to them, and packing them into bomb casings designed to release them mid-air over Japanese cities. Once released, the bats would instinctively roost in the wooden structures common throughout Japan at the time. A timed ignition system would then detonate the incendiaries, causing widespread fires and destruction. Though never fully implemented, the concept highlights the extreme and imaginative lengths considered in wartime strategy.

Project Iceworm: Missiles hidden under ice

Project Iceworm was an ambitious Cold War plan to build a vast network of tunnels covering 52,000 square miles beneath Greenland's ice, about three times the size of Denmark. The design included trenches spaced four miles apart to house 600 mobile nuclear missiles, along with facilities to support 11,000 soldiers living underground, prepared for a rapid strike if necessary. A 1960 top-secret US Army report praised Greenland as an ideal location, just 3,000 miles from Moscow and strategically positioned for both offensive and second-strike capabilities. The mobility and concealment of the missiles beneath the ice were intended to prevent detection and ensure retaliation in the event of a Soviet nuclear attack.

Edgewood experiments: Drugs tested on troops

Between 1950 and 1975, approximately 6,720 US service members participated in chemical exposure experiments at the Edgewood Arsenal in Maryland. These tests involved 254 different chemicals and were primarily intended to study their effects on humans, evaluate protective gear, and assess how military personnel might respond to chemical warfare agents. Although no current tests can definitively confirm such past exposures, a thorough medical history and physical exam can help assess potential health risks. Participants may experience long-term psychological effects from the trauma of being used as test subjects, and concerned veterans are encouraged to consult healthcare providers for evaluation and support.

Pain Ray: A heat weapon for crowd control

The "Pain Ray," officially known as the Active Denial System (ADS), is a non-lethal weapon developed by the US military for crowd control and perimeter defense. It works by emitting a focused beam of millimeter-wave radiation that penetrates the outer layer of skin, producing an intense burning sensation without causing permanent damage. The beam heats the skin to around 130°F (54°C), triggering an instinctive pain response that causes individuals to quickly move away from the targeted area. Designed as a safer alternative to rubber bullets or tear gas, the Pain Ray has been tested in various scenarios but has seen limited deployment due to concerns about safety, public perception, and ethical implications. Critics argue that while it may not cause lasting harm, the psychological and potential long-term effects of such a weapon remain poorly understood. Despite this, the ADS represents a significant advancement in non-lethal crowd control technology.

Camel Corps: Military camels in America

The Camel Corps was a mid-19th-century US military experiment aimed at using camels as pack animals in the arid regions of the American Southwest. Initiated by Secretary of War Jefferson Davis in the 1850s, the program involved importing camels from the Middle East to test their effectiveness in desert conditions. Housed at Camp Verde, Texas, the camels proved to be strong, resilient, and well-suited for transporting supplies over long distances. Despite their success in field trials, the project was eventually abandoned due to the Civil War, lack of institutional support, and soldiers' preference for traditional animals like horses and mules.

Peacekeeper Trains: Nukes launched from trains

Peacekeeper Trains were a Cold War-era US military concept designed to make nuclear missile launches more survivable by placing them on mobile rail platforms. The idea was to load Peacekeeper intercontinental ballistic missiles (ICBMs) onto specially designed train cars that could travel across the national rail network, making their location unpredictable and reducing their vulnerability to a first-strike attack. By constantly moving or being deployed to remote tracks, these trains would ensure a credible second-strike capability. Although the system was tested and proven feasible, the program was never fully deployed due to high costs, logistical challenges, and arms reduction treaties.

(The article uses AI-generated images.)

AI Compute Arms Race: xAI Eyes 50M GPUs by 2030, OpenAI Targets 1M by 2025

Hans India

23-07-2025

  • Business
  • Hans India


As the capabilities of artificial intelligence models grow exponentially, a new front in the AI race is rapidly emerging: massive-scale compute infrastructure. The spotlight now turns to how much machine power and energy companies can deploy, rather than just how smart their algorithms are. Elon Musk's xAI and Sam Altman's OpenAI are setting ambitious benchmarks, signaling just how high the stakes have risen in this new tech arms race.

Taking to X (formerly Twitter), Musk announced xAI's goal of deploying 50 million H100-equivalent compute units by 2030. These are expected not only to match NVIDIA's most advanced GPUs but to surpass them in energy efficiency — a bold declaration of intent that positions xAI as a serious contender in the race to lead AI infrastructure. Meanwhile, Sam Altman has laid out OpenAI's roadmap to bring over 1 million GPUs online by the end of 2025, describing it as a "hundredfold expansion" of current capabilities. This aggressive scaling strategy reflects OpenAI's aim to secure and extend its leadership in AI innovation and deployment.

At the center of OpenAI's growth is Project Stargate, a sprawling infrastructure initiative that could see $500 billion invested over the next four years. The project's first major development is already underway in Abilene, Texas, where a 1,000-acre campus is being built to host what could become the world's largest AI training cluster. The project is backed by major tech players: SoftBank's Masayoshi Son chairs Stargate and steers its financial strategy, while OpenAI handles operational responsibilities. Collaborations with Microsoft, NVIDIA, Arm, and others aim to secure the high-performance chips required for such a massive expansion.

These strategic moves highlight a key industry shift: the next generation of AI breakthroughs may depend as much on compute capacity as on algorithmic innovation. But with this growth comes a steep energy bill. As both xAI and OpenAI scale their operations, concerns over energy consumption and environmental impact are rising. While OpenAI has acknowledged the challenge, specific sustainability efforts have yet to be detailed publicly. Whether it's OpenAI's rapid push through 2025 or xAI's long-term ambition for 2030, it's clear the AI infrastructure race is reshaping the future — not just in terms of intelligence, but in terms of sheer scale and speed.

Microsoft server hack impacts around 100 organisations; Stargate aims to build small data centre this year; Google and OpenAI models excel at Mathematical Olympiad

The Hindu

22-07-2025

  • Business
  • The Hindu


Microsoft server hack impacts around 100 organisations

Microsoft has sounded an alert and released a fix after hackers exploited a vulnerability in its SharePoint software to target businesses and government agencies. According to researchers, about 100 organisations were impacted, including major industrial firms, banks, auditors, healthcare companies, and multiple U.S. state-level and international government entities. The hack was a zero-day exploit, a cyber-security attack considered serious because it leverages a previously unknown security vulnerability, meaning security engineers have had 'zero days' to work on a fix. Other cybersecurity figures have backed the count of affected organisations but warned that the effects of the hack could be far-reaching, with more serious consequences emerging over longer time periods. The identity of the perpetrator has not been confirmed, but Google said some of the activity could be linked to a 'China-nexus threat actor.'

Stargate aims to build small data centre this year

The $500 billion Stargate Project, backed by OpenAI, SoftBank, and Oracle to accelerate America's AI infrastructure, has set a more modest goal: building a small data centre by the end of the year, probably in Ohio, per The Wall Street Journal. The collaboration may not have been smooth sailing, however, as the report cited sources who claimed SoftBank and OpenAI disagreed on key partnership terms, such as where to build the sites. Project Stargate was intended to create more than 100,000 jobs in the country. Both SoftBank and OpenAI have confirmed that they are moving 'with urgency on site assessments' while also advancing projects in multiple U.S. states. Meanwhile, Oracle Chairman Larry Ellison said the first data centre was already under construction in Texas.

Google and OpenAI models excel at Mathematical Olympiad

Google and OpenAI both announced that their artificial-intelligence (AI) models crossed the threshold needed to score gold at the International Mathematical Olympiad for high-school students. Both solved five out of six problems, using general-purpose 'reasoning' models to work through the mathematical concepts in natural language. While the International Mathematical Olympiad has been used as a benchmark for the mathematical abilities of AI models in the past, this marks the first time the competition has coordinated officially with some AI technologists. For this year's competition, Google used a general-purpose model called Gemini Deep Think, while rival OpenAI relied on a new experimental model focused on scaling up 'test-time compute.' These achievements hold promise for mathematicians, as well as researchers in other scientific disciplines, looking to better integrate AI models into their research.
