
Latest news with #AcceptableUsePolicy

Microsoft denies misuse of AI, Azure in Gaza conflict

United News of India

17-05-2025

  • Business
  • United News of India

Microsoft denies misuse of AI, Azure in Gaza conflict

Washington, May 17 (UNI) Responding to growing concerns among employees and the public, Microsoft Corporation has stated that it found no evidence its Azure cloud or artificial intelligence technologies were used by the Israeli military to target civilians or cause harm in the ongoing conflict in Gaza.

In a statement released after an internal review and an external fact-finding process, the tech giant said it had interviewed dozens of employees and assessed relevant documents, concluding that none of its technologies were used to inflict harm during the hostilities.

"Microsoft works with countries and customers around the world, including the Israel Ministry of Defense (IMOD), under a standard commercial relationship," the company said, adding that the terms of its services are governed by a strict Acceptable Use Policy and AI Code of Conduct. These guidelines prohibit the use of its cloud and AI services in any manner that inflicts harm or contravenes legal standards.

The company clarified that its services to the IMOD include software, professional support, Azure cloud services, and AI-powered language translation tools. It emphasised that Microsoft has not provided any bespoke surveillance or operational software typically used in military applications.

"We do not have visibility into how customers use our software on their own premises or servers, nor do we have access to IMOD's government cloud operations, which are supported through contracts with providers other than Microsoft," the company noted.

Microsoft acknowledged that it had extended limited emergency support to the Israeli government in the aftermath of the October 7, 2023 attacks to assist in hostage rescue efforts. "This help was provided with significant oversight, and not all requests were approved," it said.

Underscoring its broader commitment, Microsoft reaffirmed its support for cybersecurity in Israel and humanitarian assistance across both Israel and Gaza.
"Our work is informed and governed by our Human Rights Commitments. Based on everything we currently know, we believe Microsoft has abided by these commitments in Israel and Gaza," the company stated. UNI BDN RN

Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza

Boston Globe

17-05-2025

  • Business
  • Boston Globe

Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza

The partnership reflects a growing drive by tech companies to sell their artificial intelligence products to militaries for a wide range of uses, including in Israel, Ukraine and the United States. However, human rights groups have raised concerns that AI systems, which can be flawed and prone to errors, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people.

Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake 'additional fact-finding.' The statement did not identify the outside firm or provide a copy of its report.

The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined Friday to comment further. Microsoft declined to answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes.

The company's statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats. Microsoft said it had also provided 'special access to our technologies beyond the terms of our commercial agreements' and 'limited emergency support' to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on Oct. 7.

'We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others,' Microsoft said. 'We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza.'
The company did not answer whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps taken to safeguard the rights and privacy of Palestinians.

In its statement, the company also conceded that it 'does not have visibility into how customers use our software on their own servers or other devices.' The company added that it could not know how its products might be used through other commercial cloud providers. In addition to Microsoft, the Israeli military has extensive contracts for cloud or AI services with Google, Amazon, Palantir and several other major American tech firms.

Microsoft said the Israeli military, like any other customer, was bound to follow the company's Acceptable Use Policy and AI Code of Conduct, which prohibit the use of products to inflict harm in any way prohibited by law. In its statement, the company said it had found 'no evidence' the Israeli military had violated those terms.

Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, said the statement is noteworthy because few commercial technology companies have so clearly laid out standards for working globally with international governments. 'We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict,' she said. 'It's like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world.'

Israel has used its vast trove of intelligence to both target Islamic militants and conduct raids into Gaza seeking to rescue hostages, with civilians often caught in the crossfire.
For example, a February 2024 operation that freed two Israeli hostages in Rafah resulted in the deaths of 60 Palestinians. A June 2024 raid in the Nuseirat refugee camp freed four Israeli hostages from Hamas captivity but resulted in the deaths of at least 274 Palestinians. Overall, Israel's invasions and extensive bombing campaigns in Gaza and Lebanon have resulted in the deaths of more than 50,000 people, many of them women and children.

No Azure for Apartheid, a group of current and former Microsoft employees, called on Friday for the company to publicly release a full copy of the investigative report. 'It's very clear that their intention with this statement is not to actually address their worker concerns, but rather to make a PR stunt to whitewash their image that has been tarnished by their relationship with the Israeli military,' said Hossam Nasr, a former Microsoft worker fired in October after he helped organize an unauthorized vigil at the company's headquarters for Palestinians killed in Gaza.

Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft on Friday for taking a step toward transparency. But she said the statement raised many unanswered questions, including details about how Microsoft's services and AI models were being used by the Israeli military on its own government servers. 'I'm glad there's a little bit of transparency here,' said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. 'But it is hard to square that with what's actually happening on the ground.'

Burke reported from San Francisco and Mednick from Jerusalem.

Google Worried It Couldn't Control How Israel Uses Project Nimbus, Files Reveal

The Intercept

12-05-2025

  • Business
  • The Intercept

Google Worried It Couldn't Control How Israel Uses Project Nimbus, Files Reveal

Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn't control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals.

The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations and wartime atrocities. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel's use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google's deals with other nations.

A third-party consultant Google hired to vet the deal recommended that the company withhold machine learning and artificial intelligence tools from Israel because of these risk factors. Three international law experts who spoke with The Intercept said that Google's awareness of the risks, and its foreknowledge that it could not conduct standard due diligence, may pose legal liability for the company. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza, with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses.

'They're aware of the risk that their products might be used for rights violations,' said León Castellanos-Jankiewicz, a lawyer with the Asser Institute for International and European Law in The Hague, who reviewed portions of the report. 'At the same time, they will have limited ability to identify and ultimately mitigate these risks.'
Google declined to answer any of a list of detailed questions sent by The Intercept about the company's visibility into Israel's use of its services or what control it has over Project Nimbus. Company spokesperson Denise Duffy-Parkes instead responded with a verbatim copy of a statement that Google provided for a different article last year: 'We've been very clear about the Nimbus contract, what it's directed to, and the Terms of Service and Acceptable Use Policy that govern it. Nothing has changed.'

Portions of the internal document were first reported by the New York Times, but Google's acknowledged inability to oversee Israel's usage of its tools has not previously been disclosed.

In January 2021, just three months before Google won the Nimbus contract alongside Amazon, the company's cloud computing executives faced a dilemma. The Project Nimbus contract — then code-named 'Selenite' at Google — was a clear moneymaker. According to the report, which provides an assessment of the risks and rewards of the venture, Google estimated a bespoke cloud data center for Israel, subject to Israeli sovereignty and law, could reap $3.3 billion between 2023 and 2027, not only by selling to Israel's military but also to its financial sector and corporations like pharmaceutical giant Teva.

But given decades of transgressions against international law by the Israeli military and intelligence forces it was now supplying, the company acknowledged that the deal was not without peril. 'Google Cloud Services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank,' resulting in 'reputation harm,' the company warned.

In the report, Google acknowledged the urgency of mitigating these risks, both to the human rights of Palestinians and to Google's public image, through due diligence and enforcement of the company's terms of service, which forbid certain acts of destruction and criminality.
But the report makes clear a profound obstacle to any attempt at oversight: The Project Nimbus contract is written in such a way that Google would be largely kept in the dark about what exactly its customer was up to, and should any abuses ever come to light, obstructed from doing anything about them.

The document lays out the limitations in stark terms. Google would only be given 'very limited visibility' into how its software would be used. The company was 'not permitted to restrict the types of services and information that the Government (including the Ministry of Defense and Israeli Security Agency) chooses to migrate' to the cloud. Attempts to prevent Israeli military or spy agencies from using Google Cloud in ways damaging to Google 'may be constrained by the terms of the tender, as Customers are entitled to use services for any reason except violation of applicable law to the Customer,' the document says.

A later section of the report notes Project Nimbus would be under the exclusive legal jurisdiction of Israel, which, like the United States, is not a party to the Rome Statute and does not recognize the International Criminal Court. Should Project Nimbus fall under legal scrutiny outside of Israel, Google is required to notify the Israeli government as early as possible, and must 'Reject, Appeal, and Resist Foreign Government Access Requests.' Google noted this could put the company at odds with foreign governments should they attempt to investigate Project Nimbus.

The contract requires Google to 'implement bespoke and strict processes to protect sensitive Government data,' according to a subsequent internal report, also viewed by The Intercept, that was drafted after the company won its bid.
This obligation would stand even if it means violating the law: 'Google must not respond to law enforcement disclosure requests without consultation and in some cases approval from the Israeli authorities, which could cause us to breach international legal orders / law.'

The second report notes another onerous condition of the Nimbus deal: Israel 'can extend the contract up to 23 years, with limited ability for Google to walk away.' The initial report notes that Google Cloud chief Thomas Kurian would personally approve the contract with full understanding and acceptance of these risks before the company submitted its contract proposal. Google did not make Kurian available for comment.

Business for Social Responsibility, a human rights consultancy tapped by Google to vet the deal, recommended the company withhold machine learning and AI technologies specifically from the Israeli military in order to reduce potential harms, the document notes. It's unclear how the company could have heeded this advice considering the limitations in the contract. The Intercept in 2022 reported that Google Cloud's full suite of AI tools was made available to Israeli state customers, including the Ministry of Defense. BSR did not respond to a request for comment.

The first internal Google report makes clear that the company worried how Israel might use its technology: 'If Google Cloud moves forward with the tender, we recommend the business secure additional assurances to avoid Google Cloud services being used for, or linked to, the facilitation of human rights violations.' It's unclear if such assurances were ever offered.

Google has long defended Project Nimbus by stating that the contract 'is not directed at highly sensitive, classified or military workloads relevant to weapons or intelligence services.' The internal materials note that Project Nimbus will entail nonclassified workloads from both the Ministry of Defense and Shin Bet, the country's rough equivalent of the FBI.
Classified workloads, one report states, will be handled by a second, separate contract code-named 'Natrolite.' Google did not respond when asked about its involvement in the classified Natrolite project.

Both documents spell out that Project Nimbus entails a deep collaboration between Google and the Israeli security state through the creation of a Classified Team within Google. This team is made up of Israeli nationals within the company who hold security clearances, designed to 'receive information by [Israel] that cannot be shared with [Google].' Google's Classified Team 'will participate in specialized training with government security agencies,' the first report states, as well as 'joint drills and scenarios tailored to specific threats.'

The level of cooperation between Google and the Israeli security state appears to have been unprecedented at the time of the report. 'The sensitivity of the information shared, and general working model for providing it to a government agency, is not currently provided to any country by GCP,' the first document says.

Whether Google could ever pull the plug on Nimbus for violating the company's rules or the law is unclear. The company has claimed to The Intercept and other outlets that Project Nimbus is subject to its standard terms of use, like any other Google Cloud customer. But Israeli government documents contradict this, showing the use of Project Nimbus services is constrained not by Google's normal terms but by a secret amended policy. A spokesperson for the Israeli Ministry of Finance confirmed to The Intercept that the amended Project Nimbus terms of use are confidential.

Shortly after Google won the Nimbus contract, an attorney from the Israeli Ministry of Finance, which oversaw the deal, was asked by reporters if the company could ever terminate service to the government. 'According to the tender requirements, the answer is no,' he replied.
In its statement, Google points to a separate set of rules, its Acceptable Use Policy, that it says Israel must abide by. These rules prohibit actions that 'violate or encourage the violation of the legal rights of others.' But the follow-up internal report suggests this Acceptable Use Policy is geared toward blocking illegal content like sexual imagery or computer viruses, not thwarting human rights abuses. Before the government agreed to abide by the AUP, Google wrote there was a 'relatively low risk' of Israel violating the policy 'as the Israel government should not be posting harmful content itself.' The second internal report also says that 'if there is a conflict between Google's terms' and the government's requirements, 'which are extensive and often ambiguous,' then 'they will be interpreted in the way which is the most advantageous to the customer.'

International law is murky when it comes to the liability Google could face for supplying software to a government widely accused of committing a genocide and responsible for the occupation of the West Bank that is near-universally considered illegal. Legal culpability grows more ambiguous the farther you get from the actual act of killing. Google doesn't furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons.

Under international law, only countries, not corporations, have binding human rights obligations. But if Project Nimbus were to be tied directly to the facilitation of a war crime or other crime against humanity, Google executives could hypothetically face criminal liability under customary international law or through a body like the ICC, which has jurisdiction in both the West Bank and Gaza. Civil lawsuits are another option: Castellanos-Jankiewicz imagined a scenario in which a hypothetical plaintiff with access to the U.S. court system could sue Google over Project Nimbus for monetary damages, for example.

Along with its work for the Israeli military, Google through Project Nimbus sells cloud services to Israel Aerospace Industries, the state-owned weapons maker whose munitions have helped devastate Gaza. Another confirmed Project Nimbus customer is the Israel Land Authority, a state agency that among other responsibilities distributes parcels of land in the illegally annexed and occupied West Bank.

An October 2024 judicial opinion issued by the International Court of Justice, which arbitrates disputes between United Nations member states, urged countries to 'take all reasonable measures' to prevent corporations from doing anything that might aid the illegal occupation of the West Bank. While nonbinding, 'The advisory opinions of the International Court of Justice are generally perceived to be quite authoritative,' Ioannis Kalpouzos, a visiting professor at Harvard Law and an expert on human rights law and the laws of war, told The Intercept.

Establishing Google's legal culpability in connection with the occupation of the West Bank or the ongoing killing in Gaza entails a complex legal calculus, experts explained, hinging on the extent of its knowledge about how its products would be used (or abused), the foreseeability of crimes facilitated by those products, and how directly they contributed to the perpetration of the crimes. 'Both the very existence of the document and the language used suggest at least the awareness of the likelihood of violations,' Kalpouzos said.

While there have been a few instances of corporate executives facing local criminal charges in connection with human rights atrocities, liability stemming from a civil lawsuit is more likely, said Castellanos-Jankiewicz.
A hypothetical plaintiff might have a case if they could demonstrate that 'Google knew or should have known that there was a risk that this software was going to be used or is being used,' he explained, 'in the commission of serious human rights violations, war crimes, crimes against humanity, or genocide.' Getting their day in court before an American judge, however, would be another matter. The 1789 Alien Tort Statute allows federal courts in the United States to take on lawsuits by foreign nationals regarding alleged violations of international law, but it has been narrowed considerably over the years, and whether U.S. corporations could even be sued under the statute in the first place remains undecided.

History has seen scant few examples of corporate accountability in connection with crimes against humanity. In 2004, IBM Germany donated $4 million to a Holocaust reparations fund in connection with its wartime role supplying computing services to the Third Reich. In the early 2000s, plaintiffs in the U.S. sued dozens of multinational corporations for their work with apartheid South Africa, including the sale of 'essential tools and services,' Castellanos-Jankiewicz told The Intercept, though these suits were thrown out following a 2016 Supreme Court decision. Most recently, Lafarge, a French cement company, pleaded guilty in both the U.S. and France following criminal investigations into its business in ISIS-controlled Syria.

There is essentially no legal precedent as to whether the provision of software to a military committing atrocities makes the software company complicit in those acts. For any court potentially reviewing this, an important legal standard, Castellanos-Jankiewicz said, is whether 'Google knew or should have known that its equipment, that its software, was being either used to commit the atrocities or enabling the commission of the atrocities.'
The Nimbus deal was inked before Hamas attacked Israel on October 7, 2023, igniting a war that has killed tens of thousands of civilians and reduced Gaza to rubble. But that doesn't mean the company wouldn't face scrutiny for continuing to provide service. 'If the risk of misuse of a technology grows over time, the company needs to react accordingly,' said Andreas Schüller, co-director of the international crimes and accountability program at the European Center for Constitutional and Human Rights. 'Ignorance and an omission of any form of reaction to an increasing risk in connection with the use of the product leads to a higher liability risk for the company.'

Though corporations are generally exempt from human rights obligations under international frameworks, Google says it adheres to the United Nations Guiding Principles on Business and Human Rights. The document, while voluntary and not legally binding, lays out an array of practices multinational corporations should follow to avoid culpability in human rights violations. Among these corporate responsibilities is 'assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses, and communicating how impacts are addressed.'

The board of directors at Alphabet, Google's parent entity, recently recommended voting against a shareholder proposal to conduct an independent third-party audit of the processes the company uses 'to determine whether customers' use of products and services for surveillance, censorship, and/or military purposes contributes to human rights harms in conflict-affected and high-risk areas.' The proposal cites, among other risk areas, the Project Nimbus contract. In rejecting the proposal, the board touted its existing human rights oversight processes and cited the U.N. Guiding Principles and Google's 'AI Principles' as reasons no further oversight is necessary.
In February, Google amended this latter document to remove prohibitions against weapons and surveillance. 'The UN guiding principles, plain and simple, require companies to conduct due diligence,' said Castellanos-Jankiewicz. 'Google acknowledging that they will not be able to conduct these screenings periodically flies against the whole idea of due diligence. It sounds like Google is giving the Israeli military a blank check to basically use their technology for whatever they want.'

GameStop Bought Bitcoin. Now Strive's CEO Wants Intuit In Too

Globe and Mail

16-04-2025

  • Business
  • Globe and Mail

GameStop Bought Bitcoin. Now Strive's CEO Wants Intuit In Too

Strive Asset Management isn't letting up. After convincing GameStop (GME) to embrace Bitcoin and issue a $1.5 billion convertible note, CEO Matt Cole has now set his sights on financial software giant Intuit (INTU). In a bold new letter, Cole called out the company for what he describes as anti-Bitcoin censorship — and urged it to adopt Bitcoin as a strategic hedge.

Strive Slams Mailchimp's Bitcoin Censorship

At the heart of the dustup is Intuit's email service Mailchimp. According to Cole's letter, the platform recently deactivated the University of Southern California's Trojan Bitcoin Club account for simply mentioning Bitcoin. Mailchimp reinstated the account after backlash, but Strive says the damage was done.

'We are concerned that Intuit's censorship policies and anti-bitcoin bias threaten to destroy the shareholder value the company has worked so hard to create,' Cole wrote in the April 14 letter, shared publicly. He accused the company of using its Acceptable Use Policy 'as a political weapon' and warned that such actions could invite legal and reputational risk.

Strive Urges Intuit to Add Bitcoin to Treasury

Cole didn't stop at content policies. He urged Intuit to consider adding Bitcoin to its corporate balance sheet, citing the risk that AI could eventually disrupt core products like TurboTax and QuickBooks. 'TurboTax has a high risk of being automated away by AI,' Cole said. 'A Bitcoin war chest is the best option available.'

We've seen this strategy before. In February, Strive issued a similar call to GameStop — and the company responded. It confirmed plans to hold Bitcoin and followed up with a $1.5 billion convertible note offering, making headlines as one of the first major U.S. retailers to adopt a Bitcoin treasury strategy.
Bitcoin Talk Sends Stocks Soaring

This strategy is starting to move markets. Earlier this month, Hong Kong-based HK Asia Holdings (HK:1723) saw its stock nearly double in a single day after it said it was considering a Bitcoin purchase. For Cole, that's proof: companies that signal openness to Bitcoin are being rewarded.

Just look at Michael Saylor's Strategy (formerly MicroStrategy) (MSTR). The company now holds over 531,000 Bitcoin, worth more than $36 billion, and it's become practically synonymous with the corporate Bitcoin standard. Saylor's aggressive strategy turned Strategy into a Bitcoin proxy on public markets. Cole's push at Intuit reads like the next chapter in the same thesis: add Bitcoin, shift perception, and position for long-term upside. Now it's Intuit's turn to decide if it follows suit.

Is INTU Stock a Good Buy?

Analysts remain bullish on INTU stock, with a Strong Buy consensus rating based on 18 Buys and two Holds. Over the past year, INTU has decreased by 2%, and the average INTU price target of $718.17 implies an upside potential of 21% from current levels.

GameStop Did It. Now Strive's Matt Cole Wants Intuit to Back Bitcoin Too

Yahoo

16-04-2025

  • Business
  • Yahoo

GameStop Did It. Now Strive's Matt Cole Wants Intuit to Back Bitcoin Too

Matt Cole, CEO of Strive Asset Management, fresh from persuading video game retailer GameStop to convert some of its cash reserve into bitcoin (BTC), wrote to urge financial software developer Intuit (INTU) to reverse what he described as "censorship policies" and an 'anti-bitcoin bias' that could jeopardize long-term shareholder value.

In an open letter dated April 14 addressed to Intuit CEO Sasan Goodarzi and board Chair Susan Nora Johnson, Cole pointed to a recent incident in which Intuit's Mailchimp email marketing platform disabled the account of the Trojan Bitcoin Club, a student organization at the University of Southern California, for mentioning the cryptocurrency in emails to its members.

'We are concerned that Intuit's censorship policies and anti-bitcoin bias threaten to destroy the shareholder value the company has worked so hard to create,' Cole wrote, saying he was writing on behalf of his clients, who include Intuit shareholders. Although Mailchimp later reinstated the account following public pressure, Cole said the episode reflects a 'broader pattern of deplatforming' that includes bitcoin developers, educators, and businesses.

Cole said such actions expose Intuit, known for its TurboTax tax preparation software and QuickBooks accounting software, to reputational and legal risks, particularly as public concern around tech censorship grows and federal regulators — including the Federal Trade Commission (FTC) — begin investigating platform discrimination based on speech or affiliations. 'Mailchimp's Acceptable Use Policy is being used as a political weapon, rather than a tool to mitigate legitimate business risk,' Cole wrote, adding that 'customers and shareholders alike are starting to question whether Intuit is making decisions based on ideology rather than fiduciary duty.'

The letter called on Intuit to reinstate accounts banned for bitcoin-related content and to revise Mailchimp's content policies to eliminate political considerations.
It also urged Intuit to consider adding bitcoin to its corporate treasury as a hedge against artificial intelligence disruption. 'We believe TurboTax, Intuit's flagship product, has a high risk of being automated away by AI,' Cole wrote. 'While we appreciate Intuit's investments in AI internally, we believe an additional hedge is warranted—and that a bitcoin war chest is the best option available.'

The push follows Cole's February letter to GameStop, in which he urged the company to convert its $5 billion cash reserve into bitcoin. Since receiving the letter, GameStop has confirmed that it will add bitcoin to its balance sheet and has completed a $1.5 billion convertible note offering — positioning itself as one of the first major retailers to align its treasury strategy with what Strive called the 'Bitcoin standard.' The move marked a significant early win for Strive's broader campaign to reshape corporate finance and governance around what Cole describes as 'apolitical excellence' and long-term shareholder value, free from ideological agendas.
