
Latest news with #Altman

Jony Ive, Described By Sam Altman As 'The Greatest Designer In The World' Joins OpenAI In $6.5B Deal To Build 'A Totally New Kind Of Thing'

Yahoo

2 hours ago

OpenAI announced on May 21 that it had made a significant move by acquiring io, the AI hardware startup co-founded by renowned designer Jony Ive. The deal is valued at approximately $6.5 billion and brings Ive and a team of former Apple (NASDAQ:AAPL) engineers into OpenAI's fold to spearhead a new division focused on building screenless, AI-powered hardware devices, Bloomberg reports. According to OpenAI CEO Sam Altman, the partnership aims to develop 'a totally new kind of thing,' referring to a novel category of computing devices that minimize screen dependency and prioritize seamless interaction with artificial intelligence, Bloomberg says. Ive, who co-founded io in 2024 after launching his independent design firm LoveFrom, will oversee design across all OpenAI initiatives, including both software and hardware interfaces, according to Bloomberg. The io acquisition includes roughly 55 engineers and designers from Apple and LoveFrom, among them Tang Tan, Evans Hankey, and Scott Cannon, all of whom were instrumental in the development of the iPhone, Apple Watch, and other flagship products, Bloomberg reports. Ive's career spans nearly three decades at Apple, where he was responsible for designing the iPhone, iMac, iPod, iPad, and Apple Watch. According to CNBC, the late Steve Jobs described Ive as his 'spiritual partner at Apple' and his 'closest and most loyal friend.' Altman also referred to Ive as "the greatest designer in the world" in a post on X, expressing his excitement about their partnership. Altman said he was "thrilled to be partnering with Jony" and "excited to try to create a new generation of AI-powered computers," signaling a bold step toward reimagining how people interact with artificial intelligence.
The team's mission at OpenAI will be to build a family of AI-native devices that offer new modes of interaction, breaking away from legacy interfaces like smartphone screens and physical keyboards, Bloomberg reports. According to TF International Securities analyst Ming-Chi Kuo in an X post, one prototype involves a display-less wearable device that connects to smartphones and is worn around the neck. According to Bloomberg, the $6.5 billion transaction consists of $5 billion in equity, with the balance covered by OpenAI's previously held 23% stake in io, established in a deal during the fourth quarter of 2024. Investors in io include Laurene Powell Jobs through Emerson Collective, Thrive Capital, Maverick Ventures, Sutter Hill Ventures, and SV Angel. This acquisition marks OpenAI's largest to date and underscores its transition from pure software research to full-stack consumer hardware development. The company's valuation has soared to $300 billion, driven by the success of ChatGPT and related AI systems, Bloomberg says. Meanwhile, Apple continues to face mounting pressure in the AI race. According to Bloomberg, the company's current platform relies in part on OpenAI's models, and its in-house capabilities have been perceived as lagging behind competitors like OpenAI, Google, and Anthropic. With a team of seasoned designers and engineers, OpenAI now holds a unique position to create transformative devices that align with its broader mission of democratizing access to artificial intelligence in everyday life.
This article Jony Ive, Described By Sam Altman As 'The Greatest Designer In The World' Joins OpenAI In $6.5B Deal To Build 'A Totally New Kind Of Thing' originally appeared on Benzinga. © 2025 Benzinga. Benzinga does not provide investment advice. All rights reserved.

Sam Altman and Jony Ive's $6.5B collab tanks Apple stock

The South African

3 hours ago

OpenAI CEO Sam Altman and former Apple chief designer Jony Ive announced their partnership on May 21, a deal both believe will produce revolutionary, AI-powered devices. 'We have the opportunity to completely reimagine what it means to use a computer,' Altman declared in a joint interview with Ive, who later said, 'I am absolutely certain that we are literally on the brink of a new generation of technology that can make us our better selves.' Apple's shares fell 2.3% on the day of the announcement, extending their 17% decline so far this year. Altman and Ive plan to release a range of form factors (computer and phone) by the end of 2026, as well as a mystery, undisclosed design. Altman hinted at the pair's ambitions, saying the venture was 'formed with the mission of figuring out how to create a family of devices that would let people use AI to create all sorts of wonderful things.' The collaboration stems from OpenAI's $6.5 billion acquisition, the largest purchase in the company's history, of Ive's io, co-founded with former Apple executives Tang Tan, Scott Cannon, and Evans Hankey. Together, they assembled a 55-person team of world-leading hardware and software engineers, physicists, researchers, developers, and product manufacturing experts. This acqui-hire gives Altman the expertise to challenge Apple's dominance of the hardware market. Ive's LoveFrom, laden with ex-Apple employees like Bas Ording, Mike Matas, and Chris Wilson, is another addition to the deal. The design firm, which remains independent and boasts high-profile clients such as Ferrari N.V. and Airbnb Inc., will assume design and creative responsibilities across OpenAI and io. As Tim Cook navigates U.S. President Donald Trump's 25% tariff threat, Altman and Ive's partnership raises further concerns over Apple's future market dominance.
Amid a wave of high-level employee resignations, the second-largest company in the world has lagged behind in AI innovation, a gap amplified by several development delays, most recently the beta-embedded iOS 18.4 AI-powered feature, Priority Notifications, initially scheduled for release in April 2025 before being postponed indefinitely. If there's anyone who can challenge Apple, it's Ive, a legendary tech designer Steve Jobs once referred to as his 'spiritual partner.' The British-born designer, responsible for the iPhone, iPad, iPod, iMac, and Apple Watch, led Apple's design team for twenty-seven iconic years (1992-2019). 'I have a growing sense that everything I have learned over the last 30 years has led me to this place and to this moment,' Ive said of his new venture with Altman. With a lineup of industry experts and former Apple veterans at Altman and Ive's disposal, the merger appears equipped to challenge Cook's market share. As Apple stalls in its AI push, OpenAI-io could well leapfrog ahead. Apple has, at best, a year to respond.

Snowflake Summit 2025: CEO Sridhar Ramaswamy and Sam Altman come together to accelerate enterprise AI adoption

Indian Express

4 hours ago

'The true magic of great technology is taking something very complicated and making it feel easy,' said Snowflake CEO Sridhar Ramaswamy in his keynote address at the Snowflake Summit 2025, setting the vision for the company. The first day of the four-day summit taking place in San Francisco was nothing short of electric as thousands of data professionals, tech innovators, and enterprise leaders thronged to attend what Ramaswamy called 'our biggest summit yet'. One of the major highlights of the keynote was OpenAI's Sam Altman joining Ramaswamy for a fireside chat. 'The world's hardest and most ambitious ideas, from personalised medicine based on your genetic data to autonomous factory floors to even virtual shopping experiences, these things aren't science fiction anymore. They can become realities through the power of data,' Ramaswamy told the packed auditorium. His sentiment was reinforced by Altman in what may be one of the most anticipated conversations in enterprise technology this year. When asked about his advice for enterprise leaders navigating the AI landscape in 2025, Altman said, 'Just do it. There's still a lot of hesitancy. The models are changing so fast, and there's reason to wait for the next model. But as a general principle of technology, when things are changing quickly, the companies that have the quickest iteration speed and make the cost of making mistakes the lowest win.' The interaction between the Snowflake and OpenAI CEOs touched on a significant shift in the AI landscape over the past year. Altman acknowledged that his advice to enterprises has evolved dramatically over time. 'I wouldn't quite have said the same thing last year. To a startup last year, yes, but to a big enterprise, I would have said you can experiment a little bit, but this might not be totally ready for production use. That has really changed. Our enterprise business has grown dramatically.'
Building on this sentiment, Ramaswamy emphasised the importance of 'curiosity' in driving AI adoption. 'There's so much that we take for granted about how things used to work, which is not true anymore. OpenAI and Snowflake have made the cost of experimenting very low. You can run lots of little experiments, get value from it and build on that strength.' The CEOs agreed that this shift from experimental to production-ready AI is being demonstrated across industries. During his keynote speech, Ramaswamy highlighted how century-old industrial giant Caterpillar was using Snowflake's AI Data Cloud to create unified views of customer and dealer operations. The company essentially transformed siloed data into real-time insights. Similarly, pharma giant AstraZeneca has been leveraging its data foundation to accelerate productivity and get critical products to patients faster. Another recurring theme throughout the summit has been the relationship between data and AI success. 'There is no AI strategy without a data strategy,' Ramaswamy asserted. 'Data is the fuel for AI, and Snowflake's AI Data Cloud is powered by a connected ecosystem of data.' And this ecosystem approach can be seen from Snowflake's marketplace, which now features over 3,000 listings from over 750 partners, enabling thousands of customers to share data, applications, and models. According to Ramaswamy, Snowflake's recent US Department of Defence (DOD) IL5 authorisation serves as validation of the enterprise-grade trust required for mission-critical AI applications. Perhaps one of the most interesting segments of the fireside chat revolved around AI agents and the path toward artificial general intelligence (AGI). Altman went on to share his recent experience with OpenAI's coding agent Codex. 'The coding agent we just launched has been one of my 'feel AGI' moments. You can give it tasks; it works in the background; it's really quite smart. 
Maybe today it's like an intern that can work for a couple of hours, but at some point, it'll be like an experienced software engineer that can work for days.' When pressed about AGI timelines and definitions, Altman offered a rather pragmatic view. 'If you could go back five years and show someone today's ChatGPT, I think most people would say that's AGI. We're great at adjusting our expectations. The question of what AGI is doesn't matter as much as the rate of progress.' For Altman, the true marker of AGI would be 'a system that can autonomously discover new science or be such an incredible tool that our rate of scientific discovery quadruples.' This vision aligns with Ramaswamy's own ambitious goals, as he referenced the potential for AI to tackle projects that could advance humanity significantly. Throughout the keynote, Ramaswamy emphasised that successful AI implementation came from simplicity. 'Complexity creates risk, complexity creates cost, and complexity creates friction and makes it harder to get the job done. Whereas simplicity drives results.' This philosophy is reflected in Snowflake's approach to product development, where the prime goal is to enable a user to ask a question with a voice memo and get an answer on their enterprise data or even launch a custom app without having to write a line of code. The ongoing summit showcased several examples of AI driving real business value. One of the most compelling examples came from Lynn Martin, President of NYSE Group, who shared how the exchange has scaled from handling 350 billion incoming order messages per day in 2022 to 1.2 trillion messages by April 2025. 'We can't do that without having incredible technology and AI,' Martin explained, highlighting the critical role of data sanctity in powering effective AI systems. Ramaswamy's closing message captured the spirit of the moment: 'This community is here to build what's next together.'
With rapidly advancing AI capabilities, enterprises are finally ready to move from experimentation to production. Snowflake Summit 2025 has positioned itself as a crucial gathering where the future of enterprise AI is being written in real time.

Whose National Security? OpenAI's Vision for American Techno-Dominance

The Intercept

7 hours ago

OpenAI has always said it's a different kind of Big Tech titan, founded not just to rack up a stratospheric valuation of $400 billion (and counting), but also to 'ensure that artificial general intelligence benefits all of humanity.' The meteoric machine-learning firm announced itself to the world in a December 2015 press release that lays out a vision of technology to benefit all people as people, not citizens. There are neither good guys nor adversaries. 'Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole,' the announcement stated with confidence. 'Since our research is free from financial obligations, we can better focus on a positive human impact.' Early rhetoric from the company and its CEO, Sam Altman, described advanced artificial intelligence as a harbinger of a globalist utopia, a technology that wouldn't be walled off by national or corporate boundaries but enjoyed together by the species that birthed it. In an early interview with Altman and fellow OpenAI co-founder Elon Musk, Altman described a vision of artificial intelligence 'freely owned by the world' in common. When Vanity Fair asked in a 2015 interview why the company hadn't set out as a for-profit venture, Altman replied: 'I think that the misaligned incentives there would be suboptimal to the world as a whole.' Times have changed. And OpenAI wants the White House to think it has too. In a March 13 white paper submitted directly to the Trump administration, OpenAI's global affairs chief Chris Lehane pitched a near future of AI built for the explicit purpose of maintaining American hegemony and thwarting the interests of its geopolitical competitors — specifically China. The policy paper's mentions of freedom abound, but the proposal's true byword is national security. OpenAI never attempts to reconcile its full-throated support of American security with its claims to work for the whole planet, not a single country. 
After opening with a quotation from Trump's own executive order on AI, the action plan proposes that the government create a direct line for the AI industry to reach the entire national security community, work with OpenAI 'to develop custom models for national security,' and increase intelligence sharing between industry and spy agencies 'to mitigate national security risks,' namely from China. In the place of techno-globalism, OpenAI outlines a Cold Warrior exhortation to divide the world into camps. OpenAI will ally with those 'countries who prefer to build AI on democratic rails,' and get them to commit to 'deploy AI in line with democratic principles set out by the US government.' The rhetoric seems pulled directly from the keyboard of an 'America First' foreign policy hawk like Marco Rubio or Rep. Mike Gallagher, not a company whose website still endorses the goal of lifting up the whole world. The word 'humanity,' in fact, never appears in the action plan. Rather, the plan asks Trump, to whom Altman donated $1 million for his inauguration ceremony, to 'ensure that American-led AI prevails over CCP-led AI' — the Chinese Communist Party — 'securing both American leadership on AI and a brighter future for all Americans.' It's an inherently nationalist pitch: The concepts of 'democratic values' and 'democratic infrastructure' are both left largely undefined beyond their American-ness. What is democratic AI? American AI. What is American AI? The AI of freedom. And regulation of any kind, of course, 'may hinder our economic competitiveness and undermine our national security,' Lehane writes, suggesting a total merging of corporate and national interests. In an emailed statement, OpenAI spokesperson Liz Bourgeois declined to explain the company's nationalist pivot but defended its national security work. 'We believe working closely with the U.S. government is critical to advancing our mission of ensuring AGI benefits all of humanity,' Bourgeois wrote. 'The U.S. 
is uniquely positioned to help shape global norms around safe, secure, and broadly beneficial AI development—rooted in democratic values and international collaboration.' The Intercept is currently suing OpenAI in federal court over the company's use of copyrighted articles to train its chatbot ChatGPT. OpenAI's newfound patriotism is loud. But is it real? In his 2015 interview with Musk, Altman spoke of artificial intelligence as a technology so special and so powerful that it ought to transcend national considerations. Pressed on OpenAI's goal to share artificial intelligence technology globally rather than keeping it under domestic control, Altman provided an answer far more ambivalent than the company's current-day mega-patriotism: 'If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who?' He also said, in the early days of OpenAI, that there may be limits to what his company might do for his country. 'I unabashedly love this country, which is the greatest country in the world,' Altman told the New Yorker in 2016. 'But some things we will never do with the Department of Defense.' In the profile, he expressed ambivalence about overtures to OpenAI from then-Secretary of Defense Ashton Carter, who envisioned using the company's tools for targeting purposes. At the time, this would have run afoul of the company's own ethical guidelines, which for years stated explicitly that customers could not use its services for 'military and warfare' purposes, writing off any Pentagon contracting entirely. In January 2024, The Intercept reported that OpenAI had deleted this military contracting ban from its policies without explanation or announcement. Asked about how the policy reversal might affect business with other countries in an interview with Bloomberg, OpenAI executive Anna Makanju said the company is 'focused on United States national security agencies.'
But insiders who spoke with The Intercept on conditions of anonymity suggested that the company's turn to jingoism may come more from opportunism than patriotism. Though Altman has long been on the record as endorsing corporate support of the United States, under an administration where the personal favor of the president means far more than the will of lawmakers, parroting muscular foreign policy rhetoric is good for business. One OpenAI source who spoke with The Intercept recalled concerned discussions about the possibility that the U.S. government would nationalize the company. They said that at times, this was discussed with the company's head of national security partnerships, Katrina Mulligan. Mulligan joined the company in February 2024 after a career in the U.S. intelligence and military establishment, including leading the media and public policy response to Edward Snowden's leaks while on the Obama National Security Council staff, working for the director of national intelligence, serving as a senior civilian overseeing Special Operations forces in the Pentagon, and working as chief of staff to the secretary of the Army. This source speculated that fostering closeness with the government was one method of fending off the potential risk of nationalization. As an independent research organization with ostensibly noble, global goals, OpenAI may have been less equipped to beat back regulatory intervention, a second former OpenAI employee suggested. What we see now, they said, is the company 'transitioning from presenting themselves as a nonprofit with very altruistic, pro-humanity aims, to presenting themselves as an economic and military powerhouse that the government needs to support, shelter, and cut red tape on behalf of.' The second source said they believed the national security rhetoric was indicative of OpenAI 'sucking up to the administration,' not a genuinely held commitment by executives. 
'In terms of how decisions were actually made, what seemed to be the deciding factor was basically how can OpenAI win the race rather than anything to do with either humanity or national security,' they added. 'In today's political environment, it's a winning move with the administration to talk about America winning and national security and stuff like that. But you should not confuse that for the actual thing that's driving decision-making internally.' The person said that talk of preventing Chinese dominance over artificial intelligence likely reflects business, not political, anxieties. 'I think that's not their goal,' they said. 'I think their goal is to maintain their own control over the most powerful stuff.' But even if its motivations are cynical, company sources told The Intercept that national security considerations still pervaded OpenAI. The first source recalled a member of OpenAI's corporate security team regularly engaging with the U.S. intelligence community to safeguard the company's ultra-valuable machine-learning models. The second recalled concern about the extent of the government's relationship with — and potential control over — OpenAI's technology. A common fear among AI safety researchers is a future scenario in which artificial intelligence models begin autonomously designing newer versions, ad infinitum, leading human engineers to lose control. 'One reason why the military AI angle could be bad for safety is that you end up getting the same sort of thing with AIs designing successors designing successors, except that it's happening in a military black project instead of in a somewhat more transparent corporation,' the second source said.
'Occasionally there'd be talk of, like, eventually the government will wake up, and there'll be a nuclear power plant next to a data center next to a bunker, and we'll all be moved into the bunker so that we can, like, beat China by managing an intelligence explosion,' they added. At a company that recruits top engineering talent internationally, the prospect of American dominance of a technology they believe could be cataclysmic was at times disquieting. 'I remember I also talked to some people who work at OpenAI who weren't from the U.S. who were feeling kind of sad about that and being like, 'What's going to happen to my country after the U.S. gets all the super intelligences?'' Sincerity aside, OpenAI has spent the past year training its corporate algorithm on flag-waving, defense lobbying, and a strident anticommunism that smacks more of the John Birch Society than the Whole Earth Catalog. In his white paper, Lehane, a former press secretary for Vice President Al Gore and special counsel to President Bill Clinton, advocates not for a globalist techno-utopia in which artificial intelligence jointly benefits the world, but a benevolent jingoism in which freedom and prosperity is underwritten by the guarantee of American dominance. While the document notes fleetingly, in its very last line, the idea of 'work toward AI that benefits everyone,' the pitch is not one of true global benefit, but of American prosperity that trickles down to its allies. The company proposes strict rules walling off parts of the world, namely China, from AI's benefits, on the grounds that they are simply too dangerous to be trusted. OpenAI explicitly advocates for conceiving of the AI market not as an international one, but 'the entire world less the PRC' — the People's Republic of China — 'and its few allies,' a line that quietly excludes over 1 billion people from the humanity the company says it wishes to benefit and millions who live under U.S.-allied authoritarian rule. 
In pursuit of 'democratic values,' OpenAI proposes dividing the entire planet into three tiers. At the top: 'Countries that commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens could be considered Tier I countries.' Given the earlier mention of building 'AI in line with democratic principles set out by the US government,' this group's membership is clear: the United States and its friends. Beneath them are Tier 2 countries, a geopolitical purgatory defined only as those that have failed to sufficiently enforce American export control policies and protect American intellectual property from Tier 3: Communist China. 'CCP-led China, along with a small cohort of countries aligned with the CCP, would represent its own category that is prohibited from accessing democratic AI systems,' the paper explains. To keep these barriers intact — while allowing for the chance that Tier 2 countries might someday graduate to the top — OpenAI suggests coordinating 'global bans on CCP-aligned AI' and 'prohibiting relationships' between other countries and China's military or intelligence services. One of the former OpenAI employees said concern about China at times circulated throughout the company. 'Definitely concerns about espionage came up,' this source said, 'including 'Are particular people who work at the company spies or agents?'' At one point, they said, a colleague worried about a specific co-worker they'd learned was the child of a Chinese government official. The source recalled 'some people being very upset about the implication' that the company had been infiltrated by foreigners, while others wanted an actual answer: ''Is anyone who works at the company a spy or foreign agent?'' The company's public adoration of Western democracy is not without wrinkles.
In early May, OpenAI announced an initiative to build data centers and customized ChatGPT bots with foreign governments, as part of its $500 billion 'Project Stargate' AI infrastructure construction blitz. 'This is a moment when we need to act to support countries around the world that would prefer to build on democratic AI rails, and provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power,' the announcement read. Unmentioned in that celebration of AI democracy is the fact that Project Stargate's financial backers include the government of Abu Dhabi, an absolute monarchy. On May 23, Altman tweeted that it was 'great to work with the UAE' on Stargate, describing co-investor and Emirati national security adviser Tahnoun bin Zayed Al Nahyan as a 'great supporter of openai, a true believer in AGI, and a dear personal friend.' In 2019, Reuters revealed how a team of mercenary hackers working for Emirati intelligence under Tahnoun had illegally broken into the devices of targets around the world, including American citizens. Asked how a close partnership with an authoritarian Emirati autocracy fit into its broader mission of spreading democratic values, OpenAI pointed to a recent op-ed in The Hill in which Lehane discusses the partnership. 'We're working closely with American officials to ensure our international partnerships meet the highest standards of security and compliance,' Lehane writes, adding, 'Authoritarian regimes would be excluded.' OpenAI's new direction has been reflected in its hiring. Since hiring Mulligan, the company has continued to expand its D.C. operation. Mulligan works on national security policy with a team of former Department of Defense, NSA, CIA, and Special Operations personnel. Gabrielle Tarini joined the company after almost two years at the Defense Department, where she worked on 'Indo-Pacific security affairs' and 'China policy,' according to LinkedIn. 
Sasha Baker, who runs national security policy, joined after years at the National Security Council and Pentagon. The list goes on: other policy team hires at OpenAI include veterans of the NSA, a former Pentagon special operations and South China Sea expert, and a graduate of the CIA's Sherman Kent School for Intelligence Analysis. OpenAI's military and intelligence revolving door continues to turn: at the end of April, the company recruited Alexis Bonnell, the former chief information officer of the Air Force Research Laboratory. Recent job openings have included a 'Relationship Manager' focusing on 'strategic relationships with U.S. government customers.' Mulligan, the head of national security policy and partnerships, is both deeply connected to the defense and intelligence apparatus and adept at the kind of ethically ambivalent thinking common to the tech sector. 'Not everything that has happened at Guantanamo Bay is to be praised, that's for sure, but [Khalid Sheikh Mohammed] admitting to his crimes, even all these years later, is a big moment for many (including me),' she posted last year. In a March podcast appearance, Mulligan noted she worked on 'Gitmo rendition, detention, and interrogation' during her time in government. Mulligan's public rhetoric matches the ideological drift of a company that today seems more concerned with 'competition' and 'adversaries' than kumbaya globalism. On LinkedIn, she seems to embody the contradiction between a global mission and full-throated alignment with American policy values. 'I'm excited to be joining OpenAI to help them ensure that AI is safe and beneficial to all of humanity,' she wrote upon her hiring from the Pentagon.
Since then, she has regularly represented OpenAI's interests and American interests as one and the same, sharing national security truisms such as 'In a competition with China, the pace of AI adoption matters,' 'The United States' continued lead on AI is essential to our national security and economic competitiveness,' and 'Congress needs to make some decisive investments to ensure the U.S. national security community has the resources to harness the advantage the U.S. has on this technology.' This is to some extent the conventional wisdom of the country's past 100 years: a strong, powerful America is good for the whole world. But OpenAI has shifted from an organization that believed its tech would lift up the whole world, unbounded by national borders, to one that talks like Lockheed Martin.

Part of OpenAI's national security realignment has come in the form of occasional 'disruption' reports detailing how the company detected and neutralized 'malicious use' of its tools by foreign governments, coincidentally almost all of them considered adversaries of the United States. As the provider of services like ChatGPT, OpenAI has near-total visibility into how its tools are used or misused by individuals, what the company describes in one report as its 'unique vantage point.' The reports detail not only how these governments attempted to use ChatGPT, but also the steps OpenAI took to thwart them, described by the company as an 'effort to support broader efforts by U.S. and allied governments.' Each report has focused almost entirely on malign AI uses by 'state affiliated' actors from Iran, China, North Korea, and Russia. A May 2024 report outed an Israeli propaganda effort using ChatGPT but stopped short of connecting it to that country's government.

Earlier this month, representatives of intelligence agencies and the contractors who serve them gathered at the America's Center Convention Complex in St. Louis for the GEOINT Symposium, dedicated to geospatial intelligence, the tradecraft of analyzing satellite and other imagery of the planet to achieve military and intelligence objectives. On May 20, Mulligan took to the stage to demonstrate how OpenAI's services could help U.S. spy agencies and the Pentagon better exploit imagery of the Earth's surface. Though the government's practice of GEOINT frequently ends in the act of killing, Mulligan used a gentler example, demonstrating ChatGPT's ability to pinpoint the location where a photograph of a rabbit was taken.

It was nothing if not a sales pitch, one predicated on the fear that some other country might leap at the opportunity before the United States. 'Government often feels like using AI is too risky and that it's better and safer to keep doing things the way that we've always done them, and I think this is the most dangerous mix of all,' Mulligan told her audience. 'If we keep doing things the way that we always have, and our adversaries adapt to this technology before we do, they will have all of the advantages that I show you today, and we will not be safer.'

OpenAI CEO Sam Altman says AI can help ‘discover new knowledge': ‘I would bet next year that…'
Time of India · 9 hours ago

OpenAI CEO Sam Altman said that AI is no longer just a helper; he believes it is evolving from tool to teammate. Very soon, he added, it might even help humans discover new knowledge and solve complex problems, not just assist with simple tasks.

During a conversation at the Snowflake Summit 2025, the OpenAI CEO explained that people are now using AI agents (like ChatGPT) in ways similar to junior employees. These agents can be given tasks, produce work, receive feedback, and improve, much like a team of entry-level coworkers. 'You hear people that talk about their job now is to assign work to a bunch of agents, look at the quality, figure out how it fits together, give feedback, and it sounds a lot like how they work with a team of still relatively junior employees,' Altman said.

Altman also predicted that by next year (2026), some AI agents will go beyond simply helping: they might 'discover new knowledge' or come up with non-trivial solutions to business problems. 'I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge, or can figure out solutions to business problems that are kind of very non-trivial,' he said.

OpenAI co-founder wanted a 'doomsday bunker'

In related news, former OpenAI chief scientist and co-founder Ilya Sutskever told his research team in 2023 that the company would need to build a protective bunker, often known as a 'doomsday bunker,' before releasing artificial general intelligence (AGI), according to new revelations from an upcoming book about the AI company's internal turmoil.
"We're definitely going to build a bunker before we release AGI," Sutskever declared during a 2023 meeting with OpenAI scientists, months before his departure from the company. When pressed about the seriousness of his proposal, he assured colleagues that bunker entry would be "optional."

The disclosure comes from excerpts of "Empire of AI," a forthcoming book by former Wall Street Journal correspondent Karen Hao based on interviews with 90 current and former OpenAI employees. The book details the dramatic November 2023 boardroom coup that briefly ousted CEO Sam Altman, with Sutskever playing a central role in the failed takeover.
