$3 billion acquisition that has triggered all-time high tension between Microsoft and ChatGPT-maker OpenAI

Time of India, 5 hours ago

OpenAI and Microsoft are locked in their most heated dispute yet over the AI startup's $3 billion acquisition of coding company Windsurf, with tensions escalating to the point where OpenAI executives have discussed filing antitrust complaints against their longtime partner, according to sources familiar with the matter.
The standoff centers on Microsoft's current access to all of OpenAI's intellectual property under their existing agreement. OpenAI wants to block Microsoft from accessing Windsurf's technology, particularly as Microsoft offers its own competing AI coding product, GitHub Copilot, the Wall Street Journal reported.
Partnership at breaking point as conversion deadline looms
The dispute has grown so intense that OpenAI has considered what insiders describe as a "nuclear option": accusing Microsoft of anticompetitive behavior and seeking federal regulatory review of their contract terms.
Such a move would threaten to unravel one of tech's most celebrated partnerships.
The companies are simultaneously battling over OpenAI's conversion to a for-profit structure, which must be completed by year-end or the startup risks losing $20 billion in funding. Microsoft is demanding a larger ownership stake in the converted company than OpenAI is willing to provide.
Under their current deal, Microsoft has exclusive rights to sell OpenAI's software through its Azure cloud platform and serves as the company's primary compute provider.
OpenAI now wants to partner with other cloud providers to expand its customer base and access additional computing resources.
The relationship has grown increasingly strained as both companies have evolved from partners into direct competitors across consumer chatbots and business AI tools. Last year, Microsoft CEO Satya Nadella even hired a rival of OpenAI CEO Sam Altman to secretly develop competing AI models.
Despite the tensions, both companies issued a joint statement calling their partnership "long-term" and "productive," saying talks remain ongoing and expressing optimism about continuing to "build together for years to come."


Related Articles

How India Inc. can reduce energy demand with edge computing
Time of India, an hour ago

Recently, AI-generated, Ghibli-style images took the internet by storm. But behind their charm lies an invisible environmental cost: water and energy consumption. Even Sam Altman, CEO of OpenAI, acknowledged the toll, tweeting: "It's super fun seeing people love images in ChatGPT… But our GPUs are melting."

As artificial intelligence continues to evolve, the energy demands of its infrastructure are becoming a growing concern. Traditional AI relies heavily on massive, centralized data centers operating round-the-clock. These facilities, packed with thousands of servers running complex computations, also consume enormous energy for cooling to prevent overheating. Currently, data centers account for roughly 2% of global electricity use, a number poised to rise as AI models become more complex. For perspective, training a single advanced model like GPT-3 can use as much electricity as several hundred homes consume in a year.

So, the million-dollar question is: how can we continue to harness AI's potential while curbing its environmental impact? One of the most promising answers lies in edge computing. Edge computing processes data closer to where it is generated, on devices such as smartphones, IoT sensors, and embedded systems, rather than routing everything through centralized cloud data centers. This shift cuts down on transmission energy and reduces dependence on cloud infrastructure, making AI deployments significantly more energy efficient. This article explores why Indian enterprises must embrace edge AI to curb energy usage, and how advancements in chip design are driving more sustainable, local AI processing.

Centralized Data Centers: A Growing Energy Challenge

By 2026, data center electricity consumption is expected to exceed 1,000 terawatt-hours, roughly equivalent to Japan's entire electricity demand. The explosion of data centers is straining global power grids. Beyond computation, these facilities require constant cooling, often powered by fossil fuels, contributing to rising carbon emissions and climate risk. Even with increased investment in renewables, the pace may not be enough to keep up with AI's surging energy needs.

How edge computing reduces energy use

Edge computing decentralizes workloads by processing data at the edge of the network or directly on devices. This reduces the burden on cloud infrastructure and lowers overall energy consumption. Instead of continuously streaming data to remote servers, edge devices process data locally and send only essential insights. For example, an edge-enabled surveillance system can analyze footage in real time and transmit only alerts or key clips, saving substantial energy otherwise spent on transmission and storage (a minimal sketch of this pattern follows below). Additionally, local processing reduces idle time caused by round trips to the cloud, further boosting energy efficiency.
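To make the local-filtering idea concrete, here is a minimal Python sketch of the pattern. It is illustrative only: run_local_model and send_alert are hypothetical stand-ins for a real on-device detector and uplink, not an actual camera API.

```python
# A minimal sketch of edge-side filtering (assumed pattern, not a real
# camera API): analyze each frame locally and transmit only alerts,
# instead of streaming all footage to the cloud.
import random

def run_local_model(frame):
    """Hypothetical on-device detector; returns a confidence score."""
    return random.random()  # stand-in for real inference

def send_alert(frame_id, score):
    """Hypothetical uplink; called only for the rare frames that matter."""
    print(f"alert: frame {frame_id} scored {score:.2f}; uploading clip")

ALERT_THRESHOLD = 0.95  # tune to trade bandwidth against sensitivity

for frame_id in range(1000):        # stand-in for a live camera feed
    frame = object()                # placeholder for raw frame data
    score = run_local_model(frame)  # inference happens on the device
    if score > ALERT_THRESHOLD:
        send_alert(frame_id, score)  # only essential insights leave the device
```

The design choice is the point: the device does the heavy inference and the network carries only the occasional alert, which is where the transmission-energy savings come from.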
Energy-efficient chipsets powering the edge

A new wave of energy-efficient AI chipsets and microcontrollers is enabling powerful edge applications, from wearables and autonomous systems to smart homes and industrial automation. These chips are purpose-built for high-efficiency AI processing, integrating features like neural accelerators and micro-NPUs in compact, low-power formats. Optimized for tasks such as vision recognition, audio sensing, and real-time decision-making, these chipsets bring intelligence directly to the device. Techniques like adaptive power scaling, heterogeneous computing, and low-precision AI operations allow them to balance performance and energy efficiency, resulting in faster processing, lower memory usage, and longer battery life. With built-in security features and compatibility across ecosystems, these chipsets are simplifying deployment of scalable, secure AI at the edge.

Techniques for building edge AI models

Edge AI models are designed to work within the limited power and resource constraints of edge devices, enabling real-time and accurate data processing. Key techniques include:

1. Model Compression and Simplification. Using quantization (reducing calculation precision) and pruning (removing unnecessary neural connections), developers can significantly shrink models. These lightweight versions consume less memory and power without sacrificing accuracy (a minimal sketch follows at the end of this article).

2. Streamlined Architectures. Models like MobileNet for image recognition and TinyBERT for language tasks are built specifically for constrained devices, balancing low power consumption with performance.

3. Leveraging Pre-Trained Models. Platforms offering pre-trained models that can be fine-tuned for specific use cases enable businesses to integrate AI more efficiently. Embedding these models directly into chipsets allows for faster deployment of AI solutions with lower energy consumption, even without deep AI expertise. This minimizes the need for extensive customization and shortens go-to-market timelines. For silicon vendors, offering chips with an ecosystem of ready-to-deploy models adds significant value: a chip preloaded with AI capabilities lets customers bypass development hurdles and start immediately.

Overcoming challenges in edge AI

Despite its advantages, edge AI must overcome a few key hurdles to scale effectively:

1. Hardware Constraints. Edge devices lack the compute, memory, and storage of cloud servers. Addressing this demands continuous innovation in low-power, high-performance chip design.

2. Managing Complex Edge Ecosystems. The decentralized nature of edge computing means managing a vast network of devices. As IoT adoption grows, robust frameworks and tools are essential for coordination and scalability.

3. Ensuring Security. With sensitive data processed locally, security becomes non-negotiable. Techniques like secure boot, data encryption, and regular firmware updates are essential to maintaining trust and safeguarding information.

Conclusion

As AI becomes increasingly embedded in everyday life, sustainability must be at the forefront. Edge computing offers a powerful solution by moving intelligence closer to the data source, cutting energy use and easing pressure on central infrastructure. For India Inc., edge AI is more than just a trend; it is a strategic imperative. To align with the Net Zero Scenario, emissions must fall by 50% by 2030. Edge AI is paving the way for smarter, greener, and more responsive solutions, and the time to act is now.
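As a concrete illustration of the model-compression techniques described above, here is a minimal sketch using PyTorch's built-in pruning and dynamic-quantization utilities. The toy SmallNet model is an assumption for demonstration only, not a production edge network; a real deployment would compress a trained model and validate accuracy afterwards.

```python
# Minimal sketch: shrinking a model for edge deployment with PyTorch.
# SmallNet is an illustrative toy model, not a real edge architecture.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SmallNet(nn.Module):
    """A toy classifier standing in for a trained edge model."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(128, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = SmallNet()

# Pruning: zero out the 30% of weights with the smallest L1 magnitude,
# removing connections that contribute least to the output.
for layer in (model.fc1, model.fc2):
    prune.l1_unstructured(layer, name="weight", amount=0.3)
    prune.remove(layer, "weight")  # make the sparsity permanent

# Dynamic quantization: store Linear weights as 8-bit integers instead
# of 32-bit floats, cutting weight memory roughly 4x.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The compressed model runs the same forward pass on-device.
sample = torch.randn(1, 128)
print(quantized(sample).shape)  # torch.Size([1, 10])
```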

OpenAI wins $200 million US defense contract
Time of India, an hour ago

ChatGPT maker OpenAI was awarded a $200 million contract to provide the U.S. Defense Department with artificial intelligence tools, the Pentagon said in a statement on Monday. "Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains," the Pentagon said. The work will be primarily performed in and near Washington with an estimated completion date of July 2026, the Pentagon said.

OpenAI said last week that its annualised revenue run rate surged to $10 billion as of June, positioning the company to hit its full-year target amid booming AI adoption. OpenAI said in March it would raise up to $40 billion in a new funding round led by SoftBank Group at a $300 billion valuation. OpenAI had 500 million weekly active users as of the end of March.

The White House's Office of Management and Budget released new guidance in April directing federal agencies to ensure that the government and "the public benefit from a competitive American AI marketplace." The guidance had exempted national security and defense systems.

Artificial intelligence may cause mass unemployment, says Geoffrey Hinton; 'Godfather of AI' reveals 'safe' jobs
Mint, an hour ago

The 'Godfather of AI', Geoffrey Hinton, recently stated that some professions are safer than others when it comes to being replaced by AI. In an interview on the podcast "Diary of a CEO", which aired on Monday, Hinton said AI has the potential to cause mass joblessness, especially in white-collar jobs.

Hinton reiterated his point on AI superiority. "I think for mundane intellectual labour, AI is just going to replace everybody," he said, using "mundane intellectual labour" to refer to white-collar jobs. He also specified that AI would take the form of a person and do the work that 10 people did previously. Hinton said that he would be "terrified" to work in a call centre right now due to the potential for automation.

However, he pointed out that blue-collar work would take longer to be replaced by AI. "I'd say it's going to be a long time before AI is as good at physical manipulation," Hinton said in the podcast. "So, a good bet would be to be a plumber."

In the podcast, Hinton also challenged the notion that AI would create new jobs, arguing that if AI automated intellectual tasks, there would be few jobs left for people to do. "A person has to be very skilled to have a job that AI just couldn't do," Hinton said.

Geoffrey Hinton, 78, is known as the 'Godfather of AI' for his work on neural networks, which he started in the late 1970s. He won the 2024 Nobel Prize in Physics for his work on machine learning (ML) and currently teaches computer science at the University of Toronto.

The interview comes just after OpenAI announced its restructuring plans, in which the company's for-profit arm will become a public benefit corporation (PBC) in an attempt to appease the company's investors. OpenAI said the plan will allow it to raise more capital to keep pace in the expensive AI race, Reuters reported. However, a group of critics, including Geoffrey Hinton and former OpenAI employees, raised concerns that while the plan "might be a step in the right direction", it does not adequately ensure that OpenAI sticks to its original mission to develop artificial intelligence for the benefit of humanity. They objected to OpenAI's proposed reorganisation because, they said, it would have put investors' profit motives ahead of the public good. OpenAI co-founder Elon Musk, who is now a competitor through his company xAI, objected to the proposal on the same grounds and is suing OpenAI for breaching the company's founding contract, Reuters reported.
