
OpenAI eyes $500 bn valuation in potential employee share sale
A $500 billion valuation would represent an eye-popping jump from OpenAI's current $300 billion, and the sale would underscore both the company's rapid gains in users and revenue and the intense competition among artificial intelligence firms to secure talented workers.
The transaction, which would come before a potential IPO, would allow current and former employees to sell several billion dollars' worth of shares, according to a source who requested anonymity because the talks are private.
Bolstered by its flagship product ChatGPT, OpenAI doubled its revenue in the first seven months of the year, reaching an annualised run rate of $12 billion, and is on track to reach $20 billion by year-end, the source added.
Microsoft-backed OpenAI has about 700 million weekly active users for its ChatGPT products, a surge from about 400 million in February.
The share sale talks come on the heels of OpenAI's primary funding round announced earlier this year, which aims to raise $40 billion, led by Japan's SoftBank Group.
SoftBank has until the end of the year to fund its $22.5 billion portion of the round, but the remainder has been subscribed at a valuation of $300 billion, the source said.

Related Articles


Time of India
41 minutes ago
ChatGPT will no longer answer these questions affecting millions, and the reason will surprise you
In a quiet but significant update, OpenAI has changed how ChatGPT responds to users asking emotionally sensitive or life-altering questions. As of August 2025, ChatGPT will no longer give direct answers to queries involving mental health, emotional distress, or deeply personal decisions, affecting millions of users worldwide. Instead of offering advice like a digital therapist, the AI now responds with gentle prompts encouraging self-reflection and exploration. The move stems from a growing concern: people had started using the AI not just for information, but as a source of emotional guidance, a role OpenAI believes should be reserved for humans.

Why OpenAI made this change in ChatGPT

OpenAI noticed that users were increasingly turning to ChatGPT with questions like 'Should I leave my partner?' or 'Am I making the right life decision?' These are deeply personal, emotionally complex issues. While ChatGPT could generate thoughtful responses, OpenAI recognized that giving advice in such moments risks emotional overdependence and misplaced trust in a machine. Rather than blur the lines between AI and human empathy, OpenAI decided to pull back, choosing ethical responsibility over engagement metrics.

Instead of giving a yes or no, ChatGPT now offers non-directive responses. These include open-ended questions, suggestions to consider different perspectives, and encouragement to consult trusted people or professionals. The goal is to help users think more clearly, not to decide for them. For example, someone asking about a major life decision might now see a response that encourages weighing pros and cons or considering long-term impacts.

ChatGPT introduces break reminders for healthier use

To reduce prolonged emotional reliance, ChatGPT now includes gentle nudges to take breaks. If a user is chatting continuously, the interface may show a calming pop-up that reads: 'Just checking in. You've been chatting a while. Is this a good time for a break?' These subtle UI changes aim to promote healthier digital habits, reduce dependency, and encourage stepping away when needed.

OpenAI didn't make these changes alone. Over 90 doctors, psychiatrists, youth development experts, and HCI (human-computer interaction) researchers from more than 30 countries contributed to shaping this update. These professionals helped define red flags for emotional distress and crafted evaluation rubrics to guide ChatGPT's behavior in sensitive conversations. Their input ensures that ChatGPT recognizes when it's out of its depth and responds responsibly.

Earlier in 2025, there were rare but important incidents where GPT-4o failed to detect emotional red flags in conversations. Though uncommon, these moments were enough to prompt OpenAI to rethink how the model should behave when users are vulnerable. The result is a firm boundary around emotional support that centers safety and ethical AI design.

Not a therapist, but a thinking partner

This change represents a broader shift in how OpenAI sees AI's role. ChatGPT isn't here to replace therapists, make decisions, or simulate emotional intimacy. Instead, it is a thinking partner: a tool to help users navigate uncertainty, not one to resolve it for them. By prioritizing trust over time-on-platform, OpenAI signals that responsible AI use means knowing when not to answer.
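OpenAI has not published how these behaviors are implemented, but the two mechanisms the article describes, routing sensitive queries to non-directive responses and nudging long sessions toward a break, can be illustrated with a toy sketch. Everything below is hypothetical: the keyword check stands in for the trained classifiers and expert-written rubrics the article mentions, and the names and thresholds are invented.

```python
# Illustrative sketch only: OpenAI has not disclosed ChatGPT's actual
# implementation. All names, keywords, and thresholds here are hypothetical.
import time

# Toy stand-in for a real classifier; a production system would use a trained
# model and expert-crafted rubrics, not a keyword list.
SENSITIVE_MARKERS = ("should i leave", "am i making the right", "life decision")

BREAK_REMINDER_AFTER_SECONDS = 45 * 60  # assumed session threshold

NON_DIRECTIVE_TEMPLATE = (
    "This sounds like a big decision. Rather than an answer, here are some "
    "questions that might help: what outcomes matter most to you, and who "
    "in your life could you talk this through with?"
)

def is_sensitive(message: str) -> bool:
    """Crude keyword check standing in for a real distress/sensitivity model."""
    text = message.lower()
    return any(marker in text for marker in SENSITIVE_MARKERS)

def respond(message: str, session_start: float, answer_fn) -> str:
    """Route sensitive queries to non-directive prompts; nudge long sessions."""
    if is_sensitive(message):
        return NON_DIRECTIVE_TEMPLATE
    reply = answer_fn(message)
    if time.time() - session_start > BREAK_REMINDER_AFTER_SECONDS:
        reply += ("\n\nJust checking in. You've been chatting a while. "
                  "Is this a good time for a break?")
    return reply

# Demo: a sensitive query is answered with the non-directive template.
print(respond("Should I leave my partner?", time.time(), lambda m: m))
```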


Time of India
an hour ago
Samsung sends third 'update email' to US employees on 'work from office' policy, says: This will ensure that ...
Samsung is the latest Big Tech company to rework its 'work from home' policy. Recently, an online report revealed that tech giant Microsoft will soon make it mandatory for employees to come to the office three times a week. Now, it appears that Samsung is reinforcing a five-day-a-week return-to-office policy. According to a report by Business Insider, the South Korean electronics giant has sent its third email to US-based employees reinforcing the policy, specifically targeting its semiconductor division. The latest message also announces a new compliance tracking tool created to monitor employee attendance and curb 'coffee badging'. For those unaware, coffee badging describes workers who swipe in briefly before leaving the office. 'This will ensure that team members are fulfilling their expectations regarding in-office work,' reads the email sent by the company to its employees.

The new policy, a shift from the company's previous "FlexWork" model that allowed up to two days of remote work, was fully implemented for a specific group of employees in early July. The internal email, obtained by Business Insider, reflects the company's push for "a greater presence" on-site to foster a "consistently full campus environment, where innovation and collaboration can happen in real time."

In the August email, Samsung's HR team shared that the initial weeks of the policy had resulted in "more smiling faces in the hallways" and increased foot traffic. However, the email also noted that "there is still room to improve adherence to our policy."

Read the complete email Samsung sent to its employees

Read the full email Samsung sent to some workers in August below.

FlexWork Update

July brought the full implementation of our revised FlexWork policy. We recognize that many employees shifted personal schedules to accommodate the daily requirement to be together in the office. The first few weeks have indeed shown increased traffic on campus, with parking stations staying full, cafeteria food consumption on the rise, and more smiling faces in the hallways. That said, there is still room to improve adherence to our policy. While our absolute attendance number will never be 100% due to a variety of factors including PTO and business travel, we do expect greater presence across all employee populations.

As a means to measure the effectiveness of our attendance policy, we will launch a new compliance tool for People Managers in the next few weeks. This tool will provide each Manager with visibility into the number of days and time-in-building metrics for each team member. This will ensure that team members are fulfilling their expectations regarding in-office work, however that is defined with their business leader, as well as guarding against instances of lunch/coffee badging.

This tool is not perfect. Consider this a V1. For example, we will not be able to integrate a PTO designation. Unfortunately, PTO data originates from a different platform system that DSA does not have domain control to customize. We are investigating opportunities to potentially integrate PTO with reporting in a future revision later this year. This tool should serve as a starting point for conversations between Managers and Employees regarding physical presence against expectations.

Unfortunately, other factors, such as business travel, customer meetings, and approved periodic exceptions, are not expected to integrate into future tool upgrades. We continue to encourage employees to engage in dialogue with their Managers and Leaders regarding adherence expectations, as well as when personal circumstances might require a periodic exception. Our FlexWork policy can be found here [link removed] on SharePoint. Additional information regarding the new tool will be made available to Managers this month.
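Samsung has not said how the compliance tool works under the hood. Purely to illustrate the kind of aggregation the email implies (days on site, time in building, and flagged short visits), here is a minimal sketch over hypothetical badge-swipe records; the record format, field names, and the 60-minute cutoff are assumptions, not details from the report.

```python
# Hypothetical illustration: Samsung has not disclosed its tool's internals.
# Record shape, names, and thresholds below are invented for this sketch.
from collections import defaultdict
from datetime import datetime

# Each record: (employee_id, swipe-in ISO timestamp, swipe-out ISO timestamp)
swipes = [
    ("emp_001", "2025-08-04T09:02:00", "2025-08-04T17:31:00"),
    ("emp_001", "2025-08-05T08:55:00", "2025-08-05T09:20:00"),  # short visit
    ("emp_002", "2025-08-04T10:10:00", "2025-08-04T16:45:00"),
]

MIN_MINUTES_FOR_FULL_DAY = 60  # assumed cutoff separating a day from badging

def summarize(records):
    """Per employee: days on site, total hours, and likely coffee-badging days.
    Like the 'V1' described in the email, this ignores PTO and travel."""
    report = defaultdict(lambda: {"days": 0, "hours": 0.0, "badging_days": 0})
    for emp, t_in, t_out in records:
        minutes = (datetime.fromisoformat(t_out)
                   - datetime.fromisoformat(t_in)).total_seconds() / 60
        entry = report[emp]
        entry["days"] += 1
        entry["hours"] += minutes / 60
        if minutes < MIN_MINUTES_FOR_FULL_DAY:
            entry["badging_days"] += 1
    return dict(report)

print(summarize(swipes))
```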


Indian Express
an hour ago
Meet Jules, Google's AI coding agent that could change software development workflows
As AI shifts from a niche concept to a common tool within the industry, it is also being used in software development. Software engineers and developers are increasingly using AI to write code. In fact, OpenAI's ChatGPT has become a widely adopted AI-powered tool among developers, with 82 per cent reporting regular use, according to data from Statista. GitHub Copilot is the second most popular option, used by 44 per cent of developers, while Google Gemini holds the third spot at 22 per cent.

But as AI becomes embedded in development workflows, it is giving rise to AI agents, tools that can perform tasks autonomously. From writing entire applications and debugging errors to generating documentation and automating CI/CD workflows, these intelligent assistants now support nearly every part of the software development lifecycle, and developers are adopting them fast.

This is where Google's Jules comes in: an AI agent, integrated with GitHub, that reads your code, understands your intent, and gets to work. Google announced Jules as a Google Labs project in December and introduced it to beta testers through a public preview at its I/O developer conference. Now, Jules is out of beta. Here's everything you need to know about Jules.

Jules is essentially an AI coding agent, a digital helper that performs development tasks on your behalf, and it is being positioned as the next big thing in artificial intelligence. It uses Gemini 2.5 Pro and allows developers to delegate tasks asynchronously via GitHub-integrated Google Cloud virtual machines. Think of it as an autonomous AI developer assistant: it clones your GitHub repo into a secure Google Cloud VM, understands the full project structure, fixes bugs, writes tests, adds features, and operates asynchronously. You assign tasks, and it gets to work while you focus on other things.

Unlike other coding assistants, Jules doesn't just suggest code: it plans, edits, presents diffs with full reasoning, and offers audio changelogs. Not only does Jules work directly inside your repo, offering deep understanding of multi-file systems and real project logic, but it also lets you assign tasks and walk away. Jules handles them in the cloud and sends back a reviewable pull request.

Powered by Google's Gemini 2.5 Pro, the agent demonstrates high-level reasoning, ideal for complex feature implementation and refactoring. But Jules goes further: it can summarise changes in spoken form, allowing teams to quickly catch up during standups or asynchronous reviews. Its native support for GitHub PRs and branching makes adoption seamless for any development team.

For developers working on frontend UIs, backend APIs, AI models, and end-to-end systems, the process can be time-consuming. This is where Jules comes in handy: automating sections of code that are repeated across a program, writing unit tests, and suggesting modern dependency updates. While tools like GitHub Copilot act as coding assistants, and some agents can even handle full workflows such as building apps or writing test cases without manual input, Jules goes a step further, becoming a true co-developer: not a human, but an intelligent AI agent.

Google updated Jules' privacy policy to clarify that data from public repositories may be used for AI training, while data from private repositories is not. During beta testing, Google said thousands of developers used Jules, resulting in over 140,000 public code improvements. Jules is aimed at AI enthusiasts and professional developers alike.

Now out of beta, Jules offers a free plan with 15 daily tasks. Since the public beta, it has logged 2.3 million visits worldwide, with India, the US, and Vietnam as the top traffic sources.
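Jules is operated through its web interface and GitHub rather than a public API, so the snippet below is only a sketch of the asynchronous, fire-and-forget workflow the article describes: delegate tasks, keep working, and collect reviewable pull requests later. AgentTask, PullRequest, and run_agent are invented stand-ins for illustration, not real Jules interfaces.

```python
# Sketch of the async delegation pattern described above. Jules itself is
# driven via its web UI and GitHub; everything here is a hypothetical stand-in.
import asyncio
from dataclasses import dataclass

@dataclass
class AgentTask:
    repo: str          # GitHub repo the agent would clone into its cloud VM
    instruction: str   # natural-language task, e.g. "add tests for parser.py"

@dataclass
class PullRequest:
    repo: str
    title: str
    diff_summary: str

async def run_agent(task: AgentTask) -> PullRequest:
    """Pretend agent run: plan, edit, then open a PR (delay simulates work)."""
    await asyncio.sleep(2)  # stands in for minutes of autonomous work
    return PullRequest(task.repo, f"Agent: {task.instruction}",
                       "3 files changed, tests added")

async def main():
    # Fire off several tasks, then do other work while the "agent" runs.
    tasks = [
        asyncio.create_task(run_agent(AgentTask("org/app", "fix flaky login test"))),
        asyncio.create_task(run_agent(AgentTask("org/app", "bump outdated deps"))),
    ]
    print("tasks delegated; developer keeps working...")
    for pr in await asyncio.gather(*tasks):
        print(f"review ready: {pr.title} ({pr.diff_summary})")

asyncio.run(main())
```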