
OpenAI is giving ChatGPT to the government for $1
The company has been working to deepen its ties to lawmakers and regulators in recent months, and it will open its first office in Washington, D.C., early next year.
OpenAI said participating agencies will get access to its frontier models through ChatGPT Enterprise, and it will also offer access to features like Advanced Voice Mode for an additional 60-day period.
The company has partnered with the U.S. General Services Administration to launch the initiative.
"Helping government work better – making services faster, easier, and more reliable—is a key way to bring the benefits of AI to everyone," OpenAI said in a blog post.
In June, OpenAI launched a new offering called OpenAI for Government and said it was awarded a contract of up to $200 million by the U.S. Department of Defense.
The company is currently engaging in talks with investors about a potential stock sale at a valuation of roughly $500 billion, as CNBC previously reported.
OpenAI announced a $40 billion funding round in March at a $300 billion valuation, by far the largest amount ever raised by a private tech company.

Related Articles


Fast Company
Mark Cuban and Sam Altman just warned about disappearing jobs and the need to learn AI
OpenAI CEO Sam Altman isn't shy about discussing the future of AI. As the CEO of a market-leading company, his predictions carry plenty of weight, such as his worry that AI could make things go 'horribly wrong,' or that AI agents will completely transform the workplace. Billionaire Mark Cuban isn't shy either, and he also sees vast changes coming to an AI-dominated workplace.

Altman's recent remarks to finance executives at a Federal Reserve conference on large banks and capital requirements included his belief that entire job categories will be eaten up by AI. He said customer service is all but completely ready for an AI takeover right now, as reported by the Guardian. 'That's a category where I just say, you know what, when you call customer support, you're on target and AI, and that's fine,' he said. When a user calls a hotline now, AI answers, and it's like 'a super-smart, capable person,' Altman explained, adding that 'there's no phone tree, there's no transfers. It can do everything that any customer support agent at that company could do. It does not make mistakes. It's very quick. You call once, the thing just happens, it's done.'

You may have already encountered an AI customer service system, or at the very least spoken briefly to one before being forwarded to a person with the info you're seeking. If Altman's promise of no mistakes proves true, that's a huge selling point for customer service departments, and for consumer satisfaction. (We all know how frustrating calling these lines can be.) What an AI can offer under these circumstances is also clearly defined: customers probably call with a discrete set of common issues, and the AI can be trained on what to do.

But the next industry Altman said was ripe for an AI takeover is more complex, requiring deep knowledge and empathy, and the stakes are much higher. According to Altman, AI is already better than human doctors. It can, 'most of the time,' surpass human physician skills, he argued, suggesting it's 'a better diagnostician than most doctors in the world.' But then he pointed out a very human truth: 'people still go to doctors,' he said, adding that he felt the same way: 'maybe I'm a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop.'

That at least aligns with warnings from medical experts, who say that while AI may be useful under some circumstances, like helping to write medical notes, it is too prone to misinformation errors to be trusted with mental health advice or diagnoses. In fact, a group of therapists recently warned of the danger of doing so.

Altman also told the bankers that he's worried near-future AIs could be used by bad actors, perhaps based overseas, to attack the U.S. financial system, citing AI voice clones as a direct risk. He's not predicting that AI will steal banking jobs here; he's warning that the entire industry could be upended by AI used the wrong way.

You may think Altman is being unnecessarily doomy here. In that case, you may be more aligned with the thinking of billionaire entrepreneur Mark Cuban, who recently suggested that AI will become a 'baseline' workplace skill within five years.
Essentially, he thinks that 'like email or Excel,' everyone from fresh graduates to practiced entrepreneurs will have to master AI to succeed at their jobs. In an interview with Fortune, Cuban predicted that, thanks to the force-multiplying effects of AI, 'we'll see more people working for themselves,' aided by the rise of AI assistants, possibly powered by agentic AI tech, which can turn 'solo founders into full teams.' And worse, if you're not already using AI to 'move faster or make smarter decisions, you're behind,' he said.

While framed more positively than Altman's statements, a closer look suggests Cuban is still predicting that whole classes of jobs will disappear within five years. Why would a startup CEO need a personal assistant, a coding expert, or a marketing adviser if all those tasks could be done by next-gen AI?

All of this, while interesting, could be dismissed as mere PR for the AI industry, but the advice is worth taking seriously. Altman's warnings could have you looking at which tasks you already feel comfortable outsourcing to an AI tool instead of a human worker. And, taking Cuban's advice, you should consider taking the time to properly educate yourself about the promises and risks of AI technology, and plan on upskilling or reskilling your existing staff. The potential efficiencies AI promises mean they could ...

By Kit Eaton. This article originally appeared on Fast Company's sister publication, Inc.


CNET
ChatGPT's Boss Says You Still Shouldn't Trust It as Your Main Source of Information
When you start a conversation with ChatGPT, you might notice some text at the bottom of the screen: "ChatGPT can make mistakes. Check important info." That's still the case with the new GPT-5 model, a senior OpenAI executive reiterated this week.

"The thing, though, with reliability is that there's a strong discontinuity between very reliable and 100 percent reliable, in terms of the way that you conceive of the product," Nick Turley, head of ChatGPT at OpenAI, said on The Verge's Decoder podcast. "Until I think we are provably more reliable than a human expert on all domains, not just some domains, I think we're going to continue to advise you to double-check your answer."

That's something we've been advising about chatbots for a long while now across our AI coverage, and OpenAI does too. Always double-check. Always verify. While the chatbot might be useful for some tasks, it might also totally make stuff up.

Turley is hopeful for improvement on that front. "I think people are going to continue to leverage ChatGPT as a second opinion, versus necessarily their primary source of fact," he said.

The problem is that it's really tempting to take a chatbot response -- or an AI Overview in Google search results -- at face value. But generative AI tools (not just ChatGPT) have a tendency to "hallucinate," or make stuff up. They do this because they're built primarily to predict what the answer to a query will be, based on the information found in their training data, and they have no concrete understanding of truth. If you talk to a doctor or a therapist or a financial adviser, that person should be able to give you the correct answer for your situation, not just the most likely one. AI gives you, for the most part, the answer it determines is most probably correct -- without specific domain expertise to check against. While AI is pretty good at guessing, it's still, for the most part, just guessing. (The toy sketch at the end of this article makes that distinction concrete.)

Turley acknowledged that the tool does best when paired with something that provides a better grasp of the facts, like a traditional search engine or a company's specific internal data. "I still believe that, no question, the right product is LLMs connected to ground truth, and that's why we brought search to ChatGPT and I think that makes a huge difference," he said.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Don't expect ChatGPT to get everything right just yet

Turley said GPT-5, the newest large language model that undergirds ChatGPT, is a "huge improvement" in terms of hallucination, but it's still far from perfect. "I'm confident we'll eventually solve hallucinations and I'm confident we're not going to do it in the next quarter," he said.

In my testing of GPT-5, I've already seen it commit a few errors. When I tested the language model's new personalities, it got confused about a college football schedule and said games scheduled throughout the fall would all happen in September.

Be sure to check whatever information you get from a chatbot against a reliable source of truth. That could be an expert like a doctor or a reliable source on the internet. And even if a chatbot gives you information with a link to a source, don't trust that the bot summarized that source accurately. It may have mangled the facts on its way to you.
If you're going to make a decision based on that information, then unless you truly don't care about getting it right, double-check what the AI tells you.
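To make the "most likely versus true" distinction above concrete, here is a minimal Python sketch of the next-token idea. It is a toy under stated assumptions, not a description of how ChatGPT actually works: real models score continuations with a neural network over a huge vocabulary, and the lookup table and counts below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical "training data" statistics for one context, invented for
# this illustration. Imagine the model saw the wrong continuation more
# often than the right one while training.
next_token_counts = {
    ("the", "capital", "of", "australia", "is"): Counter(
        {"sydney": 7, "canberra": 3}
    ),
}

def predict_next(context: tuple[str, ...]) -> str:
    """Return the most probable next token for a given context.

    Note there is no notion of truth anywhere in here: the function
    simply picks whichever continuation was most frequent in the
    (toy) training data.
    """
    counts = next_token_counts[context]
    return counts.most_common(1)[0][0]

# Confidently emits the most likely answer, which happens to be wrong.
print(predict_next(("the", "capital", "of", "australia", "is")))  # sydney
```

Nothing in this loop consults reality; it just replays the statistics of its training data. That is exactly why Turley's point about "LLMs connected to ground truth" matters: pairing the model with search or internal data gives it something factual to check against.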

Business Insider
9 African countries with restricted ChatGPT access and AI adoption
ChatGPT, the AI chatbot developed by OpenAI, faces bans and restrictions in several African countries, sparking debates about access, regulation, and the continent's readiness for advanced AI.

According to a July 2025 report by Cybernews, which analyzed OpenAI's service data, more than 20 countries worldwide restrict ChatGPT access, nine of them in Africa. While countries like China, Russia, and Iran have imposed political bans on ChatGPT and are investing in their own AI alternatives, African nations are grappling with a more complex set of issues, including limited internet infrastructure, sanctions, political instability, and regulatory uncertainty, all of which have led to restrictions on the platform.

African countries where ChatGPT faces restrictions (all blocked by their governments):

1. Eritrea: limited internet infrastructure and a state-controlled information environment
2. Libya: political instability and lack of a regulatory framework
3. Eswatini: small market size, regulatory uncertainty, and limited OpenAI business presence
4. Burundi: underdeveloped infrastructure and limited digital policy
5. South Sudan: weak digital infrastructure and ongoing conflict
6. Sudan: authoritarian restrictions and conflict-related disruptions
7. Central African Republic: limited technological infrastructure and fragile governance
8. Chad: restricted internet access and political interference
9. Democratic Republic of Congo: sanctions, weak infrastructure, and lack of digital regulation

Notably, the African countries restricting access to ChatGPT are predominantly those embroiled in conflict or under authoritarian rule, whereas citizens in nations like Nigeria, Kenya, South Africa, and Ghana have full access and are actively exploring AI regulation and innovation. This contrast underscores how systemic stability determines access to transformative technologies. In countries with robust institutions, AI is flourishing across sectors including education, small business, professional services, and logistics. Conversely, countries with restrictions risk being excluded from a global AI-driven economy projected to add trillions of dollars in value by 2030.

Experts warn that if Africa does not embrace AI, it risks falling behind in what is often described as the next industrial revolution. As President Paul Kagame noted at the Global AI Summit in Kigali: "Africa can't afford to be left behind, once again playing catch-up. We have to adopt, cooperate, and compete because it is in our best interest to do so."

The emergence of powerful alternatives such as Anthropic's Claude, Google's Gemini, Elon Musk's Grok, and Meta's AI models adds another layer of urgency. With global competition accelerating, Africa faces the danger of becoming only a consumer of imported technologies rather than a creator of its own.
In July 2024, the African Union formally endorsed a continent-wide AI strategy calling for an 'Africa-owned, people-centered, development-oriented, and inclusive approach to accelerate African countries' AI capabilities … while also ensuring adequate safeguards and protection from threats.' The policy highlights the continent's intent to avoid overdependence on foreign systems by prioritizing African-led innovation and regulation.

Meanwhile, concerns about AI's potential risks are shaping international debates. Geoffrey Hinton, often referred to as the 'godfather of AI,' has warned that AI systems could manipulate humans and even pose existential threats, estimating a 10% to 20% chance that AI could one day wipe out humanity. Others, such as Fei-Fei Li, the 'godmother of AI,' argue instead for a human-centered approach that preserves dignity and agency.