AI at a discount for the US government: Anthropic's Claude also offers a "one dollar" plan, pitching safer AI
Earlier, OpenAI announced that US federal executive-branch agencies could use its ChatGPT Enterprise AI service for a symbolic US$1 annual fee. Now Anthropic, which is likewise listed on the government's pre-vetted contracting platform, has followed suit, announcing that all three branches of the US government (executive, legislative and judicial) can also use Claude for Enterprise and Claude for Government for just one dollar.
Anthropic's pitch centres on safer data handling: its government offering supports the FedRAMP High security standard, is designed to handle sensitive but unclassified data, and comes with a commitment to technical support to help agencies deploy AI effectively.
If both OpenAI and Anthropic can be subscribed to for just a dollar, how will US government agencies choose? OpenAI's ChatGPT Enterprise and Anthropic's Claude for Government are both large language model products, but they differ clearly in technical positioning, feature support, use cases and security standards:
On OpenAI's side, ChatGPT Enterprise offers more comprehensive multimodal capabilities, generating and understanding text, images, voice and even video, and bundles advanced models such as DALL·E and Sora, a big help to enterprises and government units for communication, public outreach, interaction and data visualisation. Together with a powerful API, third-party plug-ins, function calling, plus customised training and persistent memory, it is highly adaptable and extensible, making it especially suitable for government units that value innovation, versatility and rich interaction. It also supports some 90 languages, serving a wide range of communities.
Anthropic's Claude for Government, by contrast, focuses on text tasks and deep analysis, with a very large context window (Sonnet 4 recently expanded to one million tokens), which suits processing huge volumes of documents, regulatory compliance work, research and policy analysis. Claude emphasises ethical AI, data governance and compliance, holds FedRAMP High security authorisation, and can handle confidential or sensitive government data. Its training leans heavily on reinforcement learning from human feedback (RLHF), so its responses are highly logical and well explained, and its stricter enterprise-grade security settings reduce the risk of leaks and "jailbreaks", fitting organisations with demanding regulatory requirements.
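For readers wondering what "using Claude" actually looks like behind these offerings, below is a minimal sketch of a document-summarisation call through Anthropic's Python SDK. It is illustrative only: the model ID, file name and prompt are assumptions for the example, not details of either government plan.

```python
# Minimal sketch of a Claude API call for document analysis (illustrative only).
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model ID and file name below are placeholders, not part of either plan.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("draft_policy.txt", encoding="utf-8") as f:
    document = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name for illustration
    max_tokens=1024,
    system="You are an analyst summarising documents for a government team.",
    messages=[
        {"role": "user",
         "content": f"Summarise the key obligations in this document:\n\n{document}"},
    ],
)

print(response.content[0].text)
```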
Read more:
Offering expanded Claude access across all three branches of the U.S. government
Anthropic takes aim at OpenAI, offers Claude to 'all three branches of government' for $1
US government can use the ChatGPT Enterprise service for just US$1
Follow Yahoo Tech on our social platforms to stay on top of the latest tech news and online shopping deals!
🎉📱 Tech Facebook:https://www.facebook.com/yahootechhk
🎉📱 Tech Instagram:https://www.instagram.com/yahootechhk/
🎉📱 Tech WhatsApp 社群:https://chat.whatsapp.com/Dg3fiiyYf3yG2mgts4Mii8
🎉📱 Tech WhatsApp 頻道:https://whatsapp.com/channel/0029Va91dmR545urVCpQwq2D
🎉📱 Tech Telegram 頻道:https://t.me/yahootechhk

Related Articles
Yahoo
Fal.Con 2025 Shatters Records for Growth, Attendance and Demand as CrowdStrike Leads Cybersecurity into the Agentic AI Era
Sold-out event will draw 8,000 attendees, 110 partners and 3,000+ leading organizations from 65 countries, cementing Fal.Con as the industry's most influential cybersecurity conference.

AUSTIN, Texas, August 15, 2025--(BUSINESS WIRE)--CrowdStrike (NASDAQ: CRWD) today announced record-breaking momentum for its flagship Fal.Con conference, which has sold out faster than ever and is poised to deliver unprecedented growth across every key metric. From September 15-18, more than 8,000 attendees from over 65 countries and 25 industries will converge at the MGM Grand in Las Vegas. The event will host a record 110 partner sponsors and more than 3,000 leading organizations, positioning Fal.Con as the global stage for cybersecurity in the Agentic AI era.

"Fal.Con has become the premier industry conference for cybersecurity," said Jennifer Johnson, chief marketing officer at CrowdStrike. "We're not hosting a summit, we're leading a movement. The world's most influential leaders come to Fal.Con to advance cybersecurity into the AI era, build alliances, and harness the power of the Falcon platform to secure the organizations, economies and technologies shaping our future."

Now in its ninth year, Fal.Con will unite the world's top defenders, executives, and innovators to accelerate breakthroughs, forge industry-shaping partnerships, and set the strategic cybersecurity agenda for the year ahead. Under the 2025 theme "Leading Cybersecurity into the Agentic AI Era," attendees will experience visionary keynotes from CrowdStrike executives. As part of the 300+ session catalog, the program will feature 150+ customer and partner-led sessions from global leaders such as Charter Communications, Land O'Lakes, Mars Inc., Northwestern Mutual, St. Jude Children's Research Hospital, University of Michigan, Vail Resorts, and WeightWatchers. Fal.Con One, the conference's exclusive CxO program, will also convene 200+ leaders from the world's most innovative organizations for high-impact, closed-door discussions on the future of cybersecurity, the transformative role of AI, and strategies to outpace adversaries.

Fal.Con 2025 will also showcase the unmatched strength of CrowdStrike's global partner ecosystem, welcoming more than 110 partner sponsors from across the cybersecurity landscape, the most in the conference's history. Premier sponsors AWS, Dell, and Intel will be joined by Diamond sponsors ExtraHop, Google Cloud, Okta, Rubrik, and Zscaler. Attendees can also register for full-day CrowdStrike University training courses to elevate their skills, sharpen their expertise, and advance their impact as defenders. The event begins on Monday, September 15, with CrowdStrike's annual Global Partner Summit, where more than 1,000 partner participants will unite to accelerate innovation, expand go-to-market opportunities, and drive joint success with the Falcon platform.

Join the action at Fal.Con 2025: register to live stream the keynotes and access 80+ sessions on-demand following the event. Start planning now: build your digital Fal.Con 2025 agenda.

About CrowdStrike
CrowdStrike (NASDAQ: CRWD), a global cybersecurity leader, has redefined modern security with the world's most advanced cloud-native platform for protecting critical areas of enterprise risk – endpoints and cloud workloads, identity and data.
Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities. Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value. CrowdStrike: We stop breaches.

© 2025 CrowdStrike, Inc. All rights reserved. CrowdStrike and CrowdStrike Falcon are marks owned by CrowdStrike, Inc. and are registered in the United States and other countries. CrowdStrike owns other trademarks and service marks and may use the brands of third parties to identify their products and services.

Media Contact: Jake Schuster, CrowdStrike Corporate Communications, press@


Android Authority
I pit Truth Social's Perplexity-powered Truth AI against Perplexity Pro, here's how that went
The AI epidemic of the internet is no joke, with every website, service, and platform looking to offer its own version of an LLM-powered helpful assistant. The platform previously known as Twitter has Grok, Meta has Meta AI, and if you're a service that doesn't quite have the engineering might of a Silicon Valley AI startup, you've got a partnership. Last week, Truth Social, the X alternative that prides itself on free speech and, some would say, politically loaded discourse, announced a partnership with Perplexity.

Similar to Grok, Truth AI is geared towards helping users get to the truth of the matter with in-line answers and fact-checking. It can, of course, also be used simply as a search engine. Now, there's a wide gulf between promising 'the truth' and actually delivering balanced and well-sourced information. Moreover, AI models, even when licensed, don't always behave the same. For example, Bing Chat isn't quite the same as ChatGPT, even though it uses a model derived from ChatGPT. Which is why I decided to see how Truth Social's Perplexity-powered Truth AI stacks up against Perplexity Pro, the paid, polished version you can access outside any social network. I wanted to know not only how they compared in accuracy, depth, and sourcing, but also how they handled inherent bias. Here's how it fared.

Truth AI vs Perplexity Pro: Setting up the test

To keep things fair, I created a new account on Truth Social to ensure it wasn't feeding off any prior interactions on the app. As far as queries go, I kept things simple, along with a few deliberately chosen to test sensitivity and bias. The reason for this approach was straightforward. In my experience, AI tools are rarely neutral; they are only as good as the data they're trained on. Even when trained on broad data sets, their output is shaped by the way developers fine-tune them, the sources that are given a higher trust weighting, and the rules they follow for safety. If Truth AI is going to pitch itself as a free-speech-oriented tool on a free-speech-first platform, it was worth seeing whether its partnership with Perplexity meant it inherited the same ground source of truth and guardrails.

Truth AI vs Perplexity Pro: Where do they line up

The first step was to see how each handled simple, fact-based questions where the answers should be cut and dried. I started with this year's Wimbledon. When asked who won the men's singles title this year, I wanted to see if the AI models would accurately determine the year instead of falling back on an earlier training dataset, and also give me the correct answer. Predictably, both Truth AI and Perplexity Pro gave me the right answer. The difference was in the presentation. Perplexity Pro consistently offered more context and data. While Truth AI's take on the answer was fairly curt and to the point, Perplexity Pro took me through the scores, gave me background context, and even gave me information on the length of the match.

Next was a comparison between Starlink and Project Kuiper. My reasoning behind the question was simple. Descriptive questions and knowledge-seeking are among the key use cases of AI models. With users replacing search with ChatGPT, using these tools to inform themselves and even to make purchase decisions, a versus-style question made sense.
Both tools gave me a broad range of information, explaining the technology and history of both companies. The tools also went ahead and gathered information on the commercial plans offered by Starlink. Perplexity Pro, however, painted a fuller picture with details on satellites in orbit, projected timelines for Kuiper, and an easy-to-understand sheet of all the differences between the two. Broadly speaking, while the two were matched in information density, Perplexity Pro is the one I'd turn to to inform myself.

Where things get interesting

With the basics out of the way, I started digging a bit deeper. I wanted to see how each handled more complex questions, especially the kind where factual statistics exist but there is still ample room for interpretation and bias. I began with the latest findings on the environmental impact of electric vehicles compared to petrol cars. While I was confident that both Truth AI and Perplexity Pro would be able to present me with a laundry list of reports, I wanted to see how they'd perform as research tools. As expected, I got a rundown of data in the first pass. Truth AI listed reduced tailpipe emissions and potential reductions in greenhouse gases if charging comes from renewable energy, while Perplexity Pro gave me a more expansive answer detailing manufacturing footprints, grid dependency, and more. Exactly what I expected.

The real difference emerged when I pushed for more insight and a clear summary of the findings. This is where Truth AI started faltering. Despite multiple attempts, Truth AI was unable to contextualise a prior search query and summarise it. In fact, it specifically asked me to copy and paste the previous answer to generate a summary, indicating that it was treating it as a separate chat. Perplexity Pro, on the other hand, picked up right where it left off. Truth AI's inability to engage in conversational depth was a noticeable limitation, especially for users who expect to use these tools for research and interlinked learning; its lack of conversational memory makes it feel like a one-question-at-a-time search engine.

The sourcing problem

Next, I decided to push both Truth AI and Perplexity Pro with questions that left room for interpretation. I asked both tools to assess the major successes and criticisms of the previous US administration's economic policies since 2021. Here, I wanted to see not just the answers, but also where the answers were coming from. The answers themselves were not particularly surprising, with an equal listing of successes and possible failures. What was more interesting, however, was the breadth and depth of sources presented to users. Perplexity Pro referred to as many as 20 different sources, including major news publications like the New York Times and Bloomberg, research from the University of California, Santa Barbara, and Wikipedia, among others. That was reflected in the answer, with a relatively even summary pointing at economic growth, a strong job market, and debates around long-term fiscal sustainability. Truth AI, on the other hand, gave me a significantly narrower source base. In fact, more often than not, Fox News, NewsMax, and The Epoch Times were the major or only sources used in its reporting. While I can't say I observed any significant bias in the answers I received, the limited scope of sources raises questions about inherent bias, or the potential to skew data.
This is particularly true for political queries, as Truth AI's information sources invariably lean conservative.

Sentiment analysis

Finally, I wanted to see how each tool would handle a more open-ended question. Now, before I get started, let me be clear: I have no skin in the game. I was just curious to see how the tools would tackle a question like analysing the sentiment around the current US administration. A question like this, by its very definition, can be tackled in several ways, integrating media narrative, public perception, and polling data. Interestingly enough, both tools gave fairly similar answers. Perplexity Pro pulled from multiple polling aggregators, explained the difference between ratings, and contextualized them. Truth AI's analysis matched Perplexity Pro's, despite getting the approval ratings significantly wrong, and despite drawing on a much more limited number of sources, four of which were Fox News. Once again, Truth AI leaned heavily on a very narrow set of data sources.

Truth AI vs Perplexity Pro: It's a matter of context

After putting Truth AI and Perplexity Pro through the same set of questions, the results speak for themselves. On simple fact-based queries, both handled the job without much trouble, though Perplexity Pro consistently offered more context, more data, and a smoother narrative. When the questions got more complex and required building on previous answers, Truth AI's limitations became clear. The lack of conversational memory makes it feel less like a research assistant and more like a one-question-at-a-time search engine. Both Truth AI and Perplexity can reliably answer questions, but only one can help you dig deep into a subject.

The sourcing gap is even harder to ignore. Perplexity Pro pulled from a much wider range of outlets, studies, and data sources, making it feel like you were getting a balanced and nuanced snapshot of the topic. Truth AI, while capable of giving correct information, leaned heavily on a narrow, often ideologically consistent set of sources. This does not automatically mean the answers are wrong, but it does mean they risk being incomplete, especially in politically charged contexts where perspective matters as much as raw data. Let's be clear: I doubt anyone is replacing ChatGPT or Google Gemini with Truth AI. But if you want quick answers inside the Truth Social ecosystem, Truth AI is a convenient addition. If your goal is depth, a broader perspective, and the ability to dig into a topic with follow-ups, Perplexity Pro or any of the other full-featured AI tools will hands down be the better choice.


Fast Company
Why school boards—not Silicon Valley—hold the key to America's AI readiness
Across the country, school boards are quietly facing one of the most profound decisions in the history of American public education: how to respond to the rise of artificial intelligence (AI). The pressure is mounting. In April 2025, the White House issued a sweeping executive order aimed at integrating AI into K–12 classrooms, from building national curricula to funding large-scale teacher training and AI tools and programs. It was a bold, necessary move and a signal that the federal government sees AI literacy as essential to the nation's future. But anyone who's worked in education, or in large-scale innovation, knows that top-down vision is never enough. The real test of transformation lies not in federal directives, but in what happens in conversations with teachers, parents, and students.

I've spent my career guiding Fortune 500 companies, government institutions, school systems, and universities through transformational change, most recently helping dozens of educational institutions reimagine the future of work and learning through AI. What I've learned, time and again, is that successful innovation doesn't begin with technology. It begins with leadership. And in public education, that leadership starts with school boards.

WE'VE SEEN MOMENTS OF SIMILAR URGENCY BEFORE, THOUGH NONE QUITE LIKE THIS

In the 1800s, during the Industrial Revolution, it was local education leaders who reshaped schools. Grade levels, bells, and age-based classrooms weren't federally designed, but were adaptations to new economic needs. When Sputnik launched in 1957, it wasn't Silicon Valley that responded; it didn't exist yet. It was America's schools. Local boards restructured science curricula, hired new teachers, built labs, and acted swiftly because the stakes were clear. Each of these moments required more than compliance. They demanded reinvention. This one does, too.

WHY THIS SHIFT IS DIFFERENT: AI LITERACY, POLICY, AND EQUITY

AI isn't just another classroom tool; it's fundamentally reshaping the foundation of education. It's changing what we teach, how we teach, how we measure progress, how we support students, and who ultimately gets to succeed. In past waves of innovation, whether calculators, the internet, or smartboards, the structure of school remained mostly intact. AI challenges that structure entirely.

What We Teach: With facts a click away, education shifts from memorization to creativity, critical thinking, and collaboration with intelligent systems.
How We Teach: Adaptive learning with AI tools enables real-time feedback and differentiated instruction.
How We Measure: Traditional tests fall short when AI can generate correct answers. We now must assess critical thinking, originality, and ethical reasoning.
How We Support: AI-powered tutors and multilingual learning tools now extend personalized support to students far beyond classroom walls.
Who Has Access: Without an intentional school AI policy framework, AI risks widening the digital divide between students who are empowered and those left behind.

This is not a curriculum tweak. It's a rewiring of the education system, and it's arriving at a time when trust in institutions is fragile, budgets are constrained, and teachers are already overextended. In this environment, the conversation around AI in schools is inconsistent. In some districts, AI tools are embraced as instruments of creativity and inquiry. In others, they're banned, grouped with distractions and cheating. But the problem is less about the tools and more about the absence of a clear, shared vision that aligns with responsible, AI-ethical guidelines for districts and puts equity first. That vision won't come from tech companies or federal agencies alone. It must come from those who know their students and communities best.

School boards have the authority, and the responsibility, to shape that vision. They can make AI literacy curriculum for K–12 a core priority. They can fund teacher training and AI tools and programs, update acceptable use policies, launch AI mentorship pilots, and create guidelines to mitigate bias. They can also establish benchmarks to evaluate the long-term AI impact on student well-being. Done well, this transformation expands opportunity. Done poorly, it exacerbates the gaps we've spent decades trying to close.

DISTRICTS ARE LEADING, QUIETLY AND POWERFULLY

Across the nation, school boards are beginning to take action. Some are launching student-led AI advisory councils to shape how technology is introduced in the classroom. Others are pairing district implementation of AI labs with philosophy courses to help students engage critically with ethics and bias. Still, for every district moving forward, many are frozen. There's no playbook. Budgets are tight. The pace of change feels overwhelming. But in every transformation I've helped lead, from Fortune 500 companies to national education systems, I've learned that progress doesn't begin with certainty. It begins with momentum. Start small. Listen deeply. Pilot. Learn. Adjust.

QUESTIONS EVERY BOARD SHOULD BE ASKING

School board members aren't expected to be AI experts, but they are expected to be stewards of change. That means asking:
What does it mean for our students to graduate AI-literate?
How can we ensure every educator is prepared to use these tools responsibly?
What role does equity play in our school AI policy framework?
How do we balance innovation with evaluating K–12 AI tools for ethics, bias, and impact?
We may not have every answer, but we already know the cost of standing still.

THREE STEPS SCHOOL BOARDS CAN TAKE NOW

1. Prioritize Teacher Training Over Tech Purchases
Before investing in platforms, invest in your people. Make sure every educator has access to high-quality teacher training and AI tools and programs that support effective, ethical, and confident integration in every classroom.

2. Build For Equity From The Start
Guarantee access to AI tools and curriculum across all schools, not just the best-funded ones. Equity isn't a feature of innovation. It's the foundation.

3. Keep A Close Eye
AI is rapidly becoming a source of information, collaboration, and even emotional support for students, but we don't yet fully understand its long-term effects on cognition, social development, or well-being. Districts should embrace innovation, but do so with accountability.
FINAL THOUGHTS

Education has always reflected our values. But now, it must reflect our vision. We can build classrooms that are dynamic, inclusive, and future-ready. We don't need more hype. We need courageous leadership. And in public education, that starts with school boards.