
San Francisco rolls out Microsoft's Copilot AI for 30,000 city workers
The city will use Microsoft 365 Copilot Chat, powered by OpenAI's GPT-4o, for employees like nurses and social workers to improve city services.
"It's going to allow us to use LLMs and produce faster response times," Lurie said.
Lurie's administration said the move will make San Francisco one of the world's largest local governments to leverage AI.
City hall said Copilot will be made available across its departments to tackle administrative work like data analytics and drafting reports, giving workers more time to respond to residents.
The move comes after a six-month test involving more than 2,000 city workers that showed generative AI gave them productivity gains of up to five hours weekly.
Lurie said the city used the 311 city services line as a test case, which showed ways to improve response times for issues like trash pickup and homeless encampments, as well as language translation.
"We have over 42 languages spoken here in San Francisco," he said. "We don't always have enough translators to do all that. The AI tool is going to help us do that in seconds."
While San Francisco is home to AI leaders from Anthropic to OpenAI and more, the city is relying on AI technology that will be available under its existing license with Microsoft, coming at no additional cost to the city, the mayor's office said.
Lurie said he wants San Francisco to be "a beacon for cities around the globe on how they use this technology, and we're going to show the way."
Related Articles


Android Authority
This is why I use two separate ChatGPT accounts
I'll admit it: I'm a bit of a recovering AI addict. While I've had mixed feelings about AI from the start, as someone who spends a lot of time lost in thought, I've found it can be a useful tool for ideation, proofreading, entertainment, and much more. Recently, I've started scaling back my usage for reasons beyond the scope of this article, but for a while, I actually had two paid ChatGPT accounts. I know what you're thinking, and you're right, it's a bit excessive. Still, in some cases, it really can make sense to have two accounts.

It all started when I found myself constantly hitting usage limits for my personal projects and entertainment, leaving me in a lurch when I needed AI for work-related tasks. For those who don't know, the ChatGPT Plus tier has different limits depending on the model. Some, like the basic GPT-4o, are virtually unlimited, while others have a firm daily or weekly cap. For example, o3 lets you send 100 messages a week, while o4-mini-high gives you 100 messages a day and o4-mini gives you 300 a day. Outside of basic stuff like editing, I tend to rely most on o3 and o4-mini-high, because they're actually willing to tell you that you're wrong, unlike many of the other models, which are people-pleasers to the extreme.

Realizing I was blowing through my message limits long before the week was up, I immediately started considering my options, including adding a Gemini subscription instead of a second ChatGPT one. Truthfully, I had tried both before and always found myself coming back to ChatGPT, so the decision was basically made for me.

At that point, I began manually migrating some of my old chats over to the new account, basically copying and pasting core logs into ChatGPT and deleting the records from my original mixed-use account. As a freelancer, my goal was to make sure anything related to clients was separated from my personal projects, which were mostly entertainment or experimental (like messing around with the API and similar tools just to learn).

It wasn't even just about the limits; the separation proved helpful beyond keeping me from blowing through my quota on the wrong thing. As you might know, ChatGPT can learn your preferences. It's not exactly learning or memory in the traditional sense; instead, it builds an abstract pattern of your communication styles and preferences. Let's just say my way of talking about personal matters is very different from my professional voice. Lots of cursing and the like. After splitting my usage, I noticed that ChatGPT actually became better suited to the specific tasks I was performing on each account, as it understood my preferences for each use case a little better. That's probably an oversimplification of how ChatGPT works, but you get the idea.

These days, I no longer pay for two accounts since I don't rely as heavily on ChatGPT or any AI tool anymore, but it's useful to keep my old logs around, so I still have a ChatGPT Plus account for business and a free account for personal use. This way, I also retain the option of renewing my paid subscription if my usage habits change again in the future.

How do you sign up for two accounts, and is this a TOS violation?
Think you could benefit from a second account? Signing up for two accounts is easy as long as you have at least two different email addresses. For payment, I used two different credit or bank cards, though it's unclear whether that's really necessary. The bigger question is whether it's actually okay to do this, or whether your accounts will get suspended for violating policy.

When I first considered this, I did my research. According to the Terms of Service (TOS), there's no firm rule against having two accounts as long as you aren't purposely trying to circumvent usage limits. My first thought was, 'Well, I kind of am.' After all, running out of limits was a big part of my problem. Still, by separating accounts, I was doing more than just trying to increase my limits. By dividing business and personal/entertainment uses, I was also organizing my information better, and I was making sure I didn't use up all my limits on personal stuff that would hurt my work productivity. Before, I'd burn through my limits pretty quickly on silly time-wasting stuff like writing alternate-timeline fiction and other entertainment.

Ultimately, having two accounts can be a bit of a gray area, but as long as you're careful about how and why you use each account, it's not technically against the TOS. For what it's worth, ChatGPT agrees, with some caveats. As the AI explains, two accounts are fine if:

- Your main reason for separating is genuinely to keep business and personal activities distinct: billing, data, privacy, and not accidentally using up the business quota on personal stuff. This is a reasonable, defensible use.
- You had one account and were hitting limits due to mixed usage; in that case, it's normal (and frankly smart) to create a second account for business, especially if your work depends on reliable access.

As noted by the ChatGPT bot itself, the TOS is mainly aimed at stopping people from abusing the system by creating multiple accounts to stack free or paid uses, or for heavy API stacking. Reading the actual TOS seems to paint the same picture. Could this kind of gray-area usage still attract attention from OpenAI staff? Maybe, but as long as you're genuinely separating your use cases, there shouldn't be any major issues. In fact, it's common practice to create accounts specifically for business use, including for tax purposes, so I'd wager this is probably more common than many realize.
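The author mentions separating API experiments from client work as well. For anyone who wants the same business/personal split at the API level, here is a minimal sketch using the official openai Python package; this is an illustration, not something from the article. The environment variable names, prompts, and the ask helper are hypothetical, and each key is assumed to belong to a separate OpenAI account so billing, data, and rate limits never mix.

```python
import os
from openai import OpenAI  # official OpenAI Python SDK (openai>=1.0)

# Hypothetical variable names: each key belongs to a separate OpenAI
# account, so billing, chat data, and rate limits stay independent.
work_client = OpenAI(api_key=os.environ["OPENAI_KEY_WORK"])
personal_client = OpenAI(api_key=os.environ["OPENAI_KEY_PERSONAL"])

def ask(client: OpenAI, prompt: str) -> str:
    """Send one chat prompt through the given account and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Client work never touches the personal quota, and vice versa.
print(ask(work_client, "Proofread this client deliverable: ..."))
print(ask(personal_client, "Continue my alternate-timeline story: ..."))
```

Routing each use case through its own key mirrors the account split described above, and it keeps the two billing trails separate, which is handy come tax time.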


Fox News
AI can simulate a teacher, but it can't shepherd a soul
Across America, education is changing at a pace few could have imagined even a decade ago. Artificial intelligence is being deployed to train machines to teach our children. School systems are embedding gender ideology and political agendas into their curriculum with little regard for parental input. At the same time, traditional values are being pushed to the margins, and our students are caught in the middle. We're witnessing a moment that demands urgent attention: Will we preserve the soul of education, or surrender it to machines and ideologies that don't know the human heart?

The recent announcement that Microsoft, OpenAI and Anthropic are launching an AI-powered "teacher training academy" is a sign of what's coming. At first glance, it sounds innovative, maybe even helpful. But the deeper question is this: What is education really for?

Education is not merely about delivering information. It's about shaping lives. At its best, it forms character, teaches discernment and prepares students to serve their communities with integrity. That kind of formation can't be automated. It requires a relationship. It requires mentorship. It requires people.

At Southeastern University, we're not anti-technology. In fact, we're actively exploring ethical ways to use AI to serve students and improve access to learning. But we draw a clear boundary: AI can assist educators, but it must never replace them. Because no machine, no matter how advanced, can love a student, model virtue or guide a young adult toward a life of purpose. We believe every student is made in the image of God, and that shaping a soul requires more than a circuit board.

The deeper danger in this shift isn't just logistical, it's philosophical. When we hand over the role of teacher to algorithms, we're also handing over the power to decide what gets taught, how it's taught and what values are emphasized. In an era where AI is already reflecting and amplifying ideological bias, that should concern every parent and educator in America. Are we prepared to let unelected tech companies, with their own worldviews and profit motives, determine how future generations learn, think and believe?

This is about more than machines. It's about the mission of education itself. We must reclaim education as a deeply human, deeply moral enterprise. We must elevate teachers, not replace them. We must ensure that classrooms remain spaces where students are not just informed but transformed, not just prepped for a career, but prepared for a calling.

Now is the time for Christian colleges and universities to lead boldly. We must protect the relational core of learning, model truth-seeking without compromise, and remind our nation that the formation of a soul can't be outsourced. Let's use technology, but never be used by it. Let's build an educational future that reflects the dignity of every student, and the responsibility we have to shape them with wisdom, courage and conviction.
Yahoo
It's Never ‘Happened in the History of Tech to Any Company Before': OpenAI's Sam Altman Says ChatGPT is Growing at an Unprecedented Rate
When Sam Altman, CEO of OpenAI, described the extraordinary surge in user demand following a viral AI launch, he offered a candid glimpse into the operational pressures that come with leading the artificial intelligence (AI) revolution. Altman's remarks, delivered during a Bloomberg Originals interview, capture both the scale of recent events and the practical constraints that even the world's most advanced AI companies must contend with.

Speaking about the massive spike in users resulting from the launch of Studio Ghibli-style images in a recent ChatGPT release, Altman recounted, 'This level of virality is an unusual thing. This last week, I don't think this has happened in the history of tech to any company before. I've seen viral moments, but I have never seen anyone have to deal with an influx of usage like this.'

Altman's experience, while anecdotal, is rooted in the realities of managing systems that, in a matter of hours, can attract millions of new users. When pressed on the numbers, Altman confirmed that OpenAI added more than a million users in just a few hours, an unprecedented feat even by the standards of Silicon Valley.

The technical demands of such growth are immense. Altman explained that generating images with the latest AI models is a computationally intensive process. To cope with the surge, OpenAI had to divert compute resources from research and slow down other features, highlighting the finite nature of its infrastructure. 'It's not like we have hundreds of thousands of GPUs sitting around spinning idly,' he noted, underscoring the limits faced even by leading AI firms.

Altman's authority on these matters is well established. As the architect behind OpenAI's rise, he has overseen the development and deployment of some of the most influential AI systems in the world. His leadership has been marked by a willingness to confront both the opportunities and the constraints of large-scale AI. The decisions to borrow compute capacity and restrict certain features reflect a pragmatic approach to resource management, a challenge that is increasingly central as AI adoption accelerates.

The quote also reveals Altman's forward-looking mindset. He described reviewing a list of planned feature launches and realizing that, without additional compute resources, not all could be delivered as intended. 'More compute means we can give you more AI,' he concluded, succinctly connecting infrastructure investment to the pace of innovation.

Altman's comments resonate in a market environment where demand for AI services routinely outstrips supply. The rapid adoption of generative AI tools has forced companies to rethink their infrastructure strategies, driving massive investments in data centers, GPUs and cloud capacity. Industry observers note that such surges in usage are likely to become more common as AI applications proliferate across sectors.

In sum, Sam Altman's reflections on OpenAI's viral growth episode provide a window into the operational realities of modern AI development.
His experience and measured responses reinforce his reputation as a leader capable of steering his company through both the promise and the growing pains of technological transformation.