Glendale voters to decide on measures related to VAI Resort project
The Brief
Glendale voters have until May 20 to vote on two ballot measures.
Both measures relate to the VAI Resort project.
The 'no' side says the zoning changes related to the project would be irresponsible, granting what they see as unfair entitlements from the city.
GLENDALE, Ariz. - The fate of a major entertainment hotel and theme park will soon be in the hands of Glendale voters.
On May 9, community leaders and supporters of Propositions 401 and 402 gathered in Downtown Glendale.
"It's really good for Glendale," said Glendale City Councilmember Bart Turner.
What we know
According to documents provided by the City of Glendale, Proposition 401 asks voters to approve or reject an amendment to the city's General Plan Map that redesignated the use of a piece of land located south of Cardinals Way, between 94th and 95th Avenues, from "Parks and Open Space" to "Corporate Commerce Center."
Meanwhile, Proposition 402 asks voters to either approve or deny a city ordinance involving rezoning and the VAI Resort.
City officials said a group named Worker Power PAC filed two referendum petitions with the City Clerk's Office on Dec. 20, 2024. If the measures are approved, construction at the VAI Resort can continue as planned. If they are rejected, the council's decisions will be overturned, creating delays and uncertainty about the resort's future.
"2,000 jobs are on the line. $2.2 billion of tax revenue going right back into this community is on the line, as well as a more sustainable and entertaining resort is on the line," said Grant Fisher, President and CEO of VAI Resort.
Dig deeper
The ambitious multi-faceted project, which is set to include four hotels, indoor and outdoor performance venues, and a Mattel theme park, has been under construction since 2021.
Supporters say VAI would transform Glendale into a national and international tourist destination, estimating it would pump thousands into the local economy each year and generate enough sales tax revenue to improve public services without raising local taxes.
"It will increase police staffing, give us better equipment, better training," said Dave Goitia, President of the Glendale Fraternal Order of Police.
The other side
Not everyone, however, agrees.
"The last thing we need is to turn space that was zoned for parks into a parking lot and office building," said Brendan Walsh, Executive Director of Worker Power.
Walsh said the zoning changes would be irresponsible, as they would grant what he sees as unfair entitlements from the city.
"The resort can bring in revenue, it can bring in jobs, but the question is: do they need everything the city is giving them to do that? I don't think they do," said Walsh. "I think the resort can be built, it can be open, it can be a good neighbor, but it needs all the entitlements it needs from the City of Glendale, and I don't think residents are very happy about that."
What's next
The election, conducted by mail-in ballot only, concludes on May 20. Glendale residents should have already received their ballots in the mail.