
How Florida quietly surpassed California in solar growth
Despite removing climate change from its official state policy in 2024, Florida added more utility-scale solar than California last year, with over 3 gigawatts of new capacity coming online.
"This is not a fluke," said Sylvia Leyva Martinez, senior analyst at Wood Mackenzie. "Florida is now shaping national solar growth."
The surge is being driven by utilities, not rooftop panels. Florida Power & Light alone built over 70% of the state's new solar last year. A state rule lets developers skip lengthy siting reviews for projects under 75 megawatts, which speeds up construction and cuts costs.
"There's no silver bullet," said Syd Kitson, founder of Babcock Ranch, a town designed to be powered almost entirely by solar. "But one thing Florida got right is acceptance. Here, people want solar. And we're proving it works."
Babcock Ranch runs on its own microgrid and stayed online during Hurricane Ian in 2022, while much of southwest Florida went dark.
"We didn't lose power, internet, or water," said Don Bishop, a homeowner there. "That changes how you think about energy."
The economics are doing the rest. With industrial demand rising and natural gas prices climbing, solar is increasingly the cheapest option, even without subsidies.
"Utilities aren't building solar because it's green," Martinez said. "They're doing it because it's cheaper."
But new challenges are emerging.
In July, President Trump signed the One Big Beautiful Bill Act, which accelerates the rollback of solar and wind tax credits. Homeowners lose the federal investment tax credit after 2025, and developers face tighter deadlines and stricter sourcing rules.
"It won't kill the market," said Zoë Gaston, an analyst who follows the solar industry at Wood Mackenzie. "But it makes the math harder."
Analysts now expect rooftop solar installations in Florida to fall 42% over the next five years. And while utility-scale growth continues, grid constraints are becoming an issue. Utilities are pouring money into storage, smart infrastructure, and grid upgrades to keep up.
Babcock Ranch is piloting new microgrid systems to add resilience. The hope is that other communities can take the playbook and adapt it, storm-proofing neighborhoods one block at a time.
"We've been testing this for years," Kitson said. "Now it's about scale. It's about showing others they can do it too."
The bigger question is whether Florida can keep this momentum going without policy support, and while still leaning heavily on natural gas.
"Florida has the solar resources," said Mark Jacobson, a professor at Stanford's Department of Civil and Environmental Engineering. "What's missing is political consistency."
Watch the video to see how Florida became a solar leader and what could slow it down.