State officials say generation of renewables not enough to replace oil revenue


Yahoo, 17-04-2025

Apr. 16—New Mexico should've started diversifying its revenue a decade ago.
That's according to Stephanie Garcia Richard, New Mexico's commissioner of public lands. Garcia Richard and other officials spoke Wednesday at an energy discussion hosted by Axios in Santa Fe.
"We don't have time on our side for this," Garcia Richard said.
Meanwhile, state officials are waiting to see how a federal administration unfriendly to renewable energy will affect New Mexico, which has been increasingly promoting renewables and trying to generate revenue from them.
The State Land Office oversees 13 million mineral acres and 9 million surface acres of New Mexico public land. Revenue-wise, Garcia Richard said most state money generated from land leases comes from 2 million acres of state land leased out for oil and gas operations in southeastern New Mexico.
Garcia Richard said that needs to change. She's not the only one who thinks so.
"While we want to continue producing the high levels, we have got to find a way to diversify," Missi Currier, president and CEO of the New Mexico Oil and Gas Association, said on a separate panel at the event.
She pointed out that fossil fuels are a finite resource. "And until our state is able to create a business climate that will attract other businesses here, it will be increasingly difficult to diversify," Currier said.
It's already a challenge to lessen the state's dependence on fossil fuels.
"I've always said that we're never going to replace money from oil and gas development one to one with any other industry," Garcia Richard said. "We have a resource there that is world-class, and so unfortunately we will not be able to replace those billions of dollars one for one to any other diversification tool.
"But we should be looking at all kinds of diversification for our revenue."
She pointed out that renewable energy generation in the state has increased nearly sevenfold since her tenure began in 2019, with roughly 2,800 megawatts of renewable energy now generated on state lands.
The money New Mexico gets from clean energy generation is still nowhere near that of oil and gas.
And the politicization of renewable energy is making the market more volatile. Garcia Richard said Inflation Reduction Act funds are in flux under the Trump administration.
Melanie Kenderdine, secretary of the New Mexico Energy, Minerals and Natural Resources Department, said on another panel that her agency brought in $400 million in federal funds in the last year alone.
"That's a lot of money for New Mexico," she said. "We don't know what is going to happen in this administration (around) budget reconciliation."
Nonetheless, EMNRD is working on what Kenderdine described as an energy analysis of the state, which she hopes can help shape roadmaps around the country and even around parts of the world.
"The renewables and the natural gas need to be working together," Kenderdine said.


Related Articles

The job market is brutal for women executives

Axios, 4 hours ago

It's a brutal time for women executives and others who don't neatly fit the stereotypical ideal of a leader.

Why it matters: Not only has the zeal for diversity that's defined the past decade faded, but backlash from the White House has made firms even less willing to take risks on "nontraditional" candidates, including women, people of color and LGBTQ+ people.

The big picture: For years, executive recruiters were asked to find diverse slates to fill the top spots inside U.S. companies, moving up the numbers, if only slightly, inside these firms. That's not happening anymore, says Lindsay Trout, a partner and talent consultant at executive search firm Egon Zehnder, who finds candidates at the C-suite and board level for large companies.

How it works: A board running a search for a CEO will draw up a list of specifications for candidates. They might say, "We're looking for public company CEOs who are in the tech industry at companies that are at X scale," Trout explains. Go with those specs and "you end up with a list that inevitably excludes females from consideration." It would be like looking for someone to head an NGO and only considering those with experience as a U.S. president or secretary of state.

Zoom out: For a few years, firms would also say they would like to consider a "diverse slate," so you could bring in other qualified candidates with potential who did not exactly meet the criteria. "Now that is not part of the conversation or expectation," Trout says.

"I took my pronouns out of my resume," an editorial leader who was recently hunting for jobs tells Axios, asking for anonymity to speak freely while still searching. They decided to "tone it down," feeling that being too openly queer was hurting the search. They locked down a contract job soon after making those adjustments, though it was not clear whether that was a factor.

Zoom in: There was rising backlash and some fatigue at the board level even before President Trump took office in January and made ending DEI efforts a priority. "People are generally satisfied with the progress that they have made," Trout says.

Context: After a surge of appointments in 2020 and 2021, companies are now naming fewer women and people of color to their boards, Axios reported last year. Some firms have started to reverse course. At Meta, women make up 23% of board directors; just a few years ago, in 2022, that figure was 44%. Over the past year, no women CEOs were recruited into Fortune 500 firms. The ones who nabbed that top spot were internal hires.

Between the lines: The process of looking for diverse slates did drive some resentment and likely hurt some men. "It probably is true that equally as compelling white male talent didn't get the nod" for board roles, Trout says.

Reality check: Many large companies have continued to defend DEI efforts, at least from shareholder attacks. Not all firms have walked away from this work, says Jennifer McCollum, CEO of Catalyst, a nonprofit that advocates for women in the workplace. Scaling back entirely from DEI can lead to increased legal risk, as well as talent loss and reputational damage, she says.

Why unemployment for Black women is rising

Axios, 4 hours ago

The jobless rate for Black women has been creeping higher all year.

Why it matters: This could be a sign of weakness in the overall job market, economists say, though others point to the Trump administration's purge of the federal workforce and its push to eliminate DEI efforts.

By the numbers: The demographic-level data on unemployment from the Labor Department can be volatile month to month, so Axios looked at the three-month trailing average, based on the jobs numbers released Friday. By this measure, Black women's unemployment rose to 5.8% in May, up from 5.3% a year ago, surpassing the jobless rate for Black men, which declined to 5.6%. For white women, the jobless rate has stayed relatively flat. While it rose for white men, their unemployment rate is still below the overall number.

The intrigue: The share of Black women working in the federal government shrank nearly 33% over the past year, per data cited by Bloomberg last month.

Between the lines: Women comprise a slight minority of the federal workforce but represent the majority of employees at the agencies targeted by the White House, including USAID, the Consumer Financial Protection Bureau and the Department of Education, where Black women make up 28% of workers. Women also make up a greater share of probationary workers, the National Women's Law Center notes. Black women are overrepresented in federal employment compared with the private sector, accounting for nearly 12% of the federal workforce in 2020, compared with about 7% of the civilian labor force, according to a federal report.

Zoom out: It's "largely an untold story," according to a ProPublica report last week on how firings have disproportionately impacted Black women. Federal contract work has also fallen, though those numbers are harder to come by.

"The layoffs at the federal level where Black people are more represented, the impacts of the tariffs, particularly on small businesses that hire Black women, and the overall use of DEI as a slur, which may be contributing to a lack of hiring of Black women, all of these factors are probably at play," Andre Perry, a senior fellow at the Brookings Institution, told Bloomberg.

Reality check: The federal job loss doesn't fully explain the decline in overall employment for Black women. Federal employment has fallen by 69,000 this year, per the Bureau of Labor Statistics, but the number is expected to move higher, since those on paid leave or receiving severance are counted as employed. From January to March, Black women's employment fell by a whopping 306,000. It has recovered a bit since then and is now down 233,000 for the year to date. "It's a really large drop and it can't just be the federal layoffs," said Jessica Fulton, a senior fellow at the Joint Center for Political and Economic Studies.

What to watch: Black unemployment in the U.S. always trends higher than the overall number. Historically it has been double the rate of white workers, but that gap narrowed a bit in the post-pandemic job market. A higher Black unemployment rate can be an early recession warning sign, Fulton says.
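The three-month trailing average mentioned above is a simple smoothing technique: each month's figure is averaged with the two months before it, damping the monthly noise in demographic-level data. A minimal sketch, using made-up placeholder rates rather than actual BLS numbers:

```python
# Illustrative sketch of three-month trailing-average smoothing.
# The monthly jobless rates below are hypothetical placeholders, not BLS data.

def trailing_average(rates, window=3):
    """Return the trailing mean over the last `window` values, for each
    month that has a full window of history."""
    return [
        round(sum(rates[i - window + 1 : i + 1]) / window, 2)
        for i in range(window - 1, len(rates))
    ]

monthly = [5.1, 5.9, 5.4, 6.0, 5.5]   # hypothetical monthly jobless rates, %
smoothed = trailing_average(monthly)   # one value per month from month 3 on
```

Note that the smoothed series starts two months late, since the first two months lack a full three-month window; that trade-off is why analysts use it for spotting trends rather than single-month turning points.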

Behind the Curtain: The scariest AI reality

Axios, 4 hours ago

The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work.

Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.

Why it matters: With the companies pouring hundreds of billions of dollars into willing superhuman intelligence into existence, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.

None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.

Two years ago, Axios managing editor for tech Scott Rosenberg wrote a story, "AI's scariest mystery," saying it's common knowledge among AI developers that they can't always explain or predict their systems' behavior. And that's more true than ever.

Yet there's no sign that the government, the companies or the general public will demand any deeper understanding — or scrutiny — of a technology with capabilities beyond human understanding. They're convinced the race to beat China to the most advanced LLMs warrants the risk of the Great Unknown.

The House, despite knowing so little about AI, tucked language into President Trump's "Big, Beautiful Bill" that would prohibit states and localities from enacting any AI regulations for 10 years. The Senate is considering limitations on the provision. Neither the AI companies nor Congress understands what the power of AI will be a year from now, much less a decade from now.

The big picture: Our purpose with this column isn't to be alarmist or "doomers." It's to clinically explain why the inner workings of superhuman intelligence models are a black box, even to the technology's creators. We'll also show, in their own words, how CEOs and founders of the largest AI companies all agree it's a black box.

Let's start with a basic overview of how LLMs work, to better explain the Great Unknown: LLMs — including OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — aren't traditional software systems following clear, human-written instructions, like Microsoft Word. In the case of Word, it does precisely what it's engineered to do. Instead, LLMs are massive neural networks — like a brain — that ingest massive amounts of information (much of the internet) to learn to generate answers.

The engineers know what they're setting in motion, and what data sources they draw on. But the LLM's size — the sheer inhuman number of variables in each choice of "best next word" it makes — means even the experts can't explain exactly why it chooses to say anything in particular.

We asked ChatGPT to explain this (and a human at OpenAI confirmed its accuracy): "We can observe what an LLM outputs, but the process by which it decides on a response is largely opaque. As OpenAI's researchers bluntly put it, 'we have not yet developed human-understandable explanations for why the model generates particular outputs.'"

"In fact," ChatGPT continued, "OpenAI admitted that when they tweaked their model architecture in GPT-4, 'more research is needed' to understand why certain versions started hallucinating more than earlier versions — a surprising, unintended behavior even its creators couldn't fully diagnose."

Anthropic — which just released Claude 4, the latest model of its LLM, with great fanfare — admitted it was unsure why Claude, when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair.

This was part of responsible safety testing — but Anthropic can't fully explain the irresponsible action. Again, sit with that: The company doesn't know why its machine went rogue and malicious. And, in truth, the creators don't really know how smart or independent the LLMs could grow. Anthropic even said Claude 4 is powerful enough to pose a greater risk of being used to develop nuclear or chemical weapons.

OpenAI's Sam Altman and others toss around the tame word "interpretability" to describe the challenge. "We certainly have not solved interpretability," Altman told a summit in Geneva last year. What Altman and others mean is they can't interpret the why: Why are LLMs doing what they're doing?

Anthropic CEO Dario Amodei, in an April essay called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." Amodei called this a serious risk to humanity — yet his company keeps boasting of more powerful models nearing superhuman capabilities.

Anthropic has been studying the interpretability issue for years, and Amodei has been vocal in warning that it's important to solve. In a statement for this story, Anthropic said: "Understanding how AI works is an urgent issue to solve. It's core to deploying safe AI models and unlocking [AI's] full potential in accelerating scientific discovery and technological development. We have a dedicated research team focused on solving this issue, and they've made significant strides in moving the industry's understanding of the inner workings of AI forward. It's crucial we understand how AI works before it radically transforms our global economy and everyday lives." (Read a paper Anthropic published last year, "Mapping the Mind of a Large Language Model.")

Elon Musk has warned for years that AI presents a civilizational risk. In other words, he literally thinks it could destroy humanity, and has said as much. Yet Musk is pouring billions into his own LLM, called Grok. "I think AI is a significant existential threat," Musk said in Riyadh, Saudi Arabia, last fall, putting the chance "that it goes bad" at 10%-20%.

Reality check: Apple published a paper last week, "The Illusion of Thinking," concluding that even the most advanced AI reasoning models don't really "think," and can fail when stress-tested. The study found that state-of-the-art models (OpenAI's o3-mini, DeepSeek R1 and Anthropic's Claude-3.7-Sonnet) still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero "beyond certain complexities."

But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is long and often too technical for casual readers to fully grasp. It's wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies. It captures the belief — or fear — that LLMs could one day think for themselves and start to act on their own.

Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly. You can dismiss it as hype or hysteria. But researchers at all these companies worry that LLMs, because we don't fully understand them, could outsmart their human creators and go rogue. In the AI 2027 report, the authors warn that competition with China will push LLMs potentially beyond human control, because no one will want to slow progress even if they see signs of acute danger.

The safe-landing theory: Google's Sundar Pichai — and really all of the big AI company CEOs — argue that humans will learn to better understand how these machines work and find clever, if yet unknown, ways to control them and "improve lives." The companies all have big research and safety teams, and a huge incentive to tame the technologies if they want to ever realize their full value.
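The "best next word" idea the article describes can be illustrated with a toy stand-in. Real LLMs score candidate words with a neural network over billions of parameters; this sketch instead uses simple bigram counts over a tiny made-up corpus — which is exactly why its choices are fully explainable, unlike an LLM's. The corpus and function names are illustrative, not anything from the article:

```python
# Toy "best next word" predictor: count how often each word follows
# each other word in a tiny corpus, then pick the most frequent successor.
# Real LLMs replace these counts with an opaque neural network.

from collections import Counter, defaultdict

corpus = "the model picks the next word the model likes".split()

# Tally successors for every adjacent word pair (bigram).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def best_next_word(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(best_next_word("the"))  # "model" follows "the" twice, "next" once
```

With a model this small, you can point at the exact counts behind every choice. The interpretability problem the article describes is that in an LLM the equivalent of those counts is smeared across billions of learned weights, with no human-readable tally to inspect.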
