

Q3 2025 IREN Ltd Earnings Call

Yahoo

May 15, 2025


Participants:

  • Mike Power; Director of Investor Relations; IREN Ltd
  • Dan Roberts; Co-Founder and Co-CEO; IREN Ltd
  • Kent Draper; Chief Commercial Officer; IREN Ltd
  • Belinda Nucifora; Chief Financial Officer; IREN Ltd
  • Nick Giles; Analyst; B. Riley Securities
  • Reginald Smith; Analyst; JPMorgan
  • Darren Aftahi; Managing Director, Senior Research Analyst; Roth Capital Partners LLC

Operator
Good day and thank you for standing by. Welcome to the IREN Q3 FY25 results conference call. At this time, all participants are in a listen-only mode. Please be advised that today's conference is being recorded. (Operator Instructions) I would now like to hand the conference over to your speaker today, Mike Power, Director of Investor Relations.

Mike Power
Thank you, Josh. Good afternoon and welcome to IREN's third-quarter FY25 results presentation. My name is Mike Power, Director of Investor Relations, and with me on the call today are Daniel Roberts, Co-Founder and Co-CEO; Belinda Nucifora, CFO; and Kent Draper, Chief Commercial Officer. Before we begin, please note this call is being webcast live with an accompanying presentation. For those that have dialed in via phone, you can ask a question via the moderator after our presentation. I would like to remind you that certain statements that we make during the conference call may constitute forward-looking statements, and IREN cautions listeners that forward-looking information and statements are based on certain assumptions and risk factors that could cause actual results to differ materially from the expectations of the company. Listeners should not place undue reliance on forward-looking information or statements. Please refer to the disclaimer on slide 2 of the accompanying presentation for more information. Thank you, and I will turn the call over to Dan Roberts.

Dan Roberts
Thanks, Mike. Good afternoon, everyone, and thank you for joining IREN's third quarter FY25 earnings call.
I'm Daniel Roberts, Co-Founder and Co-CEO of IREN, and today we will provide an update on our financial results for the quarter ended March 31, 2025, along with some operational highlights and some strategic updates from both our Bitcoin Mining business and our AI Infrastructure vertical. We'll then end the call with Q&A.

So starting with the highlights. Q3 was another strong quarter, operationally and financially. We delivered our second consecutive quarter of profit after tax, where we posted $24 million in net profit. This reflects 28% growth quarter on quarter. Revenue hit a record $148 million for the quarter, driven by growth in both our Bitcoin Mining and our AI Cloud businesses. Adjusted EBITDA came in at just under $83 million, also a record for us. Operationally, we continued our cadence of delivering 50 megawatts of data centers every month with the energization of Childress phase four. And during the quarter, we averaged 29.4 exahash of operating mining capacity, which represents a nearly 5x uplift year on year.

These results reinforce both the earnings power of our growing data center platform, along with the strength of our procurement, engineering, construction, mining, and AI teams, who simply continue to execute. We anticipate this earnings momentum to continue into fiscal Q4 as we further progress on our key growth initiatives, which I'll now come on to.

Looking forward, our strategy is anchored across value-accretive investments in both Bitcoin Mining and AI Infrastructure. On Bitcoin Mining, we're on track to reach 50 exahash of installed capacity by June 30. That milestone represents a 4x increase from only 10 exahash in June last year and cements us as one of the world's largest and, importantly, lowest-cost Bitcoin producers. But we're pausing further mining expansion at that point.
That decision is deliberate. While mining remains highly profitable, we see more compelling shareholder value creation in AI Infrastructure, and we want to be disciplined in capital allocation.

On AI Cloud, momentum continues. Revenues are increasing, underpinned by new contracts and customer retention. Our GPU fleet has been running at or near full utilization with hardware-level margins north of 95%. Kent will speak to this more later.

In our AI Data Centers vertical, we're advancing two significant buildouts. The first is Horizon 1, a 50-megawatt liquid-cooled data center targeting Q4 2025 delivery. It's been designed and built for next-generation AI workloads, supporting 200-kilowatt racks, which is around 20 times the rack density of traditional data centers. Again, we'll talk a little bit more to rack density later in the presentation. The second significant buildout is Sweetwater, our 2-gigawatt flagship data center hub in West Texas. 1,400 megawatts at Sweetwater 1 is on track for energization in less than a year now, in April 2026. The power is contracted, long-lead equipment is secured, and site preparation and construction is underway.

A few notes on funding and structure. First, we continue to practice disciplined capital allocation, particularly in the face of broader market volatility like we've seen over the past few months. Our decision to pause further Mining CapEx is a good example of this discipline in action. We've also engaged advisors across multiple debt financing work streams. Discussions are active and we expect execution in the coming months as markets continue to stabilize.
And finally, as previously noted, we will be transitioning to US domestic issuer status from July 1 this year. That will align our reporting with US GAAP and reflect our increased US asset footprint, along with the increased US investor base.

In summary: record performance this quarter, consecutive profitability, near-term milestones all on track, and clearly a capital discipline lens as we focus on high-return infrastructure growth and value creation for shareholders in the AI space.

Now I'm going to talk a little bit about Bitcoin Mining. This slide speaks to the performance both in absolute financial terms and in efficiency, especially in our Mining segment. What we're showing here is not just growth, but how we're executing well across multiple key operating and financial metrics. Despite macro headwinds, we're maintaining margins, we're scaling, and we're using operating cash flow to help fund our growth in the AI vertical.

So let's start with the headline figures. We averaged 29.4 exahash in operating hash rate this quarter. That's up 30% from the second quarter and is driven by the continued buildout at our Childress site and the deployment of new-generation hardware. What we've also seen is 326% year-on-year hash rate growth against only a 40% increase in network difficulty, reinforcing that we're not just growing, we're outpacing the industry and growing our market share. All of this feeds directly into revenue and earnings.

We continue to lead the sector on efficiency. Our fleet-level efficiency remains best-in-class at 15 joules per terahash. Our power costs averaged $0.033 per kilowatt hour at Childress last quarter and are among the lowest of any scaled miner globally. They're assisted by our energy market intelligence and software-driven optimization during price spikes or curtailment events. So all of this translates to strong operational leverage.
And as you can see, as we continue to add scale, our unit economics hold up and even improve in some areas.

Perhaps just as important as this margin is how we fund growth from this point. As many of you on this call will know, we've made a deliberate choice to support our growth, including the growth in AI, using cash flows from daily Bitcoin liquidation rather than raising dilutive equity.

On the top right you can see what all of this efficiency and profitability looks like in different metrics. Our all-in hash cost was $23 per petahash per day versus an average hashprice, or revenue, of $54 per petahash per day, representing over a 50% gross margin, even on a fully loaded cost basis, which includes all direct and indirect OpEx. To look at it a different way, on a per-Bitcoin basis, our all-in cash cost, direct and indirect costs included, was $41,000, as compared to $93,000 in realized revenue per Bitcoin mined this quarter. And again, realized: we liquidated, we achieved an actual price of $93,000 against an actual all-in cost of $41,000. So that's gross profit of roughly $52,000 per Bitcoin for the quarter, locked in.

These margins are clearly top tier in the industry and also include all the cash costs for our AI business vertical. So they give us a strong buffer in the face of network or price volatility and a great platform to scale further from this point onwards, particularly noting that we averaged 29.4 exahash last quarter and we are within weeks of hitting 50 exahash. So, exciting times ahead.

If we turn our minds to the financials along the bottom: revenue grew from roughly $120 million in Q2 to $148 million in Q3, up 24%. Adjusted EBITDA grew from $62 million to $83 million. And on a statutory basis, EBITDA also rose 32% to $82.7 million. Finally, profit after tax increased 28%, from $18.9 million last quarter to $24.2 million this quarter.

So these results are not just about growth, they're about quality of earnings.
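The unit economics Dan walks through can be cross-checked with some quick arithmetic. Below is a minimal editorial sketch using only figures quoted on the call (15 J/TH fleet efficiency, $0.033/kWh power, $93,000 realized revenue and $41,000 all-in cost per Bitcoin); the formulas are our own illustration, not a company disclosure.

```python
# Editorial sketch of the quoted unit economics; figures from the call,
# formulas are illustrative only.

EFFICIENCY_J_PER_TH = 15        # fleet efficiency, joules per terahash
POWER_COST_USD_KWH = 0.033      # average Childress power cost, $/kWh

# 15 J/TH is equivalent to 15 kW of draw per PH/s of hash rate:
# (15 J per 1e12 hashes) * (1e15 hashes/s) = 15,000 W.
kw_per_phs = EFFICIENCY_J_PER_TH * 1e15 / 1e12 / 1000   # = 15 kW
power_cost_per_ph_day = kw_per_phs * 24 * POWER_COST_USD_KWH

revenue_per_btc = 93_000        # realized price per Bitcoin mined
all_in_cost_per_btc = 41_000    # direct + indirect cash costs per Bitcoin
gross_profit_per_btc = revenue_per_btc - all_in_cost_per_btc
gross_margin = gross_profit_per_btc / revenue_per_btc

print(f"Power cost: ${power_cost_per_ph_day:.2f} per PH/s per day")  # $11.88
print(f"Gross profit per BTC: ${gross_profit_per_btc:,}")            # $52,000
print(f"Gross margin: {gross_margin:.0%}")                           # 56%
```

The ~$11.88/PH/day power cost sits inside the $23/PH/day all-in hash cost quoted above, consistent with power being roughly half of all-in cash costs.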
We're expanding while keeping costs maintained and margins resilient. So we're excited about the expansion ahead as we round out from the average of 29.4 exahash you've seen here to hitting 50 exahash in the next few weeks. In summary, we're operating efficiently and profitably, and we're utilizing operating cash flows to support funding our next phase of growth as we scale AI infrastructure.

Now to talk about our progress towards that key near-term milestone, which I've mentioned a couple of times: achieving 50 exahash of installed Bitcoin mining capacity by the end of June. It also highlights how this sets us up, again, as I mentioned earlier, as one of the largest and lowest-cost miners globally, with substantial free cash flow available to support our AI strategy going forward.

As of April 16, 2025, we reached 40 exahash of installed capacity. What's really interesting is that's up from just 1 exahash in December 2022, representing 40x growth in less than 2.5 years. The right-hand chart shows this visually, with our installed hash rate tracking a 361% compound annual growth rate since December 2022. Simply a testament to our team's execution.

We're now on the final leg towards this 50 exahash target. The data centers for phase five of our Childress campus, an additional 150 megawatts of capacity, are nearing completion. The primary substation is already on site and nearing energization; that's the 138 kV to 34.5 kV transformation. And at the bulk substation, we've completed the key upgrades, and the second 345 kV to 138 kV transformer, so this is the high-voltage substation, is scheduled for delivery imminently. This is important because not only does it support the full 750-megawatt deployment, but it also creates and provides additional redundancy as we head into alternate applications for this site, which we'll come on to later in the presentation.

In terms of miners, all the hardware is procured; it's been secured for a little while.
It's now scheduled for shipping from Southeast Asia and is scheduled to land well within the 90-day tariff pause for reciprocal duties. So that covers the full 50 exahash deployment.

Current market conditions, as you can see in that table, based on a $95,000 Bitcoin price, support $588 million in illustrative adjusted EBITDA. The table at the bottom right walks through this. At 40 exahash, which is where we are today, we estimate adjusted EBITDA of around $450 million, already delivering a 60% margin. At 50 exahash, given the unit economics driven by scale efficiencies and fixed-cost leverage, that rises to $588 million and a 62% margin. Now, these aren't projections, clearly we don't know where the Bitcoin price will be, but they do illustrate the underlying profitability of our mining business and some of the resilience going forward.

So at this point, yes, we have made a deliberate decision, despite all this, to pause expansion beyond 50 exahash, even though we had originally contemplated 52 exahash. The reasons for this decision: firstly, it saves us around $43 million of near-term hardware CapEx. Secondly, it allows us to reallocate capital and internal resources, importantly, towards liquid-cooled AI Data Centers. And finally, we still preserve the strategic flexibility, depending on the Bitcoin price, network difficulty, and unit economics, to resume this growth in the future. But for right now, we've built out scale and we're switching our focus to maximizing return on invested capital, and we see the best opportunity to do that in the AI vertical in the near term.

So we're now one of the largest and lowest-cost Bitcoin miners globally. That positions us with meaningful and consistent cash flows, which is particularly valuable in capital-intensive sectors like AI Infrastructure.
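The illustrative EBITDA table can be sanity-checked by backing out the revenue implied by each quoted EBITDA-and-margin pair. A sketch under that assumption (the "implied revenue" figures are our own back-calculation, not numbers quoted on the call):

```python
# Editorial back-calculation from the quoted illustrative figures
# (a $95,000 Bitcoin price is assumed on the call).
cases = {
    40: (450e6, 0.60),   # installed exahash: (adjusted EBITDA, EBITDA margin)
    50: (588e6, 0.62),
}

implied_revenue = {}
for ehs, (ebitda, margin) in cases.items():
    # Annualized revenue implied by the quoted EBITDA and margin.
    implied_revenue[ehs] = ebitda / margin
    print(f"{ehs} EH/s: ~${implied_revenue[ehs] / 1e6:.0f}M implied revenue")
```

Note the margin expanding from 60% to 62% as fixed costs are spread over 25% more hash rate, which is the fixed-cost leverage Dan refers to.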
So our ability to generate that cash allows us to self-fund some of these high-margin, high-growth verticals, such as AI Cloud and liquid-cooled AI data centers, and allows us to minimize dilution for shareholders going forward. We're delivering scale, we're controlling costs, and we're using this platform to drive the next phase of our growth in AI Infrastructure. This is what makes 50 exahash clearly not a stopping point or an end point, but a great place for us to pause and really drive growth in a new vertical.

Over to Kent to talk about one of those verticals.

Kent Draper
Thanks, Dan. As Dan alluded to, our AI Cloud Service is one of two AI verticals that we're scaling today, and it demonstrates our ability to develop and scale AI Infrastructure quickly, which is critical to our success in such an agile market. For context, we launched our AI Cloud Service, or IREN Cloud as it's now known, as a proof of concept back in August 2023, to support our broader AI Infrastructure strategy. The initial deployment began with 248 NVIDIA H100 GPUs in early 2024 and scaled to 1,896 H100 and H200 GPUs, all within less than 12 months. Today the platform is supporting both training and inference workloads for a range of AI-native customers on both a reserved and on-demand basis. All of this was enabled by the versatility of our proprietary data center design and delivery capability.

The GPUs for IREN Cloud were installed in our 50-megawatt data center in Prince George, British Columbia, which had solely been operating Bitcoin mining workloads up to that point. The transition from ASICs to GPUs took just 6 to 8 weeks with minimal incremental CapEx. This flexibility to pivot easily between Bitcoin and AI workloads with speed is one of the major factors giving us a competitive edge in the industry.
We're able to respond extremely rapidly to shifts in market demand and rapid GPU refresh cycles. And while the Prince George site is supporting air-cooled GPUs today, it also has the flexibility to accommodate liquid-cooled configurations in the future, giving us versatility for both current-gen and next-gen GPUs should we expand deployments further.

In terms of our expertise, scaling infrastructure is one thing; doing it reliably and repeatedly is another. That's why we've invested, and continue to invest, in a growing in-house AI Infrastructure team across North America. From network architects, InfiniBand engineers, and cybersecurity specialists to DevOps and cloud go-to-market teams, we've assembled a team with significant depth and range. These aren't third-party contractors; this is all internal talent that we're scaling along with the platform. Additionally, we've built strong direct relationships with key ecosystem partners: NVIDIA for GPUs and networking; Dell, Lenovo, and Supermicro for servers and hardware; Intel for CPUs; and WEKA for high-performance storage solutions. These relationships are foundational to how we scale, not just through procurement, but through roadmap alignment.

The image that you can see on the right here shows our Prince George facility in action. It's one of the few real-world deployments where Bitcoin and AI workloads are operating side by side, connected using shared infrastructure but independently optimized. So IREN Cloud gives us a unique edge. It gives us a proven ability to deliver real AI Infrastructure, not just plan it. It provides internal expertise for deploying, operating, and optimizing this infrastructure. And it's these capabilities that we're now scaling up at Horizon 1 and beyond.

On to the next slide. We saw increased customer engagement during the quarter within our AI Cloud Services business, and today our GPU fleet is running at or near full utilization, with new contract wins post quarter end.
We're seeing a mix of on-demand and multi-year terms, ranging from short-term flexibility up to three-year commitments. Perhaps most exciting for us, we're now supplying white-label compute to leading US AI-cloud providers, supporting both training and inference workloads. This validates both the technical quality of our infrastructure and the depth of our engineering and operations team's experience, and is also proving to help strengthen our AI co-location pipeline.

At Prince George, we still have an additional 47 megawatts of air-cooled capacity. That's enough to support over 20,000 NVIDIA B200s, which are currently in high demand for inference workloads at scale. We're in active dialogue with customers around these deployments and continue to evaluate opportunities, including deploying small clusters of latest-generation GPUs for internal and customer testing. All of this activity sits within an investment framework focused on risk-adjusted returns. We're optimizing the sources and uses of capital and view debt and GPU financing as a very realistic path to scale.

On the right-hand side here you can see 33% quarter-on-quarter revenue growth in our AI Cloud segment, building on the foundation that we established with IREN Cloud last year. Our 97% hardware profit margins, which reflect revenue less electricity costs, highlight the strong margins that we're able to achieve via our vertical integration and ownership of our data centers. The photo below from Prince George, where our GPU and ASIC halls operate side by side, shows the modularity and flexibility of our infrastructure.

So we're not just building this infrastructure, we're actively monetizing it, and we're doing it efficiently with our own data centers and reinvestment of cash flows from Bitcoin mining. This is what sets us apart in the current market and provides a solid foundation for our entry into the AI Data Center market. I'll pass back to Dan for more detail.

Dan Roberts
Thanks, Kent. So that covers AI Cloud.
Now I'm going to talk a little bit about the Infrastructure and the co-location opportunities specifically behind that. What we're showing on this slide is the fundamental "why" behind our pursuit of, and focus on, AI Infrastructure. I think it's clear that the demand profile for AI compute is unlike anything we've seen before across broader markets. I don't think it's exactly speculative anymore, particularly given what we're seeing in the market. This shift is happening fast, it's really happening at scale, and the markets are simply not ready to serve it. I'll come on to that a little bit more.

If we look at the left-hand side, forecast global AI users: on the demand side, the user base is projected to triple from 346 million to over 950 million in the next five years. And this growth is not driven just by consumers on ChatGPT, but also by enterprises embedding LLMs and other AI tools into everyday use, in everything from medical imaging to call centers to software development. The expansion of this user base ultimately leads directly to compute growth. And as inference workloads scale in production environments, they require persistent infrastructure, not just burst compute for training, which brings us to what's on the right-hand side of the slide.

Here we show how that user growth translates to global AI Data Center demand, expected to grow 3.5 times in the next 5 years. That sounds like a big number, but it's even bigger if you understand the energy market and the scale of what this really represents in terms of real-world infrastructure: 44 gigawatts today to 156 gigawatts in 2030. That's more than 100 gigawatts of new infrastructure required globally. And today's traditional data center players simply aren't set up to deliver that at the required scale, speed, or cost.

Now, the 2.9 gigawatts that we've secured, that we've worked really hard over the last six years to secure, is a big number.
We all know it's a really, really big number, but in the context of 100 gigawatts, it's still small. So it sets us up for a really interesting few years ahead.

Meeting this demand requires a fundamentally new class of infrastructure. New designs: rack densities for AI have jumped 225% year on year and look like they will continue to climb; traditional data center designs simply do not work anymore. Faster construction cycles: customers want sites built in 9 to 12 months, not 2 to 3 years, which is really forcing a rethink of how infrastructure is designed and delivered in this sector. And scale: the industry is now talking about 50 to 1,000 megawatt clusters, with customers asking for 250 megawatts or more across multi-year terms. All of this is happening in real time, and the scale and the intensity just continue to climb month on month.

Then there's grid connectivity; this is really a gating factor. A greenfield hyperscale site can take up to five to seven years just to secure transmission access and energization. So that is the critical bottleneck, and it's where we clearly believe IREN has a competitive advantage, along with a few other points, which we'll get to.

We are clearly well-positioned to meet this demand. We've got 2.9 gigawatts of secured power capacity already contracted, including those large-scale grid connections at Childress and Sweetwater. We own our own land, which means we're not reliant on M&A or third-party developers, and we can move quickly when customer demand materializes and is contracted. And importantly, we retain the upside on the infrastructure.

I guess my message here is the demand is real, it's growing more real week by week, and it's infrastructure constrained. We've got the land, we've got the power, and importantly, we've got the engineering and execution capabilities to capture a good share of this growth. And we're making it happen right now.
So if we zoom out briefly and look at how we are positioned to address one of the biggest barriers to adoption. It's not just about the GPUs, it's about the ability to deploy those GPUs with power, cooling, permitting, and speed, and that's where most of the market is getting stuck today. We've spent years assembling the fundamental and foundational ingredients for success.

We've secured those large-scale sites that I mentioned, and they're very quickly becoming strategic emerging AI hubs, particularly in West Texas. We've got a track record in high-density data center delivery going back to 2018. This is not our first rodeo; we have been doing power-dense data centers for seven years. Those of you who have followed us for that time understand that, yes, we had our origins and our roots in Bitcoin mining, but we never went down the path of sea cans, old abandoned warehouses, or shipping containers. We built multifunctional data centers from day one. And as a testament to that, again, I'll repeat it: in Prince George, in the same data center we originally built, we have Bitcoin mining racks operating right next to latest-generation NVIDIA GPUs servicing AI customers, in the exact same data center.

So this experience in engineering, designing, deploying, and then operating power-dense data centers is what our business was set up to do. It's part of our foundation, and we're continuing to do it. The only difference is we're continuing to iterate on that power-dense design to service these future workloads. That in-house development and procurement team is critical. It gives us direct control over project pacing and reduces reliance on third parties; we don't need to sign big contracts with third parties and outsource all of this.
It de-risks those execution timelines, but critically, we still work with tier-one engineering, OEM, and EPCM partners where it makes sense, and they're already engaged on the Horizon and Sweetwater projects. All of this is what allows us to move with certainty, compress timelines, and meet the most demanding specs, whether it's 200-kilowatt racks, liquid cooling, or multi-100-megawatt campuses. And today we've got 2.9 gigawatts of aggregate power capacity across those sites to service this demand.

I guess my message here is simple. We're not just a site developer. We haven't simply signed options on land and gotten lucky on power. We're a builder, we're an operator, and we've been doing this for seven years. We've structured the company to scale infrastructure in the power-dense HPC space as fast as customers can commit. So while demand is clearly global and supply is constrained, we've got the power, the partners, and the execution track record to meet that demand.

The next major milestone in our infrastructure rollout is Horizon 1: 50 megawatts of liquid-cooled AI data centers in Childress, which is currently under development and due for delivery in quarter four this year. As we've mentioned, there's a growing scarcity of sites capable of supporting liquid-cooled GPUs at scale. Horizon 1 is specifically designed to cater to NVIDIA's Blackwell platform, but also beyond, with rack density of 200 kilowatts per rack. So this aligns with the next generation of models.

Customer interest has well exceeded the initial 50-megawatt data center. We've got multiple customers actively engaged in due diligence and commercial negotiations. So it is clear that there is a gap in the market, and we're looking to fill it.

In terms of the specs, just to recap, it's approximately 50 megawatts of IT load for phase one.
We're designing it for 200 kilowatts of rack density, and to put that in perspective, that compares to only 130 kilowatts for the new-generation Blackwells that have been released over the course of this year. We've got full UPS and diesel backup systems, and sub-6-millisecond round-trip latency to Dallas, which supports both AI training and latency-sensitive inference workloads. And finally, forecast CapEx is unchanged at $6 million to $7 million per megawatt of IT load, which we continue to gain conviction is very competitive for liquid-cooled data center deployments in the current market.

Clearly, securing an anchor customer for Horizon 1 is a top priority. It catalyzes our formal entry into the AI Data Center co-location market and, importantly, builds confidence for the broader site development opportunity across the full 750 megawatts available at Childress. It also further differentiates us in the market as we prepare to bring Sweetwater online in April 2026, with the 1.4 gigawatts immediately available at that site.

In terms of financing, we're actively exploring pathways that optimize for capital efficiency. That includes customer prepayments, project-level debt, corporate-level debt, equipment leasing, and convertibles. We're also open to joint ventures, particularly with infrastructure capital providers, as long as they're aligned with our long-term control objectives.

In terms of milestones, the project's progressing along a well-defined delivery schedule and it's on track. We're looking to start earthworks and grading in the coming weeks. In the third quarter, we'll start on the structure, cooling, and electrical systems, and then in Q4, we expect final delivery and readiness for occupancy. Long-lead items are already ordered, procurement teams are very active, and we remain on track to deliver Horizon 1 on the original schedule that we outlined.
So Horizon 1 is our first at-scale AI Data Center, and it's also the model for how we can potentially develop and scale across our broader platform, including Sweetwater. We've locked it all in; now it's a matter of executing over the coming months.

This slide brings the focus back to the 750 megawatts at Childress and our roadmap for potentially transforming this entire site into a world-class liquid-cooled AI campus. There are three main things to take away. Firstly, customer interest, as I mentioned, already exceeds Horizon 1's initial 50-megawatt capacity. This validates and reinforces our decision to invest ahead of the curve and make the commitment to build out this capacity, and it also reinforces why we've already begun work on expanding capacity across the broader site. Secondly, design is now underway for a full 750-megawatt transformation. This is not just additional racks; it's a complete reconfiguration of the site for liquid-cooled AI workloads, with associated upgrades to power redundancy, cooling infrastructure, and network architecture. Thirdly, we're futureproofing. As I mentioned before, 200-kilowatt rack densities, as compared to the 130 kilowatts required for Blackwells, are well above what we expect to be required for the next generation, setting this even further apart from traditional data centers that are really struggling to handle this basic level of density.

I think the quote here captures the design philosophy. The project's not being engineered just for what we need now and over the next 6 to 12 months, based on the roadmap we're seeing on the GPU side, but for what we think AI will demand next.
So we're not just catching up, we're building ahead of the curve, and this is a real step change in capability and in what we're offering.

On the right you can see a photo of the Childress site as of April, clearly very active and nearing completion of multiple new buildings. Below that is a rendering of the 750-megawatt Horizon concept, illustrating how the site might evolve to accommodate 750 megawatts of liquid cooling, high rack density, and AI-specific workloads. So in context: we own the land, the substation, and the infrastructure; we've built in the rack-level specifications required by future workloads, not just what fits today; and it positions Horizon and Childress to be one of the few potential large-scale liquid-cooled AI campuses in North America.

This is a blueprint, not just for Childress, but also for Sweetwater, and for how we can potentially scale our broader platform to meet this rise in demand. Sweetwater is our flagship AI site, which we've mentioned for the last year or so now, and it's a very rare combination of secured land, grid-scale power, and site readiness that is well in motion. To recap some of the fundamentals: 1,800 acres with up to 2 gigawatts of high-voltage power capacity already secured through binding contractual agreements. That's enough capacity to support over 700,000 next-generation GPUs, including liquid-cooled Blackwell GB-series systems. Site-level civil works are already well underway, so we're not waiting on paperwork or anything else; we're actively building at the moment and preparing the site for energization.

It does feel like we might be entering a super cycle of AI Infrastructure buildout, particularly when you look at these forecasts of 125 gigawatts, not megawatts, of AI Data Center capacity over the next five years and over $5 trillion of capital across compute, energy, land, cooling, and networking. And this is where Sweetwater stands out.
Most of that demand is bottlenecked by land use, zoning, grid connection, and politics, and at Sweetwater, we've solved those problems. The site has the potential to support up to $70 billion in end-user AI Infrastructure investment. $70 billion. That's the development site that we've been actively incubating and preparing.

Initial energization is targeted for April next year. All long-lead substation equipment is on order. The looped fiber connection between Sweetwater 1, at 1.4 gigawatts, and Sweetwater 2, at 600 megawatts, is already designed, and the flexibility to scale in 1 to 500 megawatt increments gives us a lot of agility to align CapEx with customer commitments and the discussions that we're having.

There are very few sites in North America, or even globally, with this unique combination of scale, power, land, control, and readiness. We own the land, we've secured the interconnect, and we've started the site readiness work. We believe Sweetwater is one of the most advanced and actionable AI campuses in development today. So with Horizon leading our first delivery and Sweetwater anchoring our medium-term growth pipeline, we really believe we're well-positioned to meet this market momentum.

Over to Kent now to touch on CapEx and funding.

Kent Draper
Thanks. This next slide outlines how we're funding growth, and in particular how our business model provides a unique combination of internal cash flow and external funding flexibility. As of April 30, we had $160 million in cash on the balance sheet. This, combined with strong cash flows from Bitcoin Mining and our AI Cloud, provides significant funding support for our next phase of growth.

We estimate a net funding requirement of up to $250 million over the remainder of 2025, primarily to support the expansion to 50 exahash, which is already nearly complete, and the delivery of Horizon 1, our first liquid-cooled AI Data Center, targeting energization in Q4 2025.
And substation development and site preparation at Sweetwater, to prepare for energization in April next year. In terms of capital markets, we've engaged advisors across multiple debt financing work streams, and we expect execution in the coming months as markets continue to stabilize. The important point here is we're not reliant on equity issuance to grow. We have a strong balance sheet, significant tangible assets, cash-generating operations, and access to diversified capital channels. This capital strategy gives us flexibility to continue scaling AI Infrastructure and maximize returns from the platform that we've built. In terms of illustrative cash flows, the table below shows illustrative annualized adjusted EBITDA outputs under various Bitcoin price assumptions, holding other inputs constant. At the current total network hash rate and a $95,000 Bitcoin price, we show $942 million in Mining revenue. After subtracting power and OpEx costs, and layering in our AI Cloud contribution of $28 million, we arrive at a $616 million adjusted EBITDA figure. You can also see that even at lower prices, for example $60,000 Bitcoin, we still deliver nearly $270 million in adjusted EBITDA. This is thanks to our low-cost power, lean cost structure, and best-in-class hardware efficiency to help smooth that exposure. I'll now pass over to Belinda to walk through the financial results. Belinda Nucifora Good morning to those in Sydney and good afternoon to those in North America. Thank you for joining us for our Q3 FY25 earnings call. As Dan mentioned at the start of the presentation, during the quarter we reported consecutive quarters of profit after tax: $24.2 million for Q3 and $18.9 million for Q2. We delivered record mining revenue of $141.2 million and recorded record adjusted EBITDA of $83.3 million and EBITDA of $82.7 million. The average operating hash rate increased by 30% from 22.6 exahash to 29.4 exahash, and we mined 1,514 Bitcoin at an average realized price of $93,000.
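The illustrative EBITDA sensitivity described above can be approximated with a simple linear model (a sketch under stated assumptions: the annual cost base is inferred from the quoted $95,000 scenario, mining revenue is assumed to scale linearly with Bitcoin price, and the AI Cloud contribution is held constant):

```python
# Sketch of the illustrative annualized adjusted EBITDA sensitivity
# from the call. The fixed cost base is inferred, not quoted.
BASE_PRICE = 95_000          # Bitcoin price in the base scenario ($)
BASE_REVENUE = 942_000_000   # quoted Mining revenue at $95k
AI_CLOUD = 28_000_000        # quoted AI Cloud contribution
BASE_EBITDA = 616_000_000    # quoted adjusted EBITDA at $95k

# Implied annual cost base (power, OpEx, other) from the base scenario
costs = BASE_REVENUE + AI_CLOUD - BASE_EBITDA  # $354M

def adjusted_ebitda(btc_price: float) -> float:
    """Mining revenue scales linearly with price; costs held flat."""
    revenue = BASE_REVENUE * btc_price / BASE_PRICE
    return revenue + AI_CLOUD - costs

for price in (95_000, 60_000):
    print(f"${price:,} BTC -> adjusted EBITDA ~${adjusted_ebitda(price) / 1e6:.0f}M")
```

At $60,000 this reproduces the "nearly $270 million" figure quoted on the call (~$269M under these assumptions).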
During the quarter, the total net electricity cost increased by 30% from $28.9 million to $36.5 million, in line with the increased megawatt usage. Over the quarter, power prices remained relatively flat at $0.036 per kilowatt hour. The average net electricity cost per Bitcoin mined was $24,000. Other costs of $25.3 million remained relatively flat despite a business that continues to deliver significant growth, and continue to support the projected expansion across our AI vertical, as well as the costs associated with regulatory and compliance. Turning to our cash flows, I wanted to note that our consolidated cash flow statements are now presented in line with IFRS, requiring the proceeds from the sale of Bitcoin mined to be classified as cash flow from investing activities. As such, IREN filed a 20-F/A on March 20 of this year for the period ended June 30, 2024, along with a 6-K/A for the previous two quarters of this financial year, restating its cash flows to reflect this. Closing cash at bank on March 31, 2025, was $184.3 million, with receipts from Bitcoin mining activities of $141.2 million and AI Cloud Services of $3.8 million. We had a decrease in cash flow used in operating activities of $11.6 million, which was primarily due to a decrease in our annual insurance payments of $9 million that was made in Q2. We had an increase in net cash used in investing activities of $234.8 million, primarily due to significant milestone payments made on mining hardware. We had a decrease in net cash from financing activities of $349.9 million, primarily due to the net proceeds from the convertible notes received in Q2. During the second quarter, via the ATM, the company issued 10 million shares for gross proceeds of $110.9 million. And since the balance sheet date, we've issued a further 17.4 million shares for gross proceeds of $107.6 million. Turning to the balance sheet.
As of March 31, 2025, total assets recognized were $2 billion, including property, plant and equipment of $1.6 billion, which provides a strong balance sheet to support future growth. In relation to the $440 million convertible note issued on December 6, 2024, in accordance with IFRS we reported a current liability of $346.2 million, including an embedded derivative of $23.7 million, a current asset of $11.7 million for the Capped Call, and a non-current asset of $34.7 million in relation to the Prepaid Forward that was entered into concurrently with the convertible notes. As IREN transitions to US GAAP reporting from July 1, 2025, the accounting for the convertible notes, Capped Call and Prepaid Forward will be reassessed in line with applicable accounting standards. Total equity increased to $1.4 billion, with 10 million shares sold under the ATM during the quarter. I'll now hand back over to Mike to commence the Q&A. Operator (Operator Instructions) Nick Giles, B. Riley Securities. Nick Giles Thanks, operator, and good afternoon, everyone. Thanks for the comprehensive presentation. My first question, Dan: you used the words "ahead of the curve," and it made me think of Dennis and his team, who were really ahead of the curve on building out the ecosystem required in AI Cloud Services. So my question is, ultimately, how should we think about your appetite to fill out the available capacity at Prince George, and growth beyond that? When I look at slide 16 on the CapEx slide, I see the 50 exahash, Horizon 1 and Sweetwater, but I don't see anything explicit on the GPU side. So thank you very much. Dan Roberts No, thanks, Nick. Yes, you're right, those facilities were designed from day one to accommodate rack densities of 70 to 80 kilowatts air-cooled, and have been successfully operating NVIDIA H100s and H200s there over the last 12 to 15 months. The opportunity to scale there is clear. In terms of scaling our AI Cloud, we're very focused on capital and risk-adjusted returns.
So it's all about matching sources and uses, quite frankly. The demand is there on a spot basis. But do we want to incur GPU financing to finance revenues that are short-term? You know that there's a risk profile attached to that. Do we want to use equity to finance further growth in our AI Cloud? Never say never, but given the tools we've got, given the scale, given the access to GPU financing, I think our strong preference is to try and match debt with customer contracts as a way of growing that AI Cloud vertical out. And we're looking at that in parallel, both the GPU financing as well as multiple contract conversations with customers, particularly around Blackwells, in the 100,000-GPU-plus clusters. So we're looking at all that in parallel with customer conversations on Horizon and Sweetwater. Nick Giles Great, I appreciate that. My second question is, you noted being open to JVs, and so I was curious at what stage it could make sense to bring a partner in. Is this something that could accelerate a definitive agreement at Horizon 1, for instance, or would it be more related to additional scaling later on? Dan Roberts It could be anything, but clearly when you've got a $70 billion development project at Sweetwater, we can't deliver all that capital in our current state and with our current market capitalization. So we would need to bring in further partners, absolutely on the project financing and debt side, but also potentially on the equity side. And like all of this, it's all about options and running through the scenarios in front of us: how do you finance this? We've got equity in IREN, given our current market cap and cost of equity. Clearly there's a cost to that, which we've got to be very sensitive to. When you're dealing with private infrastructure players, the cost of capital, if you can get the risk profile right, is substantially lower. I mean, a lot of our backgrounds are in private infrastructure.
So being able to bring that in where it may make sense, to complement what we've built in a listed environment, may deliver better value accretion. There's also a control aspect, so we just need to be careful around whether we want to engage with third-party equity and enter into those joint ventures. But given where we're at with those customer conversations, and given the prospects we're seeing in debt financing instruments of multiple different types, we're pretty optimistic about financing the capital associated with these developments, and really it's about the customer side in the short term. Nick Giles Good to hear, and appreciate all the details, so keep up the good work. Dan Roberts Thanks, Nick. Operator Reggie Smith, JPMorgan, your line is now open. Reginald Smith Sorry, I was on mute. Congrats on the quarter. It's pretty remarkable that you guys have been able to scale your Bitcoin hash rate while still pursuing HPC, so I wanted to give you guys kudos for that. Two quick questions. One, I noticed that you guys called out the growth in your AI Cloud business, and I was curious if you could provide some details on how you're performing there from an uptime and a utilization perspective: what KPIs do you track, and how does that performance benchmark against industry norms and expectations? I think that's probably a very important selling point as you engage the customer. Kent Draper Yeah, I'm happy to take that one. We track a full range of metrics across the operations of that AI Cloud Services business. We have focused a lot within the operations on automation as well as telemetry, and on being able to record a huge amount of data that we're continually feeding back into the way we operate and maintain these systems, to make sure that we're improving and maintaining performance levels over time. In terms of how we've been performing, we get continual feedback from our customers that we are among the best of their cloud providers.
And in some instances there's very clear feedback that both the uptime and our response to any issues that they have are extremely favorable compared to other providers that they're using. And ultimately, the proof is in the pudding with these operations; as I mentioned while presenting earlier, we are seeing in particular this uptake of white-labeling our GPUs for other cloud service providers. And they obviously have extremely good insight into technical capabilities and performance levels, and the fact that they are contracting with us for capacity I think provides a very good sign that our performance and operating levels are extremely good. Reginald Smith No, that's good to hear. If I could sneak one more in: you guys are having discussions with potential tenants at Horizon 1. What milestones or signals are you looking for in the coming months to indicate that you're moving closer to a formal agreement? And then maybe talk a little bit about how the conversations have changed more recently versus what you may have been discussing a few months ago, the texture of the conversations or whatever; any insights you could provide there to give us a sense of how things are progressing and what that looks like as you move through the discussions. Thank you. Dan Roberts Yeah, I'm happy to handle this, Kent, and then you can add in anything. There are conversations ongoing with multiple customers, and I appreciate that's a bit of a hand-wavy statement. So to add a little bit more detail: there have been multiple site visits, like several, lots of detailed due diligence, contractual negotiations, discussions of exclusivities and [ROFRs]; we're in the advanced stage of negotiation with several at the moment, and we're just working through it. So we're highly confident of contracting ahead of commissioning in Q4. Clearly, we're not going to earn revenue before then anyway. So a lot of it is just making sure that we do the right deal with the right counterparty on the right terms.
We get all the technical detail right. We get the contracting structure right. And importantly, there's a lot of conversation with these customers around the pathway to scale. So most of these customers, if not all, are not interested in 50 megawatts. They're interested in the fact that this site can grow to that 200- to 250-megawatt mark over the coming period. But equally, some of them are looking beyond that. So that's where we spoke about the 750 megawatts potentially all becoming liquid-cooled AI Data Center capacity, looking at what we're hearing from the customers. I mean, there's a bookend: we might be sitting here in two years and there's no more Bitcoin. But that's just the reality of what we're doing. We're not religious, we're not wedded to anything other than driving the highest creation of value for ourselves and shareholders, and that's driving the decision making. So if we can contract the full 750 megawatts on better risk-adjusted terms than Bitcoin mining, we'll do it. Kent Draper Yeah, the one thing I'd add to that, in addition to those ongoing conversations that Dan mentioned and the site visits and technical due diligence, is that we continue to see good levels of demand from new potential customers. So we continue to see a lot of inquiries from customers that we haven't previously interacted with, so it does seem clear to us that the level of demand, particularly in the near term for liquid-cooled data centers, is driving a lot of those interactions. Reginald Smith Got it, that makes sense. If you decided to transform, would you be able to continue to run Bitcoin mining until a full cutover occurred, or how would that work? Kent Draper Yeah, so it's a bit of a combination. At the Childress site, Dan had the rendering up earlier as to what a potential full-site build-out of liquid-cooled capacity could look like, and there is ample space at that site to be able to build additional phases of Horizon.
On areas that currently haven't been built out, as well as, in the future, retrofitting the existing buildings for further capacity. And obviously, with the approach that you're taking there, if it is new-build phases from the ground up, there's no interruption to your Bitcoin mining activities until right near the end, when you switch the power across and power up the new liquid-cooled data centers. Where you are undertaking retrofits of existing capacity, then yes, you do need to take that capacity offline at some point prior to the new liquid-cooled capacity coming online. Reginald Smith Understood. Thank you so much. Operator Darren Aftahi, Roth. Darren Aftahi Hi, guys. Thanks for the questions, and congrats on the quarter. Kind of a clarification on the CapEx spend per megawatt: you mentioned it includes UPS and diesel gensets. I guess, how are you able to reach that CapEx spend when it seems like it's below market? And then, in the conversations you're having with potential parties at Horizon 1 and maybe beyond, can you characterize what those clients might look like? Is it hyperscalers, neoclouds, large enterprise, all of the above? Any color would be great. Thanks. Kent Draper Yeah, happy to touch on the cost element there. So in terms of the build-out, as Dan mentioned, we've been doing power-dense data centers for over seven years now. So we are extremely experienced in building out these facilities. We've spent a lot of time optimizing our data center design.
And importantly, in the build-out for Horizon 1, we're doing it in a way that utilizes a lot of the existing data center design, so the same building shells, a lot of the electrical infrastructure is very similar, and then we're just layering in the redundancies that these AI customers are ultimately looking for, in terms of what you mentioned around gensets and UPS. What that enables is that we're able to do it in an extremely cost-efficient manner versus a traditional new AI Data Center build-out, where people may be using, for example, concrete building shells, which require a significant amount of additional CapEx versus our design. So everything that we're delivering is consistent with what customers expect, and we know that because we've been going through these detailed technical due diligence conversations with them over the past number of months. So that's really the key element as to how we're able to achieve better cost. Dan Roberts I think just to add to that, this isn't a small team of finance guys just trying to sign a contract and then outsource everything on the technical side. As Kent said, we've been doing this for seven years. It's a founder-led business where every single element of every data center and everything we do goes back to first principles. But whether people want to acknowledge it or not, this is an entirely new asset class; power-dense data centers are fundamentally different in terms of how they're developed, how they're engineered, how they're operated, and we've had the benefit of seven years from the ground up optimizing everything. No one believed that we could build air-cooled data centers for $650,000 a megawatt that run next-generation AI workloads, H100s, H200s, but we've been doing it for 15 months. And it's the same thing with all this. It's just a bottom-up analysis: how much do the raw materials cost?
What's the most efficient way to assemble everything, not signing layers upon layers of contractors, designers, builders, etc.? It's all controlled in-house. And I think you're right, it's going to be a competitive advantage, the ability to deliver cost at this level. Darren Aftahi Thank you. Operator Thank you. I would now like to turn the call back over to Dan Roberts for any closing remarks. Dan Roberts Thank you. Thanks again to everyone for the questions and also for joining us today. As you've heard throughout this call, IREN continues to deliver consecutive quarters of profitability, substantial free cash flow, and really strong execution across both Bitcoin and AI. So we've built a business that performs through the cycle. You can see those scenarios all the way down to a $33,000 Bitcoin price, and all the way up to wherever your minds would like to go. Not just when Bitcoin is running, but through disciplined operations, efficient infrastructure, and capital allocation that stacks up in any market. So we lead on fundamentals, and that's what sets us apart. It's what allows us to fund growth from the cash flows that we're generating, while still scaling into simply one of the most exciting infrastructure opportunities of our time. So we're incredibly excited about what lies ahead in AI, and really confident in our ability to capture that upside, but capture it in the right way. So thanks again. We look forward to updating you all next quarter. Thank you.

Improvements in 'reasoning' AI models may slow down soon, analysis finds

Yahoo

13-05-2025

  • Business
  • Yahoo

Improvements in 'reasoning' AI models may slow down soon, analysis finds

An analysis by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to eke massive performance gains out of reasoning AI models for much longer. As soon as within a year, progress from reasoning models could slow down, according to the report's findings. Reasoning models such as OpenAI's o3 have led to substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. The models can apply more computing to problems, which can improve their performance, with the downside being that they take longer than conventional models to complete tasks. Reasoning models are developed by first training a conventional model on a massive amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems. So far, frontier AI labs like OpenAI haven't applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training, according to Epoch. That's changing. OpenAI has said that it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. And OpenAI researcher Dan Roberts recently revealed that the company's future plans call for prioritizing reinforcement learning to use far more computing power, even more than for the initial model training. But there's still an upper bound to how much computing can be applied to reinforcement learning, per Epoch. Josh You, an analyst at Epoch and the author of the analysis, explains that performance gains from standard AI model training are currently quadrupling every year, while performance gains from reinforcement learning are growing tenfold every 3-5 months. The progress of reasoning training will "probably converge with the overall frontier by 2026," he continues. 
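The convergence claim can be illustrated with a rough growth-rate comparison (a sketch; the starting compute gap and the exact growth cadences below are assumptions for illustration, not figures from Epoch's report):

```python
import math

# Assumed trajectories: RL training compute scales ~10x every 4 months,
# total frontier training compute scales ~4x per year, and RL compute
# starts ~1000x below the frontier.
rl_growth_per_month = 10 ** (1 / 4)        # 10x every 4 months
frontier_growth_per_month = 4 ** (1 / 12)  # 4x every 12 months
initial_gap = 1_000                        # assumed head-room factor

# Months until RL compute catches the frontier: solve
#   initial_gap = (rl_rate / frontier_rate) ** months
months = math.log(initial_gap) / math.log(
    rl_growth_per_month / frontier_growth_per_month
)
print(f"RL compute converges with the frontier in ~{months:.0f} months")
# ~15 months under these assumptions, i.e. roughly 2026 from mid-2025,
# consistent with the convergence timeline You describes.
```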
Epoch's analysis makes a number of assumptions, and draws in part on public comments from AI company executives. But it also makes the case that scaling reasoning models may prove to be challenging for reasons besides computing, including high overhead costs for research. "If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," writes You. "Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it's worth tracking this closely." Any indication that reasoning models may reach some sort of limit in the near future is likely to worry the AI industry, which has invested enormous resources developing these types of models. Already, studies have shown that reasoning models, which can be incredibly expensive to run, have serious flaws, like a tendency to hallucinate more than certain conventional models. This article originally appeared on TechCrunch.


WATCH: BWWB holds news conference opposing bill that would restructure board

Yahoo

30-04-2025

  • Politics
  • Yahoo

WATCH: BWWB holds news conference opposing bill that would restructure board

BIRMINGHAM, Ala. (WIAT) — The Birmingham Water Works Board is holding a news conference Wednesday morning to oppose an Alabama bill restructuring the board. Under Senate Bill 330, sponsored by Senators Dan Roberts, Jabo Waggoner, and Shay Shelnutt, the board would become a regional board with altered membership. The current board would be replaced under the proposed structure, and the number of board members would be reduced from nine to seven. In addition to eliminating two board spots, the bill would add new qualifications for board members and change who is tasked with appointing board members. The city of Birmingham called the legislation a political power grab. The bill passed the Alabama Senate and now continues to make its way through the state legislature. The city expressed several concerns with SB330. These are just a few:

  • Elected officials can appoint themselves or other politicians to the board
  • Board members' pay would double
  • The Jefferson County Mayor's Association will lose their appointment on the BWWB
  • False information is used in the bill

The Birmingham Water Works Board news conference will be livestreamed in the video player above at 9 a.m. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Alabama Senate passes bill overhauling Birmingham Water Works Board

Yahoo

25-04-2025

  • Politics
  • Yahoo

Alabama Senate passes bill overhauling Birmingham Water Works Board

Sens. Donnie Chesteen, R-Geneva (left) and Dan Roberts, R-Mountain Brook listen to testimony during budget hearings on Feb. 5, 2025 at the Alabama Statehouse in Montgomery, Alabama. (Brian Lyman/Alabama Reflector) A bill significantly restructuring the governance of large municipal water systems sailed through the Alabama Senate Thursday. SB 330, sponsored by Sen. Dan Roberts, R-Mountain Brook, is broadly worded but effectively targets the Birmingham Water Works Board (BWWB). The legislation overcame potential resistance after last-minute changes that expanded the proposed regional board. 'I think we have the makings of a great water system here with what we're doing … we're after a board whose goal is to work together, to provide true, true loyalty to the customer base, not to anyone else,' Roberts said after the bill's passage. The bill applies to municipal water works boards that serve customers across four or more counties beyond the one where the authorizing city is located. It mandates converting such entities into regional boards; establishing new rules for board member appointments, qualifications and terms; implementing stricter ethics and financial reporting requirements and outlining specific board duties. Roberts said the changes are necessary for competent management and to prevent operational failures. 'We're losing 50% of the water that we pump that's potable. That's so far outside what is normative across the country. The replacement of pipes is probably responsible for some of this, but we're spending money on so many other things than showing a fiduciary responsibility to the customer base,' Roberts said.
Changes to the BWWB have drawn strong opposition from Democrats in the Jefferson County delegation, who have filibustered similar pieces of legislation over the years over concerns that Birmingham and Jefferson County, the BWWB's largest customers, would lose power over water decisions to suburban counties. Democrats have also noted that the changes would take power from Birmingham, which is 67% Black, and shift it to majority-white suburban counties. Sen. Rodger Smitherman, D-Birmingham, who has led opposition to BWWB changes, declined to comment after the bill passed, saying that he'll 'talk about it once the governor signs it.' Roberts said the changes came 'after hard negotiations for several hours yesterday until late last night, and then again this morning,' which led to adding two additional members. Roberts said he preferred a board of five members because it would be easier 'to get them pulling in the same direction.' 'We sat down as a Jefferson County delegation and hammered it out in a back room of what it would take to get this bill to pass without creating lots of problems for the rest of our colleagues here in the Senate and the House,' Roberts said. The first amendment expanded the proposed board from five members to seven, adding one director appointed by the Governor and another by the governing body of the authorizing municipality, which would be the Birmingham City Council. Both appointees would have to live in Jefferson County. The second amendment requires the new regional board to include the authorizing municipality's name, Birmingham, in its official title. The bill specifies that certain board positions require financial, engineering, or general business backgrounds and sets initial staggered terms before transitioning to five-year terms, with a limit of two full terms. Directors will receive $2,000 per month plus expenses. Frank E. 
Adams, a spokesperson for the Birmingham Water Works Board (BWWB), said in a statement that despite amendments adding local appointees, the board strongly opposes the bill and sees it as a 'hostile takeover by outside interests.' 'BWWB's daily focus is continuing to make improvements to our customer service, infrastructure and the overall operations of the system. We have made significant improvement in those areas over the last few months and SB 330 limits that progress,' Adams said in the statement. Board leaders previously indicated that operations are improving and that monthly billing errors have been reduced to 500, down from 10,000. The bill now moves to the Alabama House of Representatives for consideration. House Speaker Nathaniel Ledbetter, R-Rainsville, said the legislation will be a priority in the last few days of the session.
