
Award-winning analytics and AI leadership: How Gas Networks Ireland is harnessing data and GenAI to unlock business insights and drive innovation
Last year, the organisation's data competency centre (DCC), led by head of data and analytics Alan Grainger, won the prestigious AI Project of the Year award at the 2024 Analytics and AI Awards. The project used GenAI to tackle a long-standing business challenge: the accurate classification of more than 30,000 commercial gas customers across Ireland.
'A business customer's classification, eg brewery, hospital, supermarket, is set when they sign up with an energy provider,' Grainger explains.
'In many cases and over time, the customer data we received had been misclassified or not classified at all, limiting the ability to extract meaningful insights. We knew there was untapped potential in our customer data, but the lack of reliable classification made it challenging to engage effectively with specific sectors – particularly those with strong potential for renewable biomethane gas adoption. That's what made the GenAI project such a game-changer.'
The challenge had persisted for many years without a cost-effective solution, he adds. 'Three years ago, we wouldn't have been having this conversation at all, but ChatGPT woke us all up. It opened our eyes to new possibilities and we decided to test the potential of GenAI and large language models [LLMs] with a real-world business problem. We worked with a trusted partner and time-boxed the project to six weeks. In reality, it took just four weeks to complete, and we now have all 30,000 business customers classified to 99+ per cent accuracy.'

The project combined structured and unstructured data with a private instance of ChatGPT-4, using a series of intelligent internet searches and prompts to build a classification profile for each business.
'For example, if the system found references to Leaving Cert exams, teacher listings and subjects on the curriculum, it would classify the business as a secondary school,' he explains.
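The flow described here, gathering public evidence about a business and asking an LLM to assign a category, can be sketched in outline. The function, prompt wording and example categories below are illustrative assumptions, not Gas Networks Ireland's actual implementation, which used a private ChatGPT-4 instance:

```python
# A minimal sketch of building a classification prompt from web-search
# snippets, in the spirit of the project described above. All names,
# categories and wording here are invented for illustration.

def build_classification_prompt(name: str, snippets: list[str]) -> str:
    """Assemble the evidence gathered for one business into an LLM prompt."""
    evidence = "\n".join(f"- {s}" for s in snippets)
    return (
        f"Business: {name}\n"
        f"Evidence from public web searches:\n{evidence}\n"
        "Classify this business into one category, e.g. brewery, "
        "hospital, supermarket, secondary school."
    )

prompt = build_classification_prompt(
    "St Example's College",  # hypothetical customer
    ["Leaving Cert exam timetable", "Teaching staff list", "Junior Cycle subjects"],
)
# The prompt would then be sent to the LLM for a single-label answer.
print(prompt)
```

In this sketch the model only ever sees evidence already gathered by the search step, which is what lets the pipeline be time-boxed and audited per business.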
'Pre-GenAI, we would have needed a large team working over months, if not years, to classify this data,' Grainger adds. 'Buying access to a third-party industry database was another option, but it would have been far more expensive and may not have given us the same level of accuracy or control.'
The project has revolutionised how Gas Networks Ireland understands and serves its customer base. 'The project proved that GenAI can solve real-world business problems, even in safety-critical environments like ours,' says Grainger. 'We didn't simply clean up legacy data; we unlocked new insights that enhance our approach to everything from customer engagement to strategic planning.'
The classified data is now fully integrated into Gas Networks Ireland's data warehouse and reporting systems. Customer-facing teams use it to target sustainability conversations, especially around the roll-out of biomethane, a key pillar in Ireland's energy transition. It also supports regulatory reporting and strengthens alignment with national climate-action targets.
'We were delighted with the success of this ambitious project and the outcomes exceeded expectations. It's been a real turning point for how we think about data, insight and innovation. We were really proud when we won the AI Project of the Year award. It was a real milestone for the DCC team and Gas Networks Ireland as a whole. It showed what's possible when we take a bold but responsible approach to new technology, and it was a clear demonstration of innovation in practice.'

Grainger established the DCC in 2022 and since then his focus has been on enabling Gas Networks Ireland to transform from a traditional engineering-led utility to a digitally empowered, data-driven organisation. Data is now seen as a strategic asset, central to how Gas Networks Ireland operates today, how it will grow tomorrow, and how it will prepare Ireland's gas network for a secure, decarbonised future.
'We've been on a data maturity journey since 2022 when we established the data competency centre,' he says. 'Before that, data tended to be viewed primarily as a byproduct of our business processes, rather than a strategic asset. Since then, we have moved the data maturity dial significantly and become a much more data-driven organisation.' Data now touches every part of the business, from operational performance, gas safety and customer insight to fraud prevention, cyber resilience and long-term network planning.
One of the most impactful applications of data has been in the area of operational safety. Through close collaboration with the asset operations team, the DCC team has delivered a range of data-driven tools to improve safety outcomes.
'We have a 60-minute SLA for responding to emergency call-outs and we achieve this 99.7 per cent of the time,' Grainger points out. 'We have been able to use visual analytics to monitor and map emergency calls over a number of years. We use this to spot anomalies and examine why incidents are recurring in certain places. We have been able to use this to rebalance resources to where they are most needed. We can also use it for predictive analysis to prevent incidents before they occur.'

Also in the area of safety, the organisation operates regular helicopter flyovers of the network to ensure it is safe and secure. 'That activity comes with a considerable carbon footprint,' he notes.
'We are now exploring the use of satellite imagery and applying data analytics to that to perform the same task.'

'Our attitude has been to build it, and they will come,' he continues. 'Now we almost can't keep up with demand for new uses for data in the organisation. For example, our asset operations team started with a single project on meter-reading analysis and has since benefited from over a dozen high-impact use cases.'

Looking ahead, he points out that Gas Networks Ireland is already preparing its data architecture to support biomethane and hydrogen integration as it continues on its decarbonisation journey.
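The call-out analysis Grainger describes, mapping emergency responses over several years and flagging places where incidents recur, can be caricatured in a few lines. The counts and the simple 'twice the average' threshold are invented for illustration; the real analysis uses years of mapped call history and visual analytics:

```python
# Toy illustration of spotting recurring-incident hotspots in emergency
# call-out data. The figures and the threshold rule are invented.
from statistics import mean

calls_per_area = {"Area A": 4, "Area B": 5, "Area C": 21, "Area D": 6}

threshold = 2 * mean(calls_per_area.values())  # flag anything over twice the average
hotspots = [area for area, n in calls_per_area.items() if n > threshold]
print(hotspots)  # Area C recurs far more often than its neighbours
```

Once a hotspot is flagged, the interesting work is the follow-up question the DCC team asks: why do incidents keep recurring there, and should resources be rebalanced towards it?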
'As more entry points from biomethane producers and others feed into the network, the complexity of calorific value calculations for different types of gas will increase. We are already building the data models and systems needed to manage this.'

Other advanced uses of AI and analytics being explored include the application of image recognition to gas meter photos to spot corrosion, safety risks or fraud, and the future modelling of integrated energy system patterns and flows in collaboration with electricity operators such as EirGrid and ESB.
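The calorific-value point can be made concrete: in simplified form, the calorific value (CV) attributed to a section of the network is a flow-weighted average of the gas entering it, so every new biomethane entry point adds a term to the calculation. The figures below are invented for illustration; actual CV determination follows regulatory procedures:

```python
# Simplified flow-weighted calorific value (CV) for a pipe section fed
# by multiple entry points. Flows and CV figures are illustrative only.
entries = [
    {"source": "transmission network", "flow_m3": 90_000, "cv_mj_m3": 39.5},
    {"source": "biomethane producer", "flow_m3": 10_000, "cv_mj_m3": 37.8},
]

total_flow = sum(e["flow_m3"] for e in entries)
blended_cv = sum(e["flow_m3"] * e["cv_mj_m3"] for e in entries) / total_flow
print(f"{blended_cv:.2f} MJ/m3")
```

With one dominant entry point the blend barely moves, but as more producers inject gas of varying quality, tracking these weights per section becomes a genuine data-modelling problem.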
For Grainger, innovation isn't about adopting technology for its own sake – it's about using data intelligently to drive better decisions and outcomes. At the heart of this evolution are people: those with the insight to interpret data and the conviction to act. In a decarbonising world, the most resilient energy systems will be led by purpose and powered by intelligence.
Related Articles

Irish Times
a day ago
Karen Hao on AI tech bosses: ‘Many choose not to have children because they don't think the world is going to be around much longer'
Scarlett Johansson never intended to take on the might of Silicon Valley. But last summer the Hollywood star discovered a ChatGPT model had been developed whose voice – husky, with a hint of vocal fry – bore an uncanny resemblance to the AI assistant voiced by Johansson in the 2013 Spike Jonze movie Her. On the day of the launch, Sam Altman, chief executive of OpenAI, maker of ChatGPT, posted on X a one-word comment: 'her'. Later Johansson released a furious statement revealing she had been asked to voice the new aide but had declined. Soon the model was scrapped. Johansson and a phalanx of lawyers had defeated the tech behemoths.

That skirmish is one among the many related in Karen Hao's new book Empire of AI: Inside the Reckless Race for Total Domination, a 482-page volume that, in telling the story of San Francisco company OpenAI and its founder, Altman, concerns itself with large and worrying truths. Could AI steal your job, destabilise your mental health and, via its energy-guzzling servers, plunge the environment into catastrophe? Yes to all of the above, and more.

As Hao puts it in the book: 'How do we govern artificial intelligence? AI is one of the most consequential technologies of this era. In a little over a decade, it has reformed the backbone of the Internet. It is now on track to rewire a great many other critical functions in society, from healthcare to education, from law to finance, from journalism to government. The future of AI – the shape this technology takes – is inextricably tied to our future.'

It's a rainy day in Dublin when I travel to Dalkey to meet Hao, a Hong Kong-dwelling, New Jersey-raised journalist who has become a thorn in Altman's side. Educated at MIT, she writes for the Atlantic and leads the Pulitzer Centre AI Spotlight series, a programme that trains journalists in covering AI matters.
Among families grabbing a bite to eat in a local hotel, the boisterous kids running around tables in the lobby and tourists checking in and out, Hao, neat and professional in a cream blazer with her hair tied back, radiates an air of calm authority.

'AI is such an urgent story,' she says. 'The pursuit of AI becomes dangerous as an idea because it's eroding people's data privacy. It's eroding people's fundamental rights. It's exploiting labour, but it's humans that are doing that, in the name of AI.'

Whether you're in Dublin or San Diego, AI is hurtling into our lives. ChatGPT has 400 million weekly users. You can't go on to WhatsApp, Google or Meta without encountering an AI bot. It was revealed in a recent UK Internet Matters survey that 12 per cent of kids and teens use chatbots to offset feelings of loneliness. Secondary school students are changing their CAO forms to give themselves the best chance of thwarting the broken career ladder that AI has created.

The impact of AI on the environment is extraordinary. Just one ChatGPT search about something as simple as the weather consumes vast energy, 10 times more than a Google search. Or, as Des Traynor of Intercom put it at Dalkey Book Festival recently, it's like using a 'massive diesel generator to power a calculator'.

It's far from the utopian ideal of a medical solutions-focused, climate-improving enterprise that was first trumpeted to Hao when she began investigating OpenAI and Altman in 2019. As a 20-something reporter at MIT Technology Review covering artificial intelligence, Hao became intrigued by the company. Founded as a non-profit, OpenAI claimed not to chase commercialisation. Even its revamp into a partially for-profit model didn't alter its mission statement: to safely build artificial intelligence for the benefit of humanity. And to be open and transparent while doing it.
But when Hao arrived at the plush headquarters on San Francisco's 18th and Folsom Streets, all exposed wood beam ceilings and comfy couches, she noticed that nobody seemed to be allowed to talk to her casually. Her photograph had been sent to security. She couldn't even eat lunch in the canteen with the employees. 'They were really secretive, even though they kept saying they were transparent,' Hao says. 'Later on, I started sourcing my own interviews. People started telling me: this is the most secretive organisation I've ever worked for.'

Karen Hao in Dublin during the Dalkey Book Festival. Photograph: Nick Bradshaw

The meetings Hao had with OpenAI executives did not impress her. 'In the first meeting, they could not articulate what the mission was. I was like, well, this organisation has consistently been positioning itself as anti-Silicon Valley. But this feels exactly like Silicon Valley, where men are thrown boatloads of money when they don't yet have a clear idea of what they're even doing.'

Simple questions appeared to wrong-foot the executives. They spoke about AGI (artificial general intelligence), the theoretical notion that silicon chips could one day give rise to a human-like consciousness. AGI would help solve complex problems in medicine and climate change, they enthused. But how would they achieve this and how would AGI technology be successfully distributed? They hedged. 'Fire is another example,' Hao was told. 'It's also got some real drawbacks to it.'

Since that time, AGI has not been developed, but billions have been pumped into large language models such as ChatGPT, which can perform tasks such as question answering and translation. Built by consuming vast amounts of often garbage data from the bottom drawer of the Internet, AI chatbots are frequently unreliable. An AI assistant might give you the right answer. Or it might, as Elon Musk's AI bot Grok did recently, praise Adolf Hitler and cast doubt on people with Jewish surnames.
'Quality information and misinformation are being mixed together constantly,' Hao says, 'and no one can tell any more what are the sources of truth.'

It didn't have to be this way. 'Before ChatGPT and before OpenAI took the scaling approach, the original trend in AI research was towards tiny AI models and small data sets,' Hao says. 'The idea was that you could have really powerful AI systems with highly curated data sets that were only a couple of hundred images or data points. But the key was you needed to do the curation on the way in. When it's the other way around, you're culling the gunk and toxicity and that becomes content moderation.'

One particularly moving section of Hao's book is when she journeys to poorer countries to look at how people who work on the content moderation side of OpenAI cope day-to-day. Meagre incomes, job instability and exposure to hate speech, child sex abuse and rape fantasies online are just some of the realities contractors face. In Kenya, one worker's sanity became so frayed his wife and daughter left him. When he told Hao his story, the author says she felt like she'd been punched in the gut. 'I went back to my hotel, and I cried because I was like, this is tearing people's families apart.'

Hao nearly didn't get her book out. She had thought she would have some collaboration with Altman and OpenAI, but the participation didn't happen. 'I was devastated,' she admits. 'Fortunately I had a lot of amazing people in my life who were like, "Are you going to let them win or are you going to continue being the excellent journalist you know you can be, and report it without them?"' Understanding companies such as OpenAI is becoming more important for everyone.
In recent weeks, Meta, Microsoft, Amazon and Alphabet, Google's parent company, delivered their quarterly public financial reports, disclosing that their year-to-date capital expenditure ran into tens of billions, much of it required for the creation and maintenance of data centres to power AI's services. In Ireland, there are more than 80 data centres, gobbling up 50 per cent of the electricity in the Dublin region, and hoovering up more than 20 per cent nationally, as they work to process and distribute huge quantities of digital information.

Hao believes governments must force tech companies to have more transparency in relation to the energy their data centres consume. 'If you're going to build data centres, you have to report to the public what the actual energy consumed is, how much water is actually used. That enables the public and the government to decide if this is a trade-off worth continuing. And they need to invest more in independent institutions for cultivating AI expertise.'

While governments have to play their part, it's difficult reading the book not to find yourself asking the simple question: why aren't tech bosses themselves concerned about what they're doing? Tech behemoths may be making billions – AI researchers are negotiating pay packages of $250 million from companies such as Meta – but surely they've given a thought to their children's future? And their children's children? Wouldn't they prefer them to live in a world that still has flowers and polar bears and untainted water?

'What's interesting is many of them choose not to have children because they don't think the world is going to be around much longer,' Hao says.
'With some people in more extreme parts of the community, their idea of Utopia is all humans eventually going away and being superseded by this superior intelligence. They see this as a natural force of evolution.'

'It's like a very intense version of utilitarianism,' she adds. 'You'd maximise morality in the world if you created superior intelligences that are more moral than us, and then they inherited our Earth.'

Offering a more positive outlook, there are many in the AI community who would say that the work they are doing will result in delivering solutions that benefit the planet. AI has the potential to accelerate scientific discoveries: its possibilities are exciting because they are potentially paradigm-shifting. Is that enough to justify the actions being taken? Not according to Hao.

'The problem is: we don't have time to continue destroying our planet with the hope that one day maybe all of it will be solved by this thing that we're creating,' she says. 'They're taking real-world harm today and offsetting it with a possible future tomorrow. That possible future could go in the opposite direction.'

'They can make these trade-offs because they're the ones that are going to be fine. They're the ones with the wealth to build the bunkers. If climate change comes, they have everything ready.'

Empire of AI: Inside the Reckless Race for Total Domination by Karen Hao is published by Allen Lane

The Journal
2 days ago
OpenAI releases new ChatGPT version with 'PhD level' expertise, but it can't spell 'blueberry'
OPENAI RELEASED A keenly awaited new generation of its hallmark ChatGPT yesterday, touting 'significant' advancements in artificial intelligence capabilities as a global race over the technology accelerates.

ChatGPT-5 is rolling out free to all users of the AI tool, which is used by nearly 700 million people weekly, OpenAI said in a briefing with journalists. Co-founder and chief executive Sam Altman touted this latest iteration as 'clearly a model that is generally intelligent'. Altman cautioned that there is still work to be done to achieve the kind of artificial general intelligence (AGI) that thinks the way people do. 'This is not a model that continuously learns as it is deployed from new things it finds, which is something that, to me, feels like it should be part of an AGI,' Altman said. 'But the level of capability here is a huge improvement.'

The rollout has not been without issues, though. ChatGPT-5 has struggled to give correct answers to simple prompts since it was released. It even denies its own existence when asked about it. In response to the question, 'Is this ChatGPT-5?', the large language model replied: 'You're currently chatting with ChatGPT-4o architecture, which is part of the GPT-4 family – not ChatGPT-5.'

A screenshot of ChatGPT-5 telling the user it is not ChatGPT-5. Screenshot taken by The Journal

Another example of the new version of the chatbot making a basic error has prompted derision from some social media users. ChatGPT-5 cannot spell the word 'blueberry'. When users asked how many times the letter 'b' appears in the word, it replied that there are three. 'I had to try the "blueberry" thing myself with GPT5. I merely report the results,' Kieran Healy posted on August 8, 2025.

Industry analysts have heralded the arrival of an AI era in which genius computers transform how humans work and play.
'As the pace of AI progress accelerates, developing superintelligence is coming into sight,' Meta chief executive Mark Zuckerberg wrote in a recent memo. 'I believe this will be the beginning of a new era for humanity.'

Altman said there were 'orders of magnitude more gains' to come on the path toward AGI. 'Obviously… you have to invest in compute (power) at an eye-watering rate to get that, but we intend to keep doing it.'

Tech industry rivals Amazon, Google, Meta, Microsoft and Elon Musk's xAI have been pouring billions of dollars into artificial intelligence since the blockbuster launch of the first version of ChatGPT in late 2022. Chinese startup DeepSeek shook up the AI sector early this year with a model that delivers high performance using less costly chips.

'PhD-level expert'

With fierce competition around the world over the technology, Altman said ChatGPT-5 led the pack in coding, writing, health care and much more. 'GPT-3 felt to me like talking to a high school student – ask a question, maybe you get a right answer, maybe you'll get something crazy,' Altman said. 'GPT-4 felt like you're talking to a college student; GPT-5 is the first time that it really feels like talking to a PhD-level expert in any topic.'

Altman expects the ability to create software programs on demand – so-called 'vibe-coding' – to be a 'defining part of the new ChatGPT-5 era'.

In a blog post, British AI expert Simon Willison wrote about getting early access to ChatGPT-5. 'My verdict: it's just good at stuff,' Willison wrote. 'It doesn't feel like a dramatic leap ahead from other (large language models) but it exudes competence – it rarely messes up, and frequently impresses me.' However, Musk wrote on X, formerly Twitter, that his Grok 4 Heavy AI model 'was smarter' than ChatGPT-5.

Honest AI?
ChatGPT-5 was trained to be trustworthy and to stick to providing answers as helpful as possible without aiding seemingly harmful missions, according to OpenAI safety research lead Alex Beutel. 'We built evaluations to measure the prevalence of deception and trained the model to be honest,' Beutel said. ChatGPT-5 is trained to generate 'safe completions', sticking to high-level information that can't be used to cause harm, according to Beutel.

The company this week also released two new AI models that can be downloaded for free and altered by users, to challenge similar offerings by rivals. The release of 'open-weight language models' comes as OpenAI is under pressure to share the inner workings of its software in the spirit of its origin as a nonprofit.

With reporting from David Mac Redmond


RTÉ News
2 days ago
OpenAI launches GPT-5 as AI race accelerates
OpenAI has launched its GPT-5 artificial intelligence model, the highly anticipated latest instalment of a technology that has helped transform global business and culture. OpenAI's GPT models are the AI technology that powers the popular ChatGPT chatbot, and GPT-5 will be available to all 700 million ChatGPT users, OpenAI said.

The big question is whether the company that kicked off the generative AI frenzy will be capable of continuing to drive significant technological advancements that attract enterprise-level users to justify the enormous sums of money it is investing to fuel these developments.

The release comes at a critical time for the AI industry. The world's biggest AI developers - Alphabet, Meta, Amazon and Microsoft, which backs OpenAI - have dramatically increased capital expenditure to pay for AI data centres, nourishing investor hopes for great returns. These four companies expect to spend nearly $400bn (€342bn) this fiscal year in total. OpenAI is now in early discussions to allow employees to cash out at a $500bn (€428bn) valuation, a huge step up from its current $300bn (€257bn) valuation. Top AI researchers now command $100m (€85m) signing bonuses.

"So far, business spending on AI has been pretty weak, while consumer spending on AI has been fairly robust because people love to chat with ChatGPT," said economics writer Noah Smith. "But the consumer spending on AI just isn't going to be nearly enough to justify all the money that is being spent on AI data centres," he added.

OpenAI is emphasising GPT-5's enterprise prowess. In addition to software development, the company said GPT-5 excels in writing, health-related queries, and finance. "GPT-5 is really the first time that I think one of our mainline models has felt like you can ask a legitimate expert, a PhD-level expert, anything," OpenAI CEO Sam Altman said at a press briefing. "One of the coolest things it can do is write you good instantaneous software.
This idea of software on demand is going to be one of the defining features of the GPT-5 era," he added.

In demos yesterday, OpenAI showed how GPT-5 could be used to create entire working pieces of software based on written text prompts, commonly known as "vibe coding".

One key measure of success is whether the step up from GPT-4 to GPT-5 is on par with the research lab's previous improvements. Two early reviewers said that while the new model impressed them with its ability to code and solve science and maths problems, they believe the leap from GPT-4 to GPT-5 was not as large as OpenAI's prior improvements. Even if the improvements are large, GPT-5 is not advanced enough to wholesale replace humans. Mr Altman said that GPT-5 still lacks the ability to learn on its own, a key component to enabling AI to match human abilities.

On his popular AI podcast, Dwarkesh Patel compared current AI to teaching a child to play a saxophone by reading notes from the last student. "A student takes one attempt," he said. "The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student. This just wouldn't work," he said.

More thinking

Nearly three years ago, ChatGPT introduced the world to generative AI, dazzling users with its ability to write humanlike prose and poetry, quickly becoming one of the fastest-growing apps ever. In March 2023, OpenAI followed up ChatGPT with the release of GPT-4, a large language model that made huge leaps forward in intelligence. While GPT-3.5, an earlier version, received a bar exam score in the bottom 10%, GPT-4 passed the simulated bar exam in the top 10%. GPT-4's leap was based on more compute power and data, and the company was hoping that "scaling up" in a similar way would consistently lead to improved AI models. But OpenAI ran into issues scaling up.
One problem was the data wall the company ran into: OpenAI's former chief scientist Ilya Sutskever said last year that while processing power was growing, the amount of data was not. He was referring to the fact that large language models are trained on massive datasets that scrape the entire internet, and AI labs have no other options for large troves of human-generated textual data.

Apart from the lack of data, another problem was that "training runs" for large models are more likely to have hardware-induced failures given how complicated the system is, and researchers may not know the eventual performance of the models until the end of the run, which can take months.

At the same time, OpenAI discovered another route to smarter AI, called "test-time compute", a way to have the AI model spend more time and compute power "thinking" about each question, allowing it to solve challenging tasks such as maths or complex operations that demand advanced reasoning and decision-making. GPT-5 acts as a router, meaning that if a user asks GPT-5 a particularly hard problem, it will use test-time compute to answer the question. This is the first time the general public will have access to OpenAI's test-time compute technology, something Altman said is important to the company's mission to build AI that benefits all of humanity.

Mr Altman believes the current investment in AI is still inadequate.
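The router behaviour described here can be caricatured in a few lines: an inexpensive check decides whether a prompt gets a fast direct answer or extra "thinking" time. The keyword heuristic and mode names below are invented for illustration; OpenAI has not published its actual routing logic:

```python
# Toy sketch of the routing idea: easy prompts get a fast answer,
# hard ones are sent to a slower reasoning mode. The keyword heuristic
# and mode names are invented, not OpenAI's implementation.
def route(prompt: str) -> str:
    hard_markers = ("prove", "derive", "step by step", "optimise")
    if any(marker in prompt.lower() for marker in hard_markers):
        return "reasoning-mode"  # spend extra test-time compute
    return "fast-mode"           # answer directly

print(route("What's the weather like today?"))
print(route("Prove that the square root of 2 is irrational"))
```

The appeal of the design is economic as much as technical: most queries are cheap to serve, and the expensive "thinking" budget is reserved for the minority of prompts that need it.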