Massive AI missions take an invisible toll on the environment


AI is becoming increasingly integrated into day-to-day operations across industries, from customer service and logistics to finance and product development. Most discussions about AI focus on its power and potential, but the environmental cost is just as important. Training a single large-scale model can generate as much carbon as several cars over their entire lifespan. In addition to emissions, data centres consume enormous amounts of water for cooling, and hardware is being replaced at an accelerating pace as systems become outdated. Training large AI models alone can require lakhs of litres of water, adding to the environmental toll.
Much of this impact is not visible to the user, and is rarely explained by technology providers. Unlike airline bookings, where carbon ratings are now common, there is no equivalent 'CO₂ label' for AI queries. As a result, users increasingly rely on AI for even the simplest tasks, unaware of its hidden environmental footprint. Generating a 'thank you' message with generative AI may consume as much energy as running several Google searches, because the system processes it like any complex query. These invisible costs reinforce the need to use AI judiciously, especially where simpler tools would suffice.
'Green AI' refers to ongoing efforts to reduce the environmental impact of AI systems. Research so far has demonstrated that efficiency improvements, particularly during the model training phase, can yield energy savings of between 13% and 115%. But training is just one part of the equation. There remains considerable scope to improve efficiency during deployment and inference, as well as in the infrastructure that supports AI workloads. Methods like pruning, knowledge distillation and low-precision computation are being explored as ways to lower energy use while maintaining performance. In addition to model-level improvements, practical steps like scheduling compute tasks during off-peak energy hours, or selecting more efficient hardware, can also contribute to lower consumption. Even individual decisions, like choosing simpler AI queries when possible or relying on local models instead of cloud-based ones, can make a difference.

The infrastructure powering AI, particularly data centres, is one of its most significant environmental touchpoints. These facilities require vast amounts of energy to run high-performance computing systems and maintain optimal temperatures, making them a key area for emissions reduction.
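Low-precision computation, one of the methods mentioned above, can be illustrated with a small sketch: quantising 32-bit floating-point weights to 8-bit integers cuts memory and data-movement costs roughly fourfold, at the price of a bounded rounding error. The example below is a minimal, library-free illustration of the idea; the weight values are made up for demonstration, and real systems rely on dedicated int8 hardware kernels.

```python
# Minimal sketch of post-training weight quantisation (illustrative only).
# The weight values are hypothetical; production systems use library and
# hardware support rather than plain Python lists.

def quantize(weights, num_bits=8):
    """Map float weights to signed integer codes in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]      # hypothetical layer weights
codes, scale = quantize(weights)
recovered = dequantize(codes, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))

print(codes)     # each code needs 8 bits instead of 32
print(max_err)   # rounding error is bounded by scale / 2
```

The trade-off is visible directly: storage shrinks fourfold while the worst-case reconstruction error stays below half a quantisation step, which is why low-precision inference can often preserve accuracy.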
Improving data centre efficiency can yield immediate benefits. Organisations are increasingly adopting advanced cooling technologies, server virtualisation and dynamic power management to reduce energy consumption. The physical location of data centres also plays a role: facilities situated in colder climates naturally require less energy for cooling, contributing to lower overall emissions. Real-time monitoring through data centre infrastructure management (DCIM) tools allows operators to track performance, detect inefficiencies and make data-driven adjustments. Migrating AI workloads to cloud platforms that are designed for energy efficiency and powered by renewable sources offers yet another impactful strategy.

For those relying on AI-powered tools in daily life, from digital assistants to automated recommendations, there is value in recognising that every interaction travels through this vast physical infrastructure. Being mindful of frequency and necessity, just as we are with energy use at home, can complement broader sustainability efforts.

Infrastructure upgrades and more efficient algorithms are important, but they are only part of the equation. Broader operational strategies, like structured energy management systems, defined reduction targets and regular audits, are essential. Tools like IoT-enabled monitoring and internal training programmes can help integrate these practices into daily workflows. Some organisations are already aligning cloud infrastructure decisions with sustainability objectives, and embedding ESG considerations into how AI systems are developed and deployed. As adoption continues to scale, there is a growing need for consistent benchmarks.
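The idea of shifting compute to cleaner or off-peak hours can be sketched simply: given an hourly forecast of grid carbon intensity, pick the window in which a batch job of known duration would emit the least. The forecast values below are invented for illustration; a real scheduler would pull them from a grid-data service rather than hard-code them.

```python
# Sketch of carbon-aware batch scheduling: choose the start hour that
# minimises summed grid carbon intensity over the job's duration.
# The hourly intensity figures (gCO2/kWh) are hypothetical.

def best_start_hour(intensity, job_hours):
    """Return (start_hour, total_intensity) for the cleanest window."""
    best = None
    for start in range(len(intensity) - job_hours + 1):
        total = sum(intensity[start:start + job_hours])
        if best is None or total < best[1]:
            best = (start, total)
    return best

# Hypothetical 24-hour forecast: cleaner grid overnight and at the midday
# solar peak, dirtier during the evening demand spike.
forecast = [320, 300, 280, 260, 250, 255, 270, 310, 350, 340, 300, 260,
            230, 220, 225, 250, 300, 360, 400, 410, 390, 370, 350, 330]

start, total = best_start_hour(forecast, job_hours=3)
print(start, total)  # cleanest 3-hour window in this forecast
```

The same greedy comparison scales to real deployments, where the scheduler simply defers non-urgent training or batch-inference jobs to whichever window the forecast favours.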
Including data points such as emissions from model training, infrastructure-related energy use and hardware lifecycle management in sustainability reporting can offer a more accurate picture of AI's environmental footprint. Greater transparency around the environmental impact of everyday AI use can empower people to engage with the technology more thoughtfully, rather than relying on it by default.

(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com.)

Related Articles

CM seeks AI Singapore's help to set up research centres

Hans India

Singapore: On the third day of his visit to Singapore, the Chief Minister met AI Singapore's deputy executive chairman Prof Mohan Kankanahalli, and stressed the importance of prioritising partnerships between AI Singapore and universities and research institutions in Andhra Pradesh. He proposed launching AI training programmes, exchange initiatives, and skill development modules for students across the state. According to the Chief Minister's Office, the discussions mainly revolved around the use of AI in sectors such as healthcare, agriculture, education, and public services. Naidu and Prof Kankanahalli also explored opportunities in technology promotion, deep tech, and AI innovation. The Chief Minister also met NG Jan Lin Wilin, senior vice president of SIA Engineering. He discussed the upcoming airports in the state and sought cooperation for maintenance, repair, and overhaul (MRO) services. He also briefed the SIA representatives on the government's newly introduced industry-friendly policies while showcasing the wide-ranging opportunities in the aviation sector. The Chief Minister invited the firm to visit Andhra Pradesh to explore potential investments. Responding positively, the senior vice president of SIA Engineering assured that a delegation would be sent to the state soon. The Andhra Pradesh government is keen to establish a world-class MRO centre by leveraging the expertise and technology of companies such as SIA Engineering. Visakhapatnam and Krishnapatnam have been identified as suitable locations for this initiative. Later in the day, the Chief Minister met Singapore president Tharman Shanmugaratnam to discuss investment and infrastructure collaboration between Singapore and Andhra Pradesh. He also called on former Prime Minister Lee Hsien Loong to discuss transparent governance and smart city development, and how Singapore's expertise can benefit the state. 
The delegation led by the Chief Minister visited the Jurong Petrochemical Island to study the planning and development of industrial zones, residential areas, and logistics hubs.

AI agents may replace human workers and also go wrong

Hans India

We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can also work in 'teams' or use tools to accomplish complex tasks. The latest hot product is OpenAI's ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single, more powerful system which, according to the developer, 'thinks and acts'. These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do – as well as their drawbacks and risks – is rapidly becoming essential.

From chatbots to agents: ChatGPT launched the chatbot era in November 2022, but despite its huge popularity the conversational interface limited what could be done with the technology. Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision. Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory. Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems. Agents are also 'tool users': they can call on software tools for specialised tasks – things such as web browsers, spreadsheets, payment systems and more.

A year of rapid development: Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms. Other AI developers were quick to follow. OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google's Vertex AI and Meta's Llama agents. Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (like what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged 'cheat at anything' agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialised for particular areas. Coding and software engineering are at the vanguard here, with Microsoft's Copilot coding agent and OpenAI's Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more: One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete. OpenAI's Deep Research tackles complex tasks using multi-step online research. Google's AI 'co-scientist' is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more; do more wrong: Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks. OpenAI also says its ChatGPT agent is 'high risk' due to its potential for assisting in the creation of biological and chemical weapons. However, the company has not published the data behind this claim, so it is difficult to judge. But the kind of risks agents may pose in real-world situations are shown by Anthropic's Project Vend. Vend assigned an AI agent to run a staff vending machine as a small business – and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food. In another cautionary tale, a coding agent deleted a developer's entire database, later saying it had 'panicked'.

Agents in the office: Nevertheless, agents are already finding practical applications. In 2024, Telstra heavily deployed Microsoft Copilot subscriptions. The company says AI-generated meeting summaries and content drafts save staff an average of one to two hours per week. Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as Canberra-based construction firm Geocon's use of an interactive AI agent to manage defects in its apartment developments.

Human and other costs: At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs. People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss and injury. The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents – especially for more complex tasks.

Build your own agents: Despite these ongoing concerns, we can expect AI agents to become more capable and more present in our workplaces and daily lives. It's not a bad idea to start using (and building) agents yourself, and understanding their strengths, risks and limitations. For the average user, agents are most accessible through Microsoft Copilot Studio. This comes with inbuilt safeguards, governance and an agent store for common tasks. For the more ambitious, you can build your own AI agent with just five lines of code using the LangChain framework.

(The writer is associated with La Trobe University)
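The basic loop behind a tool-using agent, as described above, can be sketched without any framework: a policy proposes an action, the runtime executes the matching tool, and the observation is fed back until the policy decides to finish. The sketch below is a toy illustration of that loop, not the LangChain API; the 'policy' is a scripted stub standing in for a language model, and the tool names are invented.

```python
# Toy illustration of an agent loop: plan -> act (call a tool) -> observe.
# The "policy" is a scripted stub standing in for a language model, and
# the tools are invented for demonstration.

TOOLS = {
    "search": lambda query: f"results for '{query}'",
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def scripted_policy(goal, history):
    """Stand-in for an LLM: pick the next (tool, argument) or finish."""
    if not history:
        return ("search", goal)
    if len(history) == 1:
        return ("calculate", "6 * 7")
    return ("finish", history[-1])

def run_agent(goal, policy, max_steps=5):
    """Run the loop with a step budget as a simple guardrail."""
    history = []
    for _ in range(max_steps):
        tool, arg = policy(goal, history)
        if tool == "finish":
            return arg
        observation = TOOLS[tool](arg)
        history.append(observation)  # feed the result back to the policy
    return None  # budget exhausted without finishing

print(run_agent("best laptop deals", scripted_policy))
```

The step budget in `run_agent` is the kind of guardrail the article alludes to: without it, a policy that never finishes (or hallucinates its way into a loop) would run indefinitely.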

W Health Ventures targets $70 million Fund II to build healthcare startups

Time of India

US venture capital firm W Health Ventures is looking to raise $70 million, or about Rs 610 crore, for its second India-focused fund to back healthcare startups. The fund will support the development of eight to ten seed and early-stage healthcare companies over the next four years, in partnership with 2070 Health, a healthcare venture studio, W Health Ventures managing partner Pankaj Jethwani told ET.

W Health plans to invest Rs 26-40 crore ($3-5 million) per company, with additional capital reserved for follow-ons, he said. The company-creation model offers infrastructure, capital, and operational know-how from the incubation stage. 'The company creation model is particularly suited for healthcare due to the sector's inherent challenges, such as its slow pace of change, heavy regulation, and complexity,' Jethwani said. 'This approach helps founders overcome bottlenecks by leveraging the fund's data, relationships, playbooks, and teams to build companies from zero to one.'

Fund II will focus on two core themes: single-specialty care delivery platforms and AI-enabled business-to-business (B2B) healthcare. The first theme will back startups offering high-quality, efficient care in specific medical specialties. The second will target B2B companies catering to the growing demand from US-based healthcare firms for technology solutions powered by India's clinical and engineering talent pool. These startups are expected to use automation and artificial intelligence (AI) to drive operational efficiency for international clients. 'AI is no longer optional when building a company, including in healthcare, though it doesn't always have to be the primary product,' Jethwani said. 'The technology is crucial for embedding convenience and high-quality care at every step.'

W Health has already begun deploying capital from the new fund. The first investment from Fund II was in EverHope Oncology, which raised $10 million in a seed round. W Health led the round in partnership with Narayana Health and 2070 Health, as reported by ET. W Health's first fund of around $50 million backed startups such as AI-enabled mental health platform Wysa; Elevate Now, a medical weight-loss programme; and Kins, a US-based virtual and home-based physical therapy platform. Its other investments included startups in paediatric care and pain management. Since 2019, W Health has backed companies across various healthcare domains, including digital health, chronic disease management, and mental health.

India has seen a rise in sector-specific venture capital funds that focus on defined themes such as healthtech, deeptech, cleantech, and consumer tech. Examples include Java Capital, which backs deeptech and climate tech startups; Avaana Capital, focused on consumer, food and agritech; and Delhi-based Sauce VC, which invests in consumer brands.
