
State police to host national-level AI Hackathon from June 27
Vijayawada: The Andhra Pradesh Police department has announced a national-level AI Hackathon, a landmark initiative aimed at fostering innovation and identifying AI-driven solutions that can significantly enhance policing and public service delivery.
Director General of Police Harish Kumar Gupta and other officials unveiled the official posters at the State police office in Mangalagiri on Wednesday.
The hackathon will serve as a strategic platform for demonstrating how AI can address real-world challenges in law enforcement and public administration. It will be held from June 27 to 29 at RVR&JC College of Engineering, Guntur, which will serve as the official Hosting Partner.
The AP Police department has partnered with 4SightAI as its knowledge partner, responsible for the overall conceptualisation, planning, and execution of the hackathon. All problem statements in the hackathon will be based on real policing use cases, ensuring that the solutions developed are practical and impactful.
The evaluation panel will include senior IPS officers and industry experts, ensuring a robust review process that recognises both technological merit and real-world applicability. Participants will also have a unique opportunity to interact with government policymakers, industry leaders, and investors during an exclusive AI Symposium chaired by CM N Chandrababu Naidu.
The state police department invites IT companies, startups, and academic institutions both from India and overseas to participate and showcase their AI capabilities.
There is no entry fee for the hackathon. In addition, all boarding and lodging will be provided free of cost to selected participants. Participation certificates will be issued by the AP Police department to all qualified teams, and cash prizes will be awarded to the top-performing teams. The State police department encourages all visionary AI teams to be part of this prestigious event that promises to push the boundaries of AI in public service and governance.
Related Articles

Business Standard
15 minutes ago
Reddit sues Anthropic for using user content to train AI without consent
Social media platform Reddit has filed a lawsuit against artificial intelligence (AI) company Anthropic, alleging that it illegally scraped user-generated content to train its chatbot, Claude. The suit was filed on Wednesday in the California Superior Court in San Francisco. Reddit claims Anthropic used automated tools to extract posts and comments from its platform despite explicit instructions not to do so. It says the content was then used to train Claude without proper user consent or licences.

Ben Lee, Reddit's chief legal officer, criticised the alleged data practices, stating, 'AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data.' He said Reddit is committed to protecting its user community, which generates one of the internet's largest bodies of discussion content.

Legal partnerships cited as contrast

Reddit, which went public last year, pointed to its existing licensing agreements with companies like OpenAI and Google as examples of lawful collaboration. These partnerships, the company said, include mechanisms to remove content, filter spam, and protect users. 'These partnerships allow us to enforce meaningful safeguards for our users,' said Lee, underscoring the contrast with what Reddit describes as Anthropic's unlicensed use of its data.

Anthropic rejects charges, prepares defence

Anthropic, founded in 2021 by former OpenAI employees and backed by Amazon, denied the allegations. 'We disagree with Reddit's claims and will defend ourselves vigorously,' the company said in a brief statement.

Focus on breach of contract, not copyright

While many AI-related lawsuits centre on copyright violations, Reddit's case focuses on breach of contract and unfair business practices. It argues that Anthropic violated Reddit's terms of service by accessing data without authorisation.
The filing cites a 2021 research paper co-authored by Anthropic CEO Dario Amodei, which named Reddit as a valuable training resource. Subreddits on gardening, history, and personal advice were specifically mentioned for teaching AI how humans communicate. Anthropic has previously maintained its use of public data is legal. In a 2023 letter to the US Copyright Office, the company stated that its training involves statistical analysis rather than content replication.


India Today
37 minutes ago
AI's dirty secret: How your chatbot is draining Earth's water and energy
Every time you ask a chatbot for help or generate an AI image, somewhere a vast facility hums to life. It's easy to forget that each 'effortless' AI interaction leans on gigantic data centres -- buildings full of servers running nonstop, devouring electricity and gulping water.

Today is World Environment Day, so it feels fitting to pull back the curtain on AI's unseen carbon footprint, even as we keep creating content with the help of AI, like the images used in this very article.

THE POWER BEHIND THE MAGIC: WHY AI NEEDS HUGE DATA CENTRES

AI models like ChatGPT and sophisticated image generators don't float on dreams. They rely on colossal data centres packed with racks of servers that rarely sleep.

[Image: IAD71 Amazon Web Services data centre, Ashburn, Virginia, as on July 17, 2024]

In 2024, global data centres -- including those powering AI -- used around 460 terawatt-hours (TWh) of electricity. That's roughly equal to Sweden's entire annual power consumption (International Energy Agency, 2024). Forecasts suggest this figure could double to 1,000 TWh by 2026, which would almost match Germany's yearly electricity consumption.

As per the US Department of Energy, a single data centre can draw over 100 megawatts (MW) of power -- enough to run 80,000 average American homes at once. And now you can begin to see why AI has become a 'sizable' contributor to global emissions. AI's rapid expansion is a big driver behind this surge.

CARBON COST OF CHATBOTS: ENERGY USED PER PROMPT

Training a large language model isn't a weekend project -- it's an electricity marathon. For instance, powering up OpenAI's GPT-3 used about 1.3 gigawatt-hours (GWh) of electricity, enough to supply roughly 120 average American homes for a year (ScienceDirect, 2023).

Once the model is live, every single prompt you send to GPT-4 still sips power -- about 0.0003 kilowatt-hours (kWh). That's like leaving a 60-watt light bulb on for 20 seconds (Epoch AI, 2025). Now imagine this at scale: ChatGPT answers over 1 billion queries a day.

If every Google search worldwide -- around 9 billion a day -- were replaced with a ChatGPT query, global energy use would spike by an extra 10 TWh per year. That's enough to power all the homes in a city the size of San Francisco for a year (Hugging Face and Carnegie Mellon University, 2024).

Generating a single AI-created image isn't free either. It can use as much energy as fully charging your smartphone once.

THE WATER-COOLING FACTOR: HOW AI IS DRAINING RESOURCES

Electricity tells only half the story. Those thousands of servers run hot, and cooling them eats up alarming amounts of water.

In 2024, Google's data centres consumed nearly 6 billion gallons (22.7 billion litres) of water -- about one-third of Turkey's drinking water for an entire year, as per its 2024 Environment Report. During model training, every 10-20 prompts to Google's Bard chatbot reportedly required about 500 ml of water -- mostly for cooling.

On average, data centres use about 1.8 litres of water per kWh of electricity consumed (Meta Platforms, 2023). Put another way, a single large data centre can guzzle as much water in 24 hours as a small town. And that thirst is rising -- industry water consumption grows by 8-20% each year, thanks to AI's surging role and stiff competition (University of California, Riverside, and University of Texas at Arlington, 2025).

WHO'S DOING IT BETTER: GREEN AI INITIATIVES AND THEIR LIMITS

Some big names in tech are trying to curb AI's environmental impact. Microsoft, for instance, has cut its water use per computing unit by 39% since 2021, saving about 125 million litres per data centre annually by using reclaimed water and smarter cooling methods (Meta Platforms, 2023).

Alternative cooling strategies -- like air cooling or immersing servers in liquid -- are also under the microscope. But these options come with their own trade-offs. Immersion cooling cuts water use but can drive up energy needs for pumps or special fluids. Air cooling can be less water-dependent, yet efficiency dips when temperatures climb (Meta Platforms, 2023).

Switching to renewables helps, but even 'green' electricity can't keep pace with AI's relentless demand. A recent MIT Lincoln Laboratory report warned that global demand for clean power is lagging far behind AI's growth. And many companies still don't fully share their energy or water metrics, making it near impossible to verify progress.

THE BOTTOM LINE

AI's magic comes at a genuine cost. Data centres powering our digital conversations and image creations now rival entire nations in electricity use. They swig water comparable to small towns. And as AI continues its meteoric rise, this environmental toll only grows.

This World Environment Day, it's worth asking: how can we keep enjoying the convenience of AI without letting it swallow our planet? 'Green AI' efforts -- more efficient hardware, smarter cooling, renewable electricity -- are steps in the right direction. But the gap between AI's hunger and our ability to supply green power remains wide. If we don't face this challenge head-on, our planet will bear the brunt. Every prompt you send, every image you generate, leaves an unseen footprint that's getting harder to ignore.
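The per-prompt figures above can be sanity-checked with simple unit conversions. A minimal sketch, assuming only the numbers cited in the article (0.0003 kWh per prompt, a 60 W bulb, 1 billion queries a day); the variable names are illustrative:

```python
# Back-of-envelope check of the energy figures cited above.
# Inputs come from the article; everything else is plain unit conversion.

PROMPT_KWH = 0.0003               # energy per GPT-4 prompt (Epoch AI, 2025)
BULB_WATTS = 60                   # a 60 W light bulb
QUERIES_PER_DAY = 1_000_000_000   # ChatGPT queries per day, per the article

# Seconds a 60 W bulb runs on one prompt's worth of energy:
# 0.0003 kWh = 0.0003 * 3.6e6 J = 1080 J, and 1080 J / 60 W = 18 s
bulb_seconds = PROMPT_KWH * 3_600_000 / BULB_WATTS
print(f"One prompt ~ a 60 W bulb for {bulb_seconds:.0f} s")  # ~18 s

# Daily energy for a billion prompts, in megawatt-hours:
daily_mwh = PROMPT_KWH * QUERIES_PER_DAY / 1_000
print(f"A billion prompts ~ {daily_mwh:.0f} MWh per day")  # 300 MWh
```

The 18-second result is consistent with the article's "20 seconds" comparison, and 300 MWh a day gives a feel for the scale behind the headline TWh numbers.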


Mint
an hour ago
Why Anthropic CEO Dario Amodei thinks a 10-year AI regulation freeze is dangerous
Dario Amodei, Chief Executive of AI firm Anthropic, has criticised a Republican proposal to halt state-level regulation of artificial intelligence for ten years, calling the measure "far too blunt" in a sharply worded opinion piece published in The New York Times.

The proposal, reportedly embedded within President Donald Trump's revived tax cut bill, seeks to prevent individual US states from enforcing their own AI regulations. This move, backers say, is intended to create a unified national framework. However, it has faced significant pushback, including from a bipartisan coalition of state attorneys general who have already enacted safeguards against high-risk AI use.

Amodei, whose company is backed by Amazon, argued that the rapid pace of AI advancement demands a more agile and balanced approach. 'A 10-year moratorium is far too blunt an instrument. AI is advancing too head-spinningly fast,' he wrote. 'Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.'

Instead, Amodei called for a coordinated effort between the White House and Congress to develop a federal transparency standard. Such a framework, he suggested, would compel AI developers to openly disclose their testing methods, risk mitigation strategies, and national security considerations prior to releasing advanced models. Anthropic already shares these details publicly, Amodei noted, and rivals OpenAI and Google DeepMind have adopted similar practices. However, he cautioned that voluntary transparency may not be sustainable as AI systems become more powerful and commercial stakes rise. 'Legislative incentives may become necessary to ensure this openness continues,' he said.
Amodei's intervention adds weight to calls for a federal framework that preserves public safety while ensuring that innovation is not stifled, a balance that may prove increasingly difficult to strike as AI tools become more sophisticated and deeply embedded in daily life. (With inputs from Reuters)