Smart signals in place, NMC begins installing AI-powered speed radars

Time of India | 21-07-2025
With the first phase of the ₹197 crore Integrated Intelligent Traffic Management System (IITMS) project underway, the Nagpur Municipal Corporation (NMC), through implementing agency Keltron, has now begun deploying AI-powered speed radar systems on the city's major roads. This comes after Keltron completed integration of IITMS at 10 of the 171 identified traffic junctions in the city.
The pilot installation of the speed radar system was completed on the busy Omkar Nagar Square–Manewada Square stretch.
Equipped with radar sensors, ANPR (automatic number plate recognition) cameras, and high-intensity strobe flash units, the system can detect speeding vehicles in real time, capture licence plates, and instantly transmit the data to the city's traffic control command centre.
"The radar system was successfully calibrated and integrated. Once a vehicle exceeds the speed limit, the system triggers the camera to capture high-resolution images of the number plate, even at night," said a senior Keltron official.
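The trigger logic described by the official can be sketched conceptually: a radar reading above the limit fires an ANPR capture, and the resulting record is queued for the command centre. This is an illustrative sketch only; the function names, the speed limit, and the plate-capture callback are all assumptions, not Keltron's actual design.

```python
from dataclasses import dataclass

SPEED_LIMIT_KMPH = 60  # assumed limit for the stretch; the real limit is set per road


@dataclass
class Violation:
    """A single over-speed event: the captured plate and the measured speed."""
    plate: str
    speed_kmph: float


def process_reading(speed_kmph, capture_plate):
    """Trigger an ANPR capture only when the radar reading exceeds the limit."""
    if speed_kmph <= SPEED_LIMIT_KMPH:
        return None  # within limit: no camera trigger
    # Over the limit: fire the camera and record the plate with the speed.
    return Violation(plate=capture_plate(), speed_kmph=speed_kmph)


def transmit(violations):
    """Stand-in for sending violation records to the traffic control command centre."""
    return [f"{v.plate}@{v.speed_kmph:g}" for v in violations]
```

In a real deployment the capture callback would drive the camera and strobe hardware, and transmission would go over a network link to the control room rather than returning strings.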
The AI-powered enforcement system is a major upgrade under the IITMS project, which aims to modernise traffic management, improve road safety, and bring down accident rates in the city.
The radar unit on the Omkar Nagar–Manewada stretch is the first of many planned across 32 major roads, where traffic volume and speeding are key concerns. At some junctions or routes, multiple radar units will be installed to cover both sides of the carriageway.
In parallel, Keltron has also initiated the conversion of 50 traffic intersections under IITMS. This includes the installation of smart traffic signals, vehicle actuated systems, surveillance cameras, and real-time traffic data integration at each junction. These upgrades are part of the wider plan to cover all 171 junctions with intelligent systems.
Each radar and junction system is connected to the NMC's centralised traffic control room, allowing real-time monitoring, data collection, and automated violation alerts.
Senior civic officials, including the municipal commissioner, are likely to inspect the radar installation site soon. Keltron's technical teams are continuously monitoring the functioning of systems on the ground.
The IITMS project is a critical part of Nagpur's Smart City initiative and is expected to transform how traffic is monitored, enforced, and managed in the city.

Related Articles

AI Is Wrecking an Already Fragile Job Market for College Graduates

Hindustan Times | 13 minutes ago

For a growing number of bosses, the answer is not much—AI can do the work instead. At Chicago recruiting firm Hirewell, marketing agency clients have all but stopped requesting entry-level staff—young grads once in high demand but whose work is now a 'home run' for AI, the firm's chief growth officer said. Dating app Grindr is hiring more seasoned engineers, forgoing some junior coders straight out of school, and CEO George Arison said companies are 'going to need less and less people at the bottom.'

Bill Balderaz, CEO of Columbus-based consulting firm Futurety, said he decided not to hire a summer intern this year, opting to run social-media copy through ChatGPT instead. Balderaz has urged his own kids to focus on jobs that require people skills and can't easily be automated. One is becoming a police officer. Having a good job 'guaranteed' after college, he said, 'I don't think that's an absolute truth today any more.'

There's long been an unwritten covenant between companies and new graduates: Entry-level employees, young and hungry, are willing to work hard for lower pay. Employers, in turn, provide training and experience to give young professionals a foothold in the job market, seeding the workforce of tomorrow. A yearslong white-collar hiring slump and recession worries have weakened that contract. Artificial intelligence now threatens to break it completely.

That is ominous for college graduates looking for starter jobs, but also potentially a fundamental realignment in how the workforce is structured. As companies hire and train fewer young people, they may also be shrinking the pool of workers that will be ready to take on more responsibility in five or 10 years. Companies say they are already rethinking how to develop the next generation of talent. AI is accelerating trends that were already under way.
With each new class after 2020, an ever-smaller share of graduates is landing jobs that require a bachelor's degree, according to a Burning Glass Institute analysis of labor data. That's happening across majors, from visual arts to engineering and mathematics. And unemployment among recent college graduates is now rising faster than for young adults with just high-school or associate degrees. Meanwhile, the sectors where graduate hiring has slowed the most—like information, finance, insurance and technical services—are still growing, a sign employers are becoming more efficient and see no immediate downside to hiring fewer inexperienced workers, said Matt Sigelman, Burning Glass's president.

'This is a more tectonic shift in the way employers are hiring,' Sigelman said. 'Employers are significantly more likely to be letting go of their workers at the entry level—and in many cases are stepping up their hiring of more experienced professionals.'

After dancing around the issue in the 2½ years since ChatGPT's release upended the way almost all companies plan for their futures, CEOs are now talking openly about AI's immense capabilities likely leading to deep job cuts. Top executives at industry giants including Amazon and JPMorgan have said in recent weeks that they expect their workforces to shrink considerably. Ford CEO Jim Farley said he expects AI will replace half of the white-collar workforce in the U.S.

For new graduates, this means not only are they competing for fewer slots but they are also increasingly up against junior workers who have been recently laid off. While many bosses say they remain committed to entry-level workers and understand their value, the data is increasingly stark: The overall national unemployment rate is at about 4%, but for new college graduates, it was 6.6% over the past 12 months ending in May. At large tech companies, which power much of the U.S. economy, the trend is perhaps more extreme.
Venture-capital firm SignalFire found that among the 15 largest tech companies by market capitalization, the share of entry-level hires relative to total new hires has fallen by 50% since 2019. Recent graduates accounted for just 7% of new hires in 2024, down from 11% in 2022. A May report by the firm pointed to shrinking teams, fewer programs for new graduates and the growing influence of AI.

Jadin Tate studied informatics at the University at Albany, hoping to land a job focused on improving the user experience of apps or websites. The week before graduation, his mentor leveled with him: That field is being taken over by AI. He warned it may not exist in five years. Tate has attended four conventions this year, networking with companies and asking if they are hiring. He has also applied to dozens of jobs, without success. Several of his college friends are working retail and food-service jobs as they apply for white-collar roles or before their start dates. 'It has been intimidating,' Tate said of his job search.

Indeed, recent graduates and students are fighting over a smaller number of positions geared at entry-level workers. There were 15% fewer job postings to the entry-level job-search platform Handshake this school year than last, while the number of applications per job rose 30%, according to the platform. Internship postings and applications saw similar trend lines between 2023 and 2025.

Less gruntwork, more mentoring

The shift to AI presents huge risks to companies on skill development, even as they enjoy increased efficiency and productivity from fewer workers, said Chris Ernst, chief learning officer at the HR and finance software company Workday. Ernst said his research shows that workers mostly learn through experience, and then the remainder comes from relationships and development.
When AI can produce in seconds a report that previously would have taken a young employee days or weeks—teaching that person critical skills along the way—companies will have to learn to train that person differently. 'Genuine learning, growth, adaptation—it comes from doing the hard work,' he said. 'It's those moments of challenge, of hardship—that's the crucible where people grow, they change, they learn most profoundly.' Among other things, Ernst said employers must be intentional about connecting young workers with colleagues and making time to mentor them.

At the pipeline operator Williams, based in Tulsa, Okla., the company realized that, thanks to AI, young professionals were performing less of the drudgework, like digging into corporate data, that historically has taught them the core of the business. The company this year started a two-day onboarding program where veteran executives teach new hires the business fundamentals. Chief Human Resources Officer Debbie Pickle said that increased training will help new hires develop without loading them down with gruntwork. 'These are really bright, top talent people,' she said. 'We shouldn't put a cap on how we think they can add value for the company.' Still, Pickle said, the increased efficiency will allow the company to expand the business while keeping head count flat in the future.

Some of the entry-level jobs most at risk are the most lucrative for recent graduates, including on Wall Street and in big law firms where six-figure starting salaries are the norm. But those jobs have also been famously menial for the first few years—until AI came along. The investment firm Carlyle now pitches to prospective hires that they won't be doing grunt work.
Junior hires go through AI training and a program called 'AI University' in which employees share best practices and participate in pilot programs, said Lúcia Soares, the firm's chief information officer. In the past, she said, junior hires evaluating a deal would find articles on Google, request documents from companies, review that information manually, highlight details and copy and paste information from one document to another. Now, AI tools can do almost all of that.

'That analyst still has to go in and make sure the analysis is accurate, question it, challenge it,' she said. 'The nature of the brain work that needs to go into it is very much the same. It's just the speed at which these analysts can move.' She said Carlyle has maintained the same volume of entry-level hiring but said 90% of its staff has adopted generative AI tools that automate some work.

'Messy transition'

Carlyle's reliance on young staff to check AI's output highlights what many users know to be true: it still struggles in some cases to do the work of humans effectively. Still, many executives expect that gap to close quickly.

At the New York venture-capital firm Primary Venture Partners, Rebecca Price said she's encouraging CEOs of the firm's 100 portfolio companies to think hard about every hire and whether the role could be automated. She said it's not that there are no entry-level jobs, but that there's a gap between the skills companies expect out of their junior hires in the age of AI and what most new graduates are equipped with out of school. An engineer in a first job used to need basic coding abilities; now that same engineer needs to be able to detect vulnerabilities and have the judgment to determine what can be trusted from the AI models. New grads must also learn faster and think critically, she said—skills that many of the newest computer-science grads don't have yet.
'We're in this messy transition,' said Price, a partner at the firm. 'The bar is higher and the system hasn't caught up.'

Students are seeing the transition in real time. Arjun Dabir, a 20-year-old applied math major at the University of California, Irvine, said when he applied for internships last year, companies asked for knowledge of coding languages. Now, they want candidates who are familiar with how AI 'agents' can automate certain tasks on behalf of humans—or 'agentic workflows' in the new vernacular. 'What is an intern going to do?' Dabir said as drones buzzed overhead nearby at an artificial intelligence convention in June in Washington, DC. The work typically done by interns, 'that task is no longer necessary. You don't need to hire someone to do it.'

Venture capitalist Allison Baum Gates said young professionals will need to be more entrepreneurial and gain experience on their own without the standard track of starting as an analyst or a paralegal and working their way up. Her firm, SemperVirens, invests in healthcare startups, workforce technology companies and fintech firms, some of which are replacing entry-level jobs. 'Maybe I'm wrong and this leads to a wealth of new jobs and opportunities and that would be a great situation,' she said. 'But it would be far worse to assume that there's no adverse impact and then be caught without a solution.'

Rosalia Burr, 25, is trying to avoid such an outcome. She graduated in 2022 and quickly joined Liberty Mutual Insurance, where she had interned twice during college at Arizona State University. She was laid off from her payroll job in December. Running has soothed her anxiety. This spring, however, she tore her hip flexor and had to rest to heal. Job rejections, as she was stuck inside, hit extra hard. 'I felt that I was failing.' Her goal now is to find a client-facing job. 'If you're in a business back-end role, you're more of a liability of getting laid off, or your job being automated,' she said.
'If you're client facing, that's something people can't really replicate' with AI.

Write to Lindsay Ellis and Katherine Bindley.

Sam Altman says GPT-5 AI is so smart, it made him feel useless ahead of imminent launch

India Today | an hour ago

AI is evolving at an unprecedented pace and getting smarter and smarter with each passing day. This advancement is also bringing on the fear of AI and its potential to replace humans. While that day hasn't arrived yet, even OpenAI CEO Sam Altman admits he's unsettled by what's coming next. Sharing a recent incident involving the upcoming GPT-5, Altman reveals that the capabilities of this advanced model have left him feeling almost 'useless.'

Speaking on comedian Theo Von's This Past Weekend podcast, Altman revealed that when he recently used GPT-5 for some work, the model demonstrated such intelligence that it left him feeling 'useless relative to the intelligence of AI.' Altman shared that he had fed the AI a question he didn't fully understand: 'I put it in the model (a question I got in an email I received) – this is GPT-5 – and it answered it perfectly. I felt useless relative to the AI in this thing that I felt I should have been able to do, and I couldn't, and it was really hard. But the AI just did it like that. It was a weird feeling.' Altman described the moment as a realisation of how powerful GPT-5 has become.

Altman has already confirmed on X (formerly Twitter) that GPT-5 will be released 'soon.' While the company has not yet shared the exact dates, the teaser and the fact that Altman shared GPT-5 is already being used are fuelling speculation that the next-generation model is just weeks away. According to rumours, the advanced AI model from OpenAI could arrive as early as August, with OpenAI planning to release mini and nano versions of the model through its API. GPT-5 has already been spotted in limited tests, further fuelling speculation about its arrival and intensifying interest in what OpenAI has called its most advanced system yet. In fact, Altman had previously described the model as 'a system that integrates a lot of our technology,' with enhanced reasoning capabilities built in.
The so-called 'o3 reasoning' engine—originally planned as a standalone system—will now be integrated directly into GPT-5, making it a more unified and powerful system.

On the same podcast, Altman also touched upon concerns raised about the mental health impact of AI, particularly as more people are using AI for therapy, forming emotional attachments to AI companions, and relying on them extensively. Answering Von's question on whether AI will have the same negative effects that social media has, Altman replied, 'I'm scared of that... I don't think we know quite the ways in which it's going to have those negative impacts, but I feel for sure it's going to have some and we'll have to.'

AI in the military: Path to ethical and strategic leadership

Hindustan Times | an hour ago

With defence emerging as one of AI's most sensitive applications, a blend of innovation and responsibility is vital. India is laying the groundwork for responsible use of AI in defence, led by key institutions like the Defence Research and Development Organisation (DRDO), NITI Aayog, and the ministry of electronics and IT (MeitY).

To ensure ethical oversight, the DRDO launched the ETAI framework for evaluating trustworthy AI. At its essence, the framework aims to tackle difficult questions such as legality, safety, and human control, right from the start of a system's development and through its lifecycle. The framework is modular, incorporating internal ethical assessments and formalised reviews based on the risk level of the technology. DRDO's goal here is to entrench responsibility into every stage of AI development and not treat it as an afterthought.

The ETAI framework stands on five key pillars:

* Reliability: System performance needs to be accurate, even in chaotic or unfamiliar battlefield scenarios.
* Safety: Guardrails against unintended consequences, particularly with autonomous machines.
* Transparency: Human commanders should have the ability to trace and understand decision chains.
* Fairness: Algorithmic biases should not influence critical decisions.
* Privacy: Protects sensitive data from unauthorised access.

The more autonomous and impactful an AI system is, the more rigorous its review. For example, a chatbot used for internal training would not require the same oversight as a drone with live targeting capabilities.

In 2021, NITI Aayog also laid out a framework that aligns AI development with India's constitutional ethos, in particular Article 14 (right to equality) and Article 21 (right to life and personal liberty).
The seven principles it emphasised are:

* Safety and reliability
* Equality
* Inclusivity and non-discrimination
* Privacy and security
* Transparency
* Accountability
* Reinforcement of positive human values

Although originally aimed at civilian sectors like health care, these principles are just as relevant for defence. After all, the ethical implications of facial recognition, predictive analytics, and autonomous systems remain just as relevant to military use.

While the policy framed by DRDO and NITI Aayog appears to be robust and well thought out, turning paper policy into practice is where the real challenge lies. Since these principles lack legal enforceability, co-ordination between agencies and ethical AI frameworks remains limited. As a result, many projects operate in silos, making comprehensive ethical oversight a challenge.

More recently, in 2024, MeitY launched the IndiaAI Mission with over ₹10,000 crore in funding. This is meant to support researchers and companies who need heavy compute power for AI training. A big part of this initiative is the Artificial Intelligence Research, Analytics, and Knowledge Assimilation Platform (AIRAWAT), a public compute cloud designed to support India's AI needs, especially for startups and public agencies. In practice, AIRAWAT provides a cloud platform where developers can access massive computational power for training AI models. This can be directly beneficial for defence applications such as simulation training, image intelligence analysis, or building large language models for military use, which require supercomputing resources.

While the ministry of defence, MeitY, and NITI Aayog are all making strides, their efforts are largely siloed. India needs a unified platform to align goals, streamline operations, and ensure ethical consistency.
Key recommendations include:

* Setting up a Defence AI Regulatory Authority with clear legal powers
* Making ethical and risk-based assessments mandatory in all defence AI procurements
* Expanding AIRAWAT-like infrastructure specifically for secure military testing environments
* Building structured collaboration between DRDO, MeitY, and NITI Aayog

It is clear that India has laid the foundation for responsible AI use in defence. It now needs to bring it all together. As countries in the Global South look for models that balance innovation with ethics, India has an opportunity to lead, not just by building powerful tools, but by using them wisely and lawfully.

This article is authored by Zain Pandit, partner, and Aashna Nahar, associate, JSA Advocates and Solicitors.
