
Prahlada Ramarao: Proud father of Akash air defence
It was a deeply emotional moment for 78-year-old Prahlada Ramarao, former aerospace engineer, missile designer and ex-Defence Research & Development Organisation (DRDO) scientist. He was watching the impressive combat performance of the baby he had nurtured as it protected Indian assets from incoming Pakistani drones and missiles between May 7 and 10, as 'Operation Sindoor' rolled on.
It was the indigenous Akash air defence missile system, the product of his sustained efforts over a 44-year career, from 1971 to 2015. Tears welled up. He was speechless. It was a moment of pride and joy… and an overwhelming sense of accomplishment.
The Akash medium-range surface-to-air missile system, with a range of about 30 km and an operational altitude ceiling of 18 km, can simultaneously engage multiple incoming targets, and is operational with the Indian Air Force and the Army. It provides air defence for vulnerable forces or areas against multiple, simultaneous air threats.
The system has cutting-edge features, including real-time multi-sensor data processing and threat evaluation, which enable simultaneous engagement of multiple targets from any direction. The entire system is flexible: it is configured on mobile platforms, is upscalable, and can be operated in group or autonomous modes. It employs command guidance, relying on a phased-array guidance radar to steer the missile until it intercepts the incoming target.
Related Articles

Mint · 5 hours ago
Reddit says Anthropic used its community to train AI without permission, files lawsuit
Social media platform Reddit has launched legal proceedings against artificial intelligence company Anthropic, accusing the firm of unlawfully scraping user-generated content to train its AI assistant, Claude. Filed on Wednesday in the California Superior Court in San Francisco, the lawsuit alleges that Anthropic used automated tools to extract Reddit users' comments without permission, despite being explicitly told not to do so. According to Reddit, this content was then used to train the Claude chatbot without user consent or proper licensing.

'AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,' said Reddit's Chief Legal Officer, Ben Lee. He emphasised the platform's commitment to safeguarding its community, which generates vast volumes of public discourse each day.

Reddit, which went public last year, has previously struck licensing deals with AI developers such as OpenAI and Google. These arrangements, the company says, allow for transparent and lawful use of data while enabling user protections like content deletion and spam prevention. 'These partnerships allow us to enforce meaningful safeguards for our users,' Lee added, underlining the contrast between authorised data access and Anthropic's alleged misconduct.

Anthropic, founded by ex-OpenAI executives in 2021 and now heavily backed by Amazon, denies the allegations. 'We disagree with Reddit's claims and will defend ourselves vigorously,' the company said in a brief statement.

The legal complaint takes a different route from other recent cases involving AI firms. Rather than alleging copyright infringement, as seen in lawsuits from music publishers targeting Claude for replicating song lyrics, Reddit's filing focuses on breach of contract and unfair competition. Specifically, it accuses Anthropic of violating Reddit's terms of service by scraping data without a proper licence.
The lawsuit cites a 2021 research paper co-authored by Anthropic CEO Dario Amodei, in which Reddit's subforums were explicitly mentioned as high-quality sources for training AI systems. Forums covering topics like gardening, history, and personal advice were listed as especially valuable for modelling human language patterns. Anthropic has previously claimed, including in a 2023 letter to the US Copyright Office, that its methods of training Claude constitute lawful use, describing them as a form of statistical analysis rather than reproduction of content.


Economic Times · 7 hours ago
OpenAI finds more Chinese groups using ChatGPT for malicious purposes
OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday. While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio. OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID. Some content also criticised US President Donald Trump's sweeping tariffs, generating X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation. A third example OpenAI found was a China-origin influence operation that generated polarised social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.
OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.