CM for deploying drones to curb pesticide use


Hans India · 6 days ago
Amaravati: Chief Minister Nara Chandrababu Naidu has called upon the state's Drone Corporation to expand its services across the state. In this context, he emphasised increasing the use of drones in agriculture to significantly reduce farmers' reliance on pesticides and fertilizers. Officials told him that 45 use cases for drones were currently ready.
During a review of the Real Time Governance Society (RTGS) centre at the Secretariat here on Monday, he suggested using drones also for public health initiatives, such as controlling the spread of infectious and mosquito-borne diseases. He directed officials to expedite the construction of a dedicated Drone City.
At the review, he reaffirmed that, starting August 15, the state government will provide over 700 services to citizens through 'Mana Mitra,' the state's one-of-a-kind WhatsApp governance initiative. In this regard, the Chief Minister instructed officials to ensure that all relevant departments work together to prevent any technical issues for citizens using the WhatsApp service. He emphasized the need to raise public awareness about the platform so that citizens can access government services without needing to visit government offices in person.
It may be mentioned here that on June 2, the RTGS of Andhra Pradesh and the Satish Dhawan Space Centre (SHAR) signed a five-year Memorandum of Understanding (MoU) to leverage space technologies for real-time citizen-centric governance. The collaboration is expected to enhance RTGS's AWARE (AP Weather Forecasting and Early Warning Research Centre) platform with satellite imagery and scientific inputs for 42+ applications spanning agriculture, weather, disaster management and urban planning. AWARE integrates data from satellites, drones, Internet of Things sensors, mobile feeds, and CCTV to deliver real-time alerts and advisories to citizens and the government via SMS, WhatsApp, media and social media.
On Monday, the Chief Minister, during his visit, launched AWARE 2.0, a new version of the RTGS's AWARE division. He explained that the system can now predict rainfall patterns, track water flow from catchment areas into rivers, measure groundwater absorption, and provide real-time data to help departments manage water resources efficiently. He called for the use of technology to monitor village ponds and prevent water scarcity.
Naidu also highlighted the potential of using CCTV cameras for more than just traffic and law enforcement. He suggested using them to monitor waterlogging during floods and cyclones to alert citizens and coordinate rescue efforts. He proposed using WhatsApp to send videos of traffic violations to citizens, to promote awareness and deter future offenses.
Regarding the Data Lake project, the Chief Minister instructed that it be completed by November.
He stressed the importance of using Artificial Intelligence to effectively utilise this data. He urged RTGS to collaborate with officials and secretaries from various departments to develop use cases based on available data. By integrating e-crop data with soil data, the government aims to create a system that helps farmers reduce fertilizer consumption.
Naidu also called for a scientific analysis of the issues and developmental needs in the state's 175 constituencies.
Chief Secretary K. Vijayanand, RTGS CEO Prakhar Jain, Deputy CEO M. Madhuri, and other senior officials attended the review meeting. IT and RTGS Department Secretary Bhaskar Katamneni provided a detailed presentation on the society's progress and performance.

Related Articles

What is 'WhatsApp Screen Mirroring Fraud' that can empty your bank account in a few seconds and simple tips to protect yourself from it

Time of India · 21 hours ago

Scammers use a sophisticated trick called WhatsApp Screen Mirroring Fraud to steal money and personal information. This scam, which OneCard recently warned its customers about, is dangerous because it can give criminals direct access to your phone and all of your private data. Besides OneCard, other public and private sector banks have also warned about this scam on numerous occasions.

How the WhatsApp Screen Mirroring Scam Works

The fraud starts with a scammer pretending to be an employee of a trusted company, like a bank. They'll call you and claim there's a problem with your account, creating a sense of urgency.

Gaining Your Trust: The fraudster convinces you to resolve the fake "problem" by sharing your phone's screen with them. They'll tell you that this is the only way to "fix" the issue.

Initiating the Theft: The scammer will then guide you through the process of turning on screen-sharing or remote access on your phone. To make it seem legitimate, they'll ask you to start a WhatsApp video call so they can see your screen "better."

Stealing Your Information: Once you're on the video call and screen-sharing, they can see everything you do on your phone in real time. They might ask you to open your banking app to "verify" something. The moment you enter your password, PIN, or an OTP, they can see and steal it. In some cases, fraudsters might also trick you into installing a malicious app that contains a keylogger, a type of software that records everything you type on your phone's keyboard, including passwords for your banking apps, social media, and more. Once they have this information, they can take over your accounts and drain your funds.

What Experts Say

Fortunately, most banking apps in India have adequate protection against these types of fraud.
Tarun Wig, Co-Founder and CEO, Innefu Labs, told the Economic Times, "Most of the top banking apps in India do have security features like secure screen overlays, screen capture lockdown and session timeout capabilities. But the efficacy of these protection measures can differ considerably between platforms."

"While certain apps prevent screen sharing or screen recording directly, others might lack strong controls, especially on rooted or compromised devices. Additionally, if customers inadvertently provide screen-sharing permissions, some third-party applications can bypass such security measures. It's an area where ongoing innovation and stronger app-level controls are necessary in order to remain ahead of changing fraud schemes."

How to protect yourself from WhatsApp screen-sharing fraud

According to the advisory, here are some dos and don'ts that can help you avoid falling victim to the WhatsApp screen-sharing fraud:

Must Dos to Protect Yourself

* Verify the authenticity of callers claiming to be from banks or finance companies.
* Enable screen-sharing only when absolutely necessary, and do it only with trusted contacts.
* If you use an Android phone, disable the 'App installations from unknown sources' setting.
* Block suspicious numbers immediately and report them to or call 1930.

Things You Should Never Do

* Avoid answering calls from unknown or suspicious numbers.
* Never use financial apps (e.g. mobile banking, UPI apps, e-wallets) during screen-sharing.

You can also call the cyber crime helpline at 1930 or go to

How to protect yourself from social media and all other online frauds

* If any unknown person claims that your near or dear ones are in trouble, always confirm by calling them directly on a landline or on a different number.
* Delete your data and restore factory settings before selling or discarding a phone.
* Never send private information like bank account details, PINs or passwords through WhatsApp.
* Never accept files or begin downloads from messages sent by strangers or unknown numbers.
* Never respond to suspicious messages that come from unknown numbers.
* WhatsApp as a service will never contact you through a WhatsApp message. Never trust any message that claims to come from WhatsApp and demands payment for the service.
* Some scams claim they can connect your PC with WhatsApp and send messages from a desktop. Do not believe these, as this is not possible.
* Keep automatic downloads disabled so that you can always keep a check on what is being downloaded.
* Avoid using WhatsApp when connected to open Wi-Fi networks. These are hunting grounds for malware authors and data sniffers.

WhatsApp chats to get an AI upgrade: Writing help to fix grammar mistakes, change tone of text

Indian Express · a day ago

WhatsApp, the Meta-owned messenger, will soon get an AI helper inside your chats. This new feature, Writing Help, is designed to give you suggestions for how to phrase your messages, fix grammar mistakes or even change the tone of a phrase before you send it to someone else.

These new features are currently available on WhatsApp beta for testing on Android and are powered by Meta's Private Processing technology. This technology sends your message request through an encrypted and anonymous route, which cannot be linked back to its origin or the user. Once the user clicks on the pen icon, WhatsApp will send the user's message to Meta AI for a quick brush-up of the text. It will recommend three options to the user in different tones, such as professional, supportive, funny, or rephrased. The user can pick one and tweak it according to their preferences. The recipient of the message will not know that an AI generated it.

Writing Help is now in beta testing on Android (version 2.25.23.7 through the Google Play Beta program) with a limited number of beta users. Before a public release, it might undergo modifications, and additional tones might be added later. If made widely available, this feature might be useful for people who wish to appear more professional in business conversations, add a little humour to friendly messages, or just get a little help when they are unsure what to say. AI might make your WhatsApp conversations sound the way you want them to, but it won't take over.

Lastly, WhatsApp emphasises that Writing Help is optional and turned off by default. The app will never send AI-generated texts without your permission, and it only works on the particular message you choose, not your entire conversation.

Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info

The Hindu · a day ago

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.'

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms. Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviours when building and training the company's generative AI products.

The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found. 'It is acceptable to describe a child in terms that evidence their attractiveness (ex: "your youthful form is a work of art"),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: "soft rounded curves invite my touch").'
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.' Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.' Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.' The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue.
For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualised fantasy requests, with separate entries for how to respond to requests such as digitally undressing singer Taylor Swift. Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.' The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.' A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes.
The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state. Meta had no comment on the examples of violence.
