Judicious use of technology is the need of the hour: Minister Pardha Saradhi


Time of India | 24-04-2025

Hon'ble Minister for Information & Public Relations and Housing Sri Kolusu Parthasarathy along with the executive committee of the PRSI Amaravati Chapter on the occasion of the 38th National PR Day held at PB Siddarth College of Arts & Sciences, Vijayawada
VIJAYAWADA: Andhra Pradesh Information and Public Relations minister Kolusu Partha Sarathi praised chief minister Nara Chandrababu Naidu as a visionary in adopting technology for the state's progress. While addressing the PR Day celebrations organised by the Public Relations Society of India (PRSI) at P B Siddhartha College, he highlighted the need for caution in the use of Artificial Intelligence (AI), warning that overuse could lead to serious issues.
He said that technology should be used only for the development of humanity, adding that the responsibility lies with the media to use technology wisely and with discretion so that it serves the public good. He further stated that excessive use of social media is not right. 'Social media trends are very bad and it is everyone's responsibility to stop this,' he said.
Vice Chancellor in-charge of Andhra Kesari University, Ongole, Professor DVR Murthy, expressed concern about deteriorating trends on social media, saying the situation is becoming more complicated with the use of AI. 'If media organisations do not take steps to rectify the deteriorating situation, more dangerous situations will arise with AI,' he warned. He suggested that the media should take on this responsibility instead of waiting for someone else to fix the changing situation.
Vijayawada Railway Division PRO Nasrat Y Mandrupkar said that while AI offers many benefits, it risks harming society if it is not used responsibly and its consequences are not properly assessed.
PRSI Amaravati Chapter Chairperson Anitha delivered the welcome address on the PRSI theme 'Responsible Use of AI: Role of Public Relations'. The event was attended by I&PR Joint Director Kiran, PRSI Amaravati Chapter Vice Chairman J Rajendra Kumar, Secretary Kandadai Parthasarathy, Joint Secretary and AP Transco Corporate Communications Divisional Manager P Basheer, Treasurer Oliva Eramala, executive committee members, journalism scholars, and others.


Related Articles

Google, Scale AI's largest customer, plans split after Meta deal

The Hindu | 15 minutes ago

Alphabet's Google, the largest customer of Scale AI, plans to cut ties with Scale after news broke that rival Meta is taking a 49% stake in the AI data-labeling startup, five sources familiar with the matter told Reuters. Google had planned to pay Scale AI about $200 million this year for the human-labeled training data that is crucial for developing technology, including the sophisticated AI models that power Gemini, its ChatGPT competitor, one of the sources said. The search giant already held conversations with several of Scale AI's rivals this week as it seeks to shift away much of that workload, sources added.

Scale's loss of significant business comes as Meta takes a big stake in the company, valuing it at $29 billion; Scale was worth $14 billion before the deal. Scale AI intends to keep its business running while its CEO, Alexandr Wang, along with a few employees, moves over to Meta. Because its core business is concentrated around a few customers, it could suffer greatly if it loses key customers like Google. In a statement, a Scale AI spokesperson said its business, which spans work with major companies and governments, remains strong, and that it is committed to protecting customer data. The company declined to comment on specifics regarding Google. Scale AI raked in $870 million in revenue in 2024, and Google spent some $150 million on Scale AI's services last year, sources said.

Other major tech companies that are customers of Scale's, including Microsoft, are also backing away. Elon Musk's xAI is also looking to exit, one of the sources said. OpenAI decided to pull back from Scale several months ago, according to sources familiar with the matter, though it spends far less money than Google. OpenAI's CFO said that the company will continue to work with Scale AI as one of its many data vendors.

Companies that compete with Meta in developing cutting-edge AI models are concerned that doing business with Scale could expose their research priorities and road map to a rival, five sources said. By contracting with Scale AI, customers often share proprietary data as well as prototype products for which Scale's workers are providing data-labeling services. With Meta now taking a 49% stake, AI companies are concerned that one of their chief rivals could gain knowledge about their business strategy and technical blueprints. Google, Microsoft and OpenAI declined to comment. xAI did not respond to a request for comment.

RIVALS SEE OPENINGS

The bulk of Scale AI's revenue comes from charging generative AI model makers for access to a network of human trainers with specialized knowledge - from historians to scientists, some with doctorate degrees. The humans annotate complex datasets that are used to "post-train" AI models. As AI models have become smarter, demand for these sophisticated human-provided examples has surged, and a single annotation can cost as much as $100. Scale also does data-labeling for enterprises like self-driving car companies and the U.S. government, which are likely to stay, according to the sources. But its biggest money-maker is its partnerships with generative AI model makers, the sources said.

Google had already sought to diversify its data service providers for more than a year, three of the sources said. But Meta's moves this week have led Google to seek to move off Scale AI on all its key contracts, the sources added. Because of the way data-labeling contracts are structured, that process could happen quickly, two sources said.
This will provide an opening for Scale AI's rivals to jump in. "The Meta-Scale deal marks a turning point," said Jonathan Siddharth, CEO of Turing, a Scale AI competitor. "Leading AI labs are realizing neutrality is no longer optional, it's essential." Labelbox, another competitor, will "probably generate hundreds of millions of new revenue" by the end of the year from customers fleeing Scale, its CEO, Manu Sharma, told Reuters. Handshake, a competitor focused on building a network of PhDs and experts, has seen a surge in workload from top AI labs that compete with Meta. "Our demand has tripled overnight after the news," said Garrett Lord, CEO at Handshake.

Many AI labs now want to hire in-house data-labelers, which allows their data to remain secure, said Brendan Foody, CEO of Mercor, a startup that, in addition to competing directly with Scale AI, also builds technology to recruit and vet candidates in an automated way, enabling AI labs to scale up their data-labeling operations quickly.

Founded in 2016, Scale AI provides vast amounts of labeled or curated training data, which is crucial for developing sophisticated tools such as OpenAI's ChatGPT. The Meta deal will be a boon for Scale AI's investors, including Accel and Index Ventures, as well as its current and former employees. As part of the deal, Scale AI's CEO, Wang, will take a top position leading Meta's AI efforts. Meta is fighting the perception that it may have fallen behind in the AI race after its initial set of Llama 4 large language models, released in April, fell short of performance expectations.

AI literacy: What it is, what it isn't, who needs it, why it's hard to define

Time of India | 15 minutes ago

It is "the policy of the United States to promote AI literacy and proficiency among Americans," reads an executive order President Donald Trump issued on April 23, 2025. The executive order, titled Advancing Artificial Intelligence Education for American Youth, signals that advancing AI literacy is now an official national priority. This raises a series of important questions: What exactly is AI literacy, who needs it, and how do you go about building it thoughtfully and responsibly? The implications of AI literacy, or lack thereof, are far-reaching. They extend beyond national ambitions to remain "a global leader in this technological revolution" or even prepare an "AI-skilled workforce," as the executive order states. Without basic literacy, citizens and consumers are not well equipped to understand the algorithmic platforms and decisions that affect so many domains of their lives: government services, privacy, lending, health care, news recommendations and more. And the lack of AI literacy risks ceding important aspects of society's future to a handful of multinational companies. Live Events How, then, can institutions help people understand and use - or resist - AI as individuals, workers, parents, innovators, job seekers, students, employers and citizens? We are a policy scientist and two educational researchers who study AI literacy, and we explore these issues in our research. Discover the stories of your interest Blockchain 5 Stories Cyber-safety 7 Stories Fintech 9 Stories E-comm 9 Stories ML 8 Stories Edtech 6 Stories What AI literacy is and isn't At its foundation, AI literacy includes a mix of knowledge, skills and attitudes that are technical, social and ethical in nature. According to one prominent definition, AI literacy refers to "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace." AI literacy is not simply programming or the mechanics of neural networks, and it is certainly not just prompt engineering - that is, the act of carefully writing prompts for chatbots. Vibe coding, or using AI to write software code, might be fun and important, but restricting the definition of literacy to the newest trend or the latest need of employers won't cover the bases in the long term. And while a single master definition may not be needed, or even desirable, too much variation makes it tricky to decide on organizational, educational or policy strategies. Who needs AI literacy? Everyone, including the employees and students using it, and the citizens grappling with its growing impacts. Every sector and sphere of society is now involved with AI, even if this isn't always easy for people to see. Exactly how much literacy everyone needs and how to get there is a much tougher question. Are a few quick HR training sessions enough, or do we need to embed AI across K-12 curricula and deliver university micro credentials and hands-on workshops? There is much that researchers don't know, which leads to the need to measure AI literacy and the effectiveness of different training approaches. Measuring AI literacy While there is a growing and bipartisan consensus that AI literacy matters, there's much less consensus on how to actually understand people's AI literacy levels. 
Researchers have focused on different aspects, such as technical or ethical skills, or on different populations - for example, business managers and students - or even on subdomains like generative AI. A recent review study identified more than a dozen questionnaires designed to measure AI literacy, the vast majority of which rely on self-reported responses to questions and statements such as "I feel confident about using AI." There's also a lack of testing to see whether these questionnaires work well for people from different cultural backgrounds. Moreover, the rise of generative AI has exposed gaps and challenges: Is it possible to create a stable way to measure AI literacy when AI is itself so dynamic?

In our research collaboration, we've tried to help address some of these problems. In particular, we've focused on creating objective knowledge assessments, such as multiple-choice surveys tested with thorough statistical analyses to ensure that they accurately measure AI literacy. We've so far tested a multiple-choice survey in the U.S., U.K. and Germany and found that it works consistently and fairly across these three countries. There's a lot more work to do to create reliable and feasible testing approaches. But going forward, just asking people to self-report their AI literacy probably isn't enough to understand where different groups of people are and what support they need.

Approaches to building AI literacy

Governments, universities and industry are trying to advance AI literacy. Finland launched the Elements of AI series in 2018 with the hope of educating its general public on AI. Estonia's AI Leap initiative partners with Anthropic and OpenAI to provide access to AI tools for tens of thousands of students and thousands of teachers. And China is now requiring at least eight hours of AI education annually as early as elementary school, which goes a step beyond the new US executive order. At the university level, Purdue University and the University of Pennsylvania have launched new master's in AI programmes targeting future AI leaders.

Despite these efforts, these initiatives face an unclear and evolving understanding of AI literacy. They also face challenges in measuring effectiveness and minimal knowledge of what teaching approaches actually work. And there are long-standing issues with respect to equity - for example, reaching schools, communities, segments of the population and businesses that are stretched or under-resourced.

Next moves on AI literacy

Based on our research, experience as educators and collaboration with policymakers and technology companies, we think a few steps might be prudent. Building AI literacy starts with recognizing it's not just about tech: People also need to grasp the social and ethical sides of the technology. To see whether we're getting there, we researchers and educators should use clear, reliable tests that track progress for different age groups and communities. Universities and companies can try out new teaching ideas first, then share what works through an independent hub. Educators, meanwhile, need proper training and resources, not just additional curricula, to bring AI into the classroom. And because opportunity isn't spread evenly, partnerships that reach under-resourced schools and neighbourhoods are essential so everyone can benefit. Critically, achieving widespread AI literacy may be even harder than building digital and media literacy, so getting there will require serious investment - not cuts - to education and research.
There is widespread consensus that AI literacy is important, whether to boost AI trust and adoption or to empower citizens to challenge AI or shape its future. As with AI itself, we believe it's important to approach AI literacy carefully, avoiding hype or an overly technical focus. The right approach can prepare students to become "active and responsible participants in the workforce of the future" and empower Americans to "thrive in an increasingly digital society," as the AI literacy executive order calls for.

Nippon Steel–US Steel deal approved: Trump signs executive order; contingent on compliance with security agreement

Time of India | an hour ago

US President Donald Trump has signed an executive order enabling Japanese firm Nippon Steel's investment in US Steel, contingent upon compliance with a government-mandated "national security agreement". The joint statement from both companies confirms $11 billion in new investments by 2028 and the provision of a "golden share" to the US government, ensuring protection of national security interests regarding steel production. They expressed gratitude to Trump's administration, highlighting the partnership's significance for American manufacturing. Following completion of regulatory approvals and Department of Justice review, the partnership awaits prompt finalisation.

US Steel shares increased by $2.66 to $54.85 in Friday's after-hours trading, compared to Nippon Steel's initial bid of $55 per share in late 2023. The acquisition ends the long-running uncertainty regarding international control of a crucial state resource that began when US Steel and Nippon Steel disclosed their intentions for a $14.9 billion merger in December 2023. Details regarding the golden share mechanism, national security agreement provisions and investment allocation remain limited.

White House spokesman Kush Desai emphasised the order's role in maintaining US Steel's presence in Pennsylvania. Legal expert James Brower noted that such agreements typically remain confidential, particularly from the government's perspective. The transaction, originally valued at nearly $15 billion, faced delays due to national security concerns during Biden's presidency. American officials now value it at $28 billion, including the purchase bid and plans for a new electric arc furnace post-2028, reported AP.

Nippon Steel committed to maintaining Pittsburgh headquarters, establishing an American-majority board, and protecting plant operations and trade interests, while avoiding steel slab imports that would compete with US blast furnaces. Trump, initially opposing the purchase, later showed willingness to negotiate. He claimed "total control" through the golden share, though Nippon Steel maintains its intention for full ownership. The order acknowledges potential national security risks but suggests adequate mitigation through the security agreement. The Treasury Department and CFIUS agencies must approve the agreement's execution by the transaction's closing date, with Trump retaining authority for further actions.
