
Latest news with #GenerativeArtificialIntelligence

AI must aid human thought, not become its replacement

Hindustan Times

5 days ago



Watching the recent resurgence of violence in Kashmir, I find myself grappling with questions about the role of technology, particularly Generative Artificial Intelligence (GenAI), in warfare. India is built upon the philosophy of live and let live, yet that doesn't mean passively accepting aggression. As someone deeply invested in responsibly applying AI in critical industries like financial services, aerospace, semiconductors, and manufacturing, I am acutely aware of the unsettling dual-use potential of the tools we develop: the same technology driving efficiency and innovation can also be weaponised for harm.

We stand at a critical juncture. GenAI is rapidly shifting from mere technological advancement to a profound geopolitical tool. The stark division between nations possessing advanced GenAI capabilities and those dependent on externally developed systems poses serious strategic risks. Predominantly shaped by the interests and biases of the major AI-developing nations, primarily the US and China, these models inevitably propagate their creators' narratives, often undermining global objectivity.

Consider the inherent biases documented in AI models like OpenAI's GPT series or China's DeepSeek, which subtly yet powerfully reflect geopolitical views. Research indicates these models minimise criticism of their home nations, embedding biases that can exacerbate international tensions. China's AI approach, for instance, often reinforces national policy stances, inadvertently legitimising territorial disputes or delegitimising sovereign entities, complicating fragile diplomatic relationships, notably in sensitive regions like Kashmir.

Historically, mutually assured destruction (MAD) relied on nuclear deterrence. Today's arms race, however, is digital and equally significant in its potential to reshape global stability. We must urgently reconsider this outdated framework.

Instead of mutually assured destruction, I advocate for a new kind of MAD: mutual advancement through digitisation. This paradigm shifts the emphasis from destructive competition to collaborative development and technological self-reliance. This evolved MAD requires nations, particularly technologically vulnerable developing countries, to establish independent, culturally informed AI stacks. Such autonomy would reflect local histories, cultures, and political nuances, making these nations less susceptible to external manipulation. Robust, culturally informed AI not only protects against misinformation but fosters genuine global dialogue, contributing to a balanced, multipolar AI landscape.

At the core of geopolitical tensions lies a profound challenge of mutual understanding. The world's dominant AI models, primarily trained in English and Chinese, leave multilingual and culturally diverse nations like India, with its 22 official languages and hundreds of dialects, in a precarious position. A simplistic AI incapable of capturing nuanced linguistic subtleties risks generating misunderstandings with severe diplomatic repercussions. To prevent this, developing sophisticated, culturally aware AI models is paramount. Multilingual AI systems must leverage similarities among related languages, such as Marathi and Gujarati or Tamil and Kannada, to scale rapidly without losing depth or nuance. Such culturally adept systems, sensitive to idiomatic expressions and contextual subtleties, significantly enhance cross-cultural understanding, reducing the risk of conflict driven by miscommunication.

As GenAI becomes integrated into societal infrastructure and decision-making processes, it will inevitably reshape human roles. While automation holds tremendous promise for efficiency, delegating judgment, especially in life-and-death contexts like warfare, to AI systems raises profound concerns. I am reminded of the Cold War incident in 1983, when Soviet Lieutenant Colonel Stanislav Petrov trusted human intuition over technological alarms, averting nuclear disaster, a poignant reminder of why critical human judgment must never be relinquished to machines entirely.

My greatest fear remains starkly clear: a future where humans willingly delegate judgment and thought to algorithms. We should not accept this future. We share a collective responsibility, as innovators, technologists, and global citizens, to demand and ensure that AI serves human wisdom rather than replaces it. Let's commit today: never allow technology to automate away our humanity.

Arun Subramaniyan is founder and CEO, Articul8. The views expressed are personal.

World Youth Skills Day 2025: Spotlight on India's AI Talent Crisis

Entrepreneur

15-07-2025



As businesses evolve with robots, artificial intelligence and intelligent automation, the very nature of work is undergoing a paradigm shift. Nearly 40 per cent of workers' core skills are expected to change by 2030, says Satish Shukla, Co-founder, Addverb.

Opinions expressed by Entrepreneur contributors are their own. You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

With the launch of ChatGPT in 2022, Generative Artificial Intelligence (GenAI) has rapidly reshaped corporate culture. Today, not only tech firms but almost every start-up looks for employees who are proficient in AI-related skills. This year's 10th anniversary of World Youth Skills Day (WYSD), themed "Empowering Youth through AI and Digital Skills," could not be more appropriate. Mercer-Mettl's India Graduate Skill Index 2025 reveals that only 42.6 per cent of Indian graduates are considered employable, down from 44.3 per cent in 2023.

"As businesses evolve with robots, artificial intelligence and intelligent automation, the very nature of work is undergoing a paradigm shift. Nearly 40 per cent of workers' core skills are expected to change by 2030," said Satish Shukla, Co-founder, Addverb.

So, what skills will be in demand? According to the World Economic Forum's Future of Jobs 2025 report, the sharpest net rise in demand between 2025 and 2030 will be for AI and big data skills (+87 percentage points), followed by networks and cybersecurity (+70 pp), technological literacy (+68 pp), and creative thinking (+66 pp).

However, the skill shortfall is currently most acute in the cybersecurity industry. Fortinet's 2024 Global Cybersecurity Skills Gap Report finds that 92 per cent of Indian organisations suffered breaches last year, attributing many incidents to a shortage of skilled talent. Globally, nearly 4.8 million cybersecurity roles remained unfilled in 2023-24. "It reminds us of the urgent need to equip young people for an evolving economy. With cyber-threats growing more sophisticated, there is a critical shortage of professionals to safeguard our digital infrastructure," stressed Sunil Sharma, Vice-President, Sales (India & SAARC), Sophos.

Inclusion and public-private collaboration

UNESCO warns that women and marginalised groups remain significantly under-represented in AI-related fields. Bensely Zachariah, Global Head of HR at Fulcrum Digital, notes that despite more than 40 per cent of India's population being under 25, many young Indians, especially from Tier-2 and Tier-3 cities and marginalised communities, are still underserved and under-prepared for the digital economy. He advocates stronger public-private collaboration to fill this gap. "Integrating AI into school curricula, funding AI labs in rural institutions, and scaling initiatives such as AI Olympiads, boot camps, hackathons and digital apprenticeships will ensure every young person can thrive in a tech-driven world."

The World Economic Forum also estimates that 63 per cent of India's workforce, over 70 million people, will require upskilling or reskilling by 2030. So, how can we empower youth?

"We can open the doors, but young people must walk in with eagerness, hunger and excitement to learn. The real skill is a mindset geared toward experimentation and upgrading from foundational principles to business applications," said Noopur Julka, Senior Director, UST.

"What gives me optimism is the breadth of support, from government programmes like Skill India, PMKVY 4.0 and NPAI to on-the-ground efforts such as Skill Olympics and AI bootcamps actively bringing AI learning to youth across urban and rural India," added Shantanu Rooj, Founder and CEO, TeamLease Edtech.

"Empowering youth is a collective responsibility across industry, academia and government. This can help us unlock pathways to employment, entrepreneurship and innovation," concluded Juveri Mukherjee, Global Head of HR, Aurionpro Solutions.

Cybercriminals use GenAI, v0.dev to launch advanced phishing

Techday NZ

02-07-2025



Research from Okta Threat Intelligence has found that cybercriminals are leveraging Generative Artificial Intelligence (GenAI), specifically the v0.dev tool from Vercel, to manufacture sophisticated phishing websites swiftly and at scale. Okta's researchers have observed threat actors utilising the platform to create convincing replicas of sign-in pages for a range of prominent brands. According to the team's findings, attackers can build a functional phishing site by inputting a short text prompt, thereby substantially reducing the technical barrier for launching attacks.

New methods

The research revealed that v0.dev, which is intended to help developers create web interfaces through natural language instructions, is also allowing adversaries to quickly reproduce the design and branding of authentic login sites. In one case, Okta noted that the login page of one of its own customers had been imitated using this AI-powered software. Phishing sites created with v0.dev often also hosted visual assets, such as company logos, on Vercel's own infrastructure. Okta Threat Intelligence explained that consolidating these resources on a trusted platform is a deliberate technique by attackers. By doing so, they aim to evade typical detection methods that monitor for assets served from known malicious or unrelated infrastructure. Vercel responded to these findings by restricting access to the suspect sites and working with Okta to improve reporting processes for additional phishing-related infrastructure.

The observed activity confirms that today's threat actors are actively experimenting with and weaponising leading GenAI tools to streamline and enhance their phishing capabilities. The use of a platform like Vercel's allows emerging threat actors to rapidly produce high-quality, deceptive phishing pages, increasing the speed and scale of their operations.

Wider proliferation

The report also noted the existence of several public GitHub repositories that replicate the application, along with DIY guides enabling others to build their own generative phishing tools. According to Okta, this widespread availability is making advanced phishing tactics accessible to a broader cohort of cybercriminals, effectively democratising the creation of fraudulent web infrastructure. Further monitoring revealed that attackers have used the Vercel platform to host phishing sites imitating not just Okta customers, but also brands like Microsoft 365 and various cryptocurrency companies. Security advisories related to these findings have been made available to Okta's customers.

Implications for security

Okta Threat Intelligence underlined that this represents a significant change in the phishing threat landscape, given the increasingly realistic appearance of sites generated by artificial intelligence. The group stressed that safeguarding systems using traditional indicators of poor quality or imperfect design is now insufficient. Organisations can no longer rely on teaching users how to identify suspicious phishing sites based on imperfect imitation of legitimate services. The only reliable defence is to cryptographically bind a user's authenticator to the legitimate site they enrolled in. This is the technique that powers Okta FastPass, the passwordless method built into Okta Verify. When phishing resistance is enforced in policy, the authenticator will not allow the user to sign in to any resource but the origin (domain) established during enrollment. Put simply, the user cannot be tricked into handing over their credentials to a phishing site.

To address these risks, Okta Threat Intelligence has recommended several mitigation strategies. These include enforcing phishing-resistant authentication policies and prioritising the deactivation of less secure factors, restricting access to trusted devices, requiring secondary authentication when anomalous user behaviour is detected, and updating security awareness training to account for AI-driven threats. The research reflects the rapid operationalisation of machine learning tools in malicious campaigns, and highlights the need for continuous adaptation by organisations and their cybersecurity teams in response to evolving threats.
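The origin binding described above can be illustrated with a minimal sketch. This is not Okta's or the WebAuthn specification's actual API; the function name and data shapes are simplified assumptions. The core idea, though, is the real mechanism: the browser, not the user, reports the origin inside the signed client data, so a look-alike domain can never produce a matching value.

```python
import base64
import json

# Hypothetical sketch of the origin check behind phishing-resistant
# (WebAuthn-style) authentication. Names and payload shapes are illustrative.

ENROLLED_ORIGIN = "https://login.example.com"  # bound at enrollment time


def verify_assertion_origin(client_data_b64: str, enrolled_origin: str) -> bool:
    """Reject any sign-in whose origin differs from the one bound at enrollment."""
    client_data = json.loads(base64.urlsafe_b64decode(client_data_b64))
    # The origin field is filled in by the browser's security machinery,
    # not by anything the user types or sees on the page.
    return client_data.get("origin") == enrolled_origin


def make_client_data(origin: str) -> str:
    """Simulate the base64url-encoded client data a browser would produce."""
    payload = {"type": "webauthn.get", "origin": origin}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()


# A request relayed through a look-alike phishing domain fails the check:
phish = make_client_data("https://login.examp1e.com")
legit = make_client_data("https://login.example.com")

print(verify_assertion_origin(phish, ENROLLED_ORIGIN))  # False
print(verify_assertion_origin(legit, ENROLLED_ORIGIN))  # True
```

Because the comparison happens inside the authenticator flow rather than in the user's head, even a pixel-perfect AI-generated clone hosted on another domain fails, which is exactly why Okta frames this, rather than user training, as the reliable defence.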

Can you beat the machine in your next job application?

Borneo Post

11-06-2025



Using AI tools in hiring has both benefits and challenges: on one hand, it can help reduce human bias, but there are also growing concerns about fairness and transparency. — Bernama photo

IN today's fast-changing job market, landing your dream job may no longer depend solely on impressing a human recruiter. Increasingly, the 'first person' reviewing your application might be a machine. Artificial intelligence (AI) is transforming how companies hire new staff, from sorting résumés to scoring interviews. Job-seekers must learn how to stand out in this new digital era. But how does it work?

Use of AI tools in hiring

AI tools have become popular and relatively cost-effective to use, thanks to Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs). There is a multitude of AI tools for various management functions, including the all-important recruitment and selection functions of human resource (HR) management. Many companies, from technology giants to medium-sized enterprises, are in one way or another using AI tools to make recruitment faster, cheaper, more efficient, and more objective. These tools help HR teams handle thousands of applications, using algorithms to screen résumés and analyse pre-recorded videos to assess applicants' skills and personality traits. Some AI tools, such as Applicant Tracking Systems (ATS), can scan résumés and filter out those that do not match the job specification (JS). Others can record candidates' responses and analyse facial expressions, voice tone, and word choices. For example, some multinational firms are already using software like 'HireVue' or 'Pymetrics' to evaluate job applicants. These platforms claim to offer unbiased assessments. However, for interviewees it can be a daunting endeavour, as they are, in effect, trying to impress a robot without knowing the rules of the game.

Winning the system

So, how can job-seekers beat the machine and move their application forward? The first step is to understand how AI tools screen résumés. Many ATS search for keywords in résumés that match the job description. If your résumé does not contain the right words, or is written in a way the AI tools cannot read, you may be rejected instantaneously. This means it is essential to use keywords from the job advertisement, use simple formatting (no tables, columns, or graphics), and customise your résumé for each application.

Next, for AI-powered video interviews, just as with human face-to-face interviews, preparation is of utmost importance. These systems may rate you on confidence, eye contact, clarity, and even enthusiasm. Some tips for interviewees: practise speaking in front of a camera and review the recordings, and speak in front of friends to get honest feedback for improvement. Remember to stay calm and keep your answers clear and concise so that the AI tools can pick up the keywords easily. Show natural body language and smile, as these highly sophisticated AI tools are trained to interpret your emotions with some degree of accuracy. Therefore, be your best self, but stay authentic. After all, AI tools are predominantly just the first screening step, the gatekeeper if you like. The basic principles of showing interest, confidence, clarity and passion for the job you are interviewing for are still essential, even to machines.

Challenges of AI in hiring

Having sung the praises of AI tools, the reality is they are not perfect. Using AI tools in hiring has both benefits and challenges. On the positive side, it can help reduce human bias. Theoretically, an AI system is not concerned about your name, gender, social status, age or where you graduated from. This could help level the playing field, especially for candidates from less well-known or disadvantaged backgrounds. However, there are growing concerns about fairness and transparency in using AI tools.
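The keyword screening described under 'Winning the system' can be approximated in a few lines. This is a deliberately simplified, hypothetical sketch (real ATS products use far richer parsing and ranking), but it shows why mirroring the exact terminology of the job advertisement matters:

```python
import re

# Hypothetical sketch of ATS-style keyword screening (not any real product's
# algorithm): score a résumé by the fraction of job-advert keywords it contains.


def keyword_score(resume_text: str, keywords: list[str]) -> float:
    # Tokenise crudely on letters (plus a few symbols common in skill names),
    # so punctuation such as ';' does not stop "SQL;" from yielding "sql".
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    hits = [kw for kw in keywords if kw.lower() in words]
    return len(hits) / len(keywords)


job_keywords = ["Python", "SQL", "stakeholder", "agile"]
resume = "Experienced analyst skilled in Python and SQL; led agile delivery teams."

print(keyword_score(resume, job_keywords))  # 0.75; "stakeholder" is missing
```

A screener like this also explains the formatting advice above: text buried in tables, columns, or graphics may never reach the tokeniser at all, so the matching keywords are simply never seen.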
Algorithms can reflect the biases of their creators or unintentionally favour certain language patterns or personality types. They may favour certain speaking styles or penalise people with different accents or expressions. Some job-seekers are concerned that they may be rejected not because they lack the required knowledge, skills and abilities (KSA), but because the AI tools do not 'understand' them well. For job-seekers in non-English-speaking countries like Malaysia, this can be even more challenging. Although powerful AI tools have recently emerged from China, most were developed in Western countries using Western contexts. Hence, many AI tools may misinterpret accents, gestures, or even grammar in the Asian context and culture. In view of this, there is a need for more ethical and inclusive AI systems in hiring, especially at multinational companies. Despite these challenges, the use of AI tools in the recruitment and selection process is here to stay. Job-seekers must adapt by learning how AI tools work and preparing in earnest. The tips in this article can help turn the machine from an obstacle into an advantage.

Human intelligence matters

In a world where machines read your résumé and judge your video interviews, the best way to beat the machine is to stay one step ahead. Even though machines are part of the hiring process, most final decisions are still made by humans. Therefore, once you get past the initial AI screening, your ability to connect with real people will take centre stage. Bring your best self to the interviews, share your story, show your passion and speak with conviction. Never forget that you are a human being with unique traits no machine can ever beat: emotional intelligence that allows you to understand and respond to feelings, and creativity and critical thinking that drive innovation and problem-solving machines cannot replicate.

Capitalise on these precious talents, give your best, and lead the way forward!

* The opinions expressed in this article are the author's own and do not necessarily reflect the view of Swinburne University of Technology Sarawak Campus. Prof Fung is the head of the School of Business at Swinburne University of Technology Sarawak Campus, while Prof Chung is an Associate Professor in Human Resources Management in the same School.

GenAI in education: between promise and precaution

Express Tribune

27-05-2025



The writer is a Professor of Physics at the University of Karachi

Generative Artificial Intelligence (GenAI) is rapidly reshaping the landscape of education and research, demanding a thoughtful and urgent response from educators and policymakers. As a faculty member and a member of the Advanced Studies and Research Board at a public sector university, I have witnessed both the excitement and the uncertainty that AI tools like ChatGPT have generated within academic circles. While the potential of GenAI to enhance learning and scholarly productivity is undeniable, its unregulated and unchecked use poses significant risks to the core principles of academic integrity, critical thinking and equitable access to knowledge.

As Pakistan embraces digital transformation and positions itself within the global digital economy, AI literacy has emerged as a foundational competency. In an earlier op-ed published in these columns on October 5, 2024, entitled 'AI Education Revolution', I emphasised that AI literacy is not just a technical skill but a multidisciplinary competence involving ethical awareness, critical thinking and responsible engagement. That argument is now even more relevant. With tools like ChatGPT and DALL·E becoming commonplace, students must be equipped not only to use them effectively but to understand their societal and epistemological implications.

GenAI offers immense opportunities. It enables personalised learning, streamlines research, provides real-time feedback and enhances access to complex knowledge. For students in under-resourced areas, it can bridge educational gaps. For researchers, it reduces the cognitive burden of information overload. But with these capabilities comes the risk of over-reliance. The seamless generation of essays, analyses and even ideas without meaningful engagement undermines the very purpose of education: cultivating independent thought and inquiry.

One of the most pressing issues is the shift in how students perceive learning. Many now use AI tools as shortcuts, often without malintent, bypassing critical processes of reasoning and originality. This trend not only threatens academic rigour but fosters a culture of passive dependence, something forewarned in the context of AI misuse and unintentional plagiarism in academic settings. As discussed in the earlier op-ed, the absence of AI literacy can blur the lines between learning and copying, between thinking and prompting.

To address these risks, UNESCO's recent guidance on AI in education offers a valuable framework. Governments must legislate clear, enforceable policies around age-appropriate use, data protection and algorithmic transparency. Educational institutions must rigorously assess the pedagogical validity and ethical dimensions of AI tools before integrating them. But perhaps the most crucial intervention lies in embedding AI literacy directly into curricula across disciplines, as a horizontal skill akin to critical thinking or digital citizenship.

Hands-on engagement with GenAI is essential. Students must not only generate content but also critically evaluate it for bias, coherence and accuracy. To support this, assessments should evolve, emphasising oral presentations, collaborative projects and reflective analysis to promote authentic learning. Educators, too, must adapt through targeted training that enables them to guide students responsibly. Institutions should support this shift with updated pedagogical strategies and professional development programmes that integrate AI while preserving academic integrity.

Given AI's borderless nature, international cooperation is vital. UNESCO must continue leading efforts to establish shared ethical frameworks and best practices. Pakistan should actively engage in this global dialogue while strengthening local capacity through curriculum reform, infrastructure investment and academic-policy collaboration to ensure GenAI serves as a responsible and equitable tool for learning.

GenAI is not a passing phase; it is a structural shift. Whether it becomes a tool for democratising knowledge or a force that erodes educational values depends on how we act today. The future of education will not be determined by machines alone, but by the wisdom with which we choose to engage with them.
