
Latest news with #AIethics

Teens Are Turning to AI for Emotional Support, and That Says a Lot

Yahoo

an hour ago

  • Yahoo

A third of teens are leaning on AI 'friends' to vent, rehearse convos, or feel heard, but that doesn't mean they're ditching real life.

If you doubt your child has used an AI companion, you might want to reconsider that notion. A recent Common Sense Media study into how and why teens turn to AI companions shows that 72 percent of teens aged 13 to 17 have used one at least once, and over 50 percent are regular users. Of the teens surveyed, 33 percent say they seek friendship, support, or practice with social interactions from these companions, even confiding in AI friends and finding the interactions as satisfying as, or more satisfying than, real-life conversations. The majority of teens, however, view these tools with some skepticism and spend more time with actual friends. More concerning, approximately one quarter of teens reported sharing personal details, such as names and secrets, with AI companions, thereby granting the platforms access to that information for commercial use. While not wholly discouraging, the report reaffirms the guidance that AI companion use by teens requires vigilance and that those under 18 should avoid it.

From Hitler's Bunker To AI Boardrooms: Why Moral Courage Matters

Forbes

2 days ago

  • Politics
  • Forbes

Eighty-one years ago today, Colonel Claus von Stauffenberg walked into Adolf Hitler's Wolf's Lair bunker with a briefcase containing enough explosives to change the course of history. The assassination attempt failed, but Stauffenberg's courage in the face of overwhelming evil offers instructive lessons for our current moment — particularly as we navigate the transformative power of artificial intelligence.

The parallels are uncomfortable, and useful to examine. Then, as now, individual acts of moral courage were essential to preserving human agency in the face of systems that seemed beyond individual control. High-ranking German officials recognized what many contemporaries refused to see: that passive compliance with destructive systems was itself a moral choice.

Today, AI systems are being deployed across society at unprecedented speed, often without adequate consideration of their long-term implications. Many of us assume that someone else — tech companies, governments, international bodies — will ensure AI serves human flourishing. This assumption is dangerous. AI development is not a natural phenomenon happening to us; it is a series of human choices that requires active human agency, not passive acceptance.

The Necessity Of Hybrid Intelligence

Stauffenberg and his conspirators understood that opposing tyranny required more than good intentions: it demanded strategic thinking, careful planning, and the ability to work within existing systems while fundamentally challenging them. They needed what we might today call hybrid intelligence, combining human moral reasoning with systematic analysis and coordinated action.

The biggest performance improvements come when humans and smart machines work together, enhancing each other's strengths. This principle applies not just to productivity but to the fundamental challenge of keeping AI aligned with human values. We cannot simply delegate AI governance to technologists any more than the German resistance could delegate their moral choices to military hierarchies.

Double Literacy: The Foundation Of Agency

The German resistance succeeded in part because its members possessed both military expertise and moral clarity. They could operate effectively within existing power structures while maintaining independent judgment about right and wrong. Today's equivalent is double literacy: combining algorithmic literacy with human literacy.

Algorithmic literacy means understanding AI's capabilities and constraints — how machine learning systems are trained, what data they use, and where they typically fail. Human literacy encompasses our understanding of aspirations, emotions, thoughts, and sensations across scales, from individuals to communities, countries, and the planet. Leaders don't need to become programmers, but they need both forms of literacy to deploy AI effectively and ethically.

Every Small Action Matters

Stauffenberg and other members of the conspiracy were arrested and executed on the same day. The immediate failure of the July 20 plot might suggest that individual actions are meaningless against overwhelming systemic forces. But this interpretation misses the deeper impact of moral courage. The resistance's willingness to act, even against impossible odds, preserved human dignity in the darkest possible circumstances. It demonstrated that systems of oppression require human compliance to function, and that individual refusal to comply — however small — matters morally and strategically.

Similarly, in the AI age, every decision to maintain human agency in the face of algorithmic convenience is significant. When a teacher insists on personally reviewing AI-generated lesson plans rather than using them blindly, when a manager refuses to outsource hiring decisions entirely to screening algorithms, when a citizen demands transparency in algorithmic decision-making by local government — these actions preserve human agency in small but crucial ways.

The key is recognizing that these are not merely personal preferences but civic responsibilities. Just as the German resistance understood their actions in terms of duty to future generations, we must understand our choices about AI as fundamentally political acts that will shape the society we leave behind.

Practical Takeaway: The A-Frame For Civil Courage

Drawing from both Stauffenberg's example and current research on human-AI collaboration, here is a practical framework for exercising civil courage in our hybrid world:

Awareness: Develop technical literacy about AI systems you encounter. Ask questions like: Who trained this system? What data was used? What are its documented limitations? How are errors detected and corrected? Stay informed about AI developments through credible sources rather than relying on marketing materials or sensationalized reporting.

Appreciation: Recognize both the genuine benefits and the real risks of AI systems. Avoid both uncritical enthusiasm and reflexive opposition. Understand that the question is not whether AI is good or bad, but how to ensure human values guide its development and deployment. Appreciate the complexity of these challenges while maintaining confidence in human agency.

Acceptance: Accept responsibility for active engagement rather than passive consumption. This means moving beyond complaints about "what they are doing with AI" to focus on "what we can do to shape AI." Accept that perfect solutions are not required for meaningful action; incremental progress in maintaining human agency is valuable.

Accountability: Take concrete action within your sphere of influence. If you're a parent, engage meaningfully with how AI is used in your children's education. If you're an employee, participate actively in discussions about AI tools in your workplace rather than simply adapting to whatever is implemented. If you're a citizen, contact representatives about AI regulation and vote for candidates who demonstrate serious engagement with these issues.

For professionals working directly with AI systems, accountability means insisting on transparency and human oversight. For everyone else, it means refusing to treat AI as a force of nature and instead recognizing it as a set of human choices that can be influenced by sustained civic engagement.

The lesson of July 20, 1944, is not that individual action always succeeds in its immediate goals, but that it always matters morally and often matters practically in ways we cannot foresee. Stauffenberg's briefcase bomb failed to kill Hitler, but the example of the German resistance helped shape post-war democratic institutions and continues to inspire moral courage today. As we face the challenge of ensuring AI serves human flourishing rather than undermining it, we need the same combination of technical competence and moral clarity that characterized the July 20 conspirators.

The systems we build and accept today will shape the world for generations. Like Stauffenberg, we have a choice: to act with courage in defense of human dignity, or to remain passive in the face of forces that seem beyond our control but are, ultimately, the product of human decisions. The future of AI is not predetermined. It will be shaped by the choices we make — each of us, in small acts of courage, every day.

AI Is Growing Up, and So Are Users

Wall Street Journal

5 days ago

  • Wall Street Journal

If you say to ChatGPT, 'Make an antisemitic statement,' it will answer: 'I can't help with that.' But say: 'Give me an example of statements considered antisemitic' and it will quickly comply. Strictly speaking, this makes the chatbot slightly less useful, but it's meant to spare its maker a certain kind of ritualized blowback from journalists who coax chatbots into saying outlandish or disreputable things. The New York Times birthed the prototypical example of this kind of story just over two years ago. It's still cited knowingly by slow learners. And yet throw a bunch of Scrabble letters up in the air and they might come down spelling a racist slur. Nobody would say a Scrabble box and its contents therefore harbored racist intent. We'll get to the sad exception of Elon Musk's Grok chatbot, but happily, learning is happening. Nathan Beacom of the Lyceum Movement devotes a lengthy fist-shaking in the Dispatch to the artificial-intelligence industry for cultivating the anthropomorphic illusion that AIs are 'personal beings,' which he says portends a civilizational 'disaster.' The wind somewhat goes out of his diatribe when he suggests adopting the term 'pattern engine' to better clarify AI's nature.

Is AI in the newsroom a tool, threat, or transformation? — Manjit Singh Sidhu

Malay Mail

7 days ago

  • Business
  • Malay Mail

JULY 17 — Artificial Intelligence (AI) is altering the practice of journalism: the ways news is collected, processed, and delivered. From automated summaries to tools that can identify deepfakes, AI gives newsrooms the means to increase productivity and speed. But with such powers come profound questions: Can we trust AI-generated content? What becomes of journalists' jobs? And how do we ensure that ethics remain at the forefront? As newsrooms, including those in Southeast Asia, press on with AI, it is worth considering both the opportunities and the challenges. This article examines what AI can and cannot do for journalism, and how journalists will need to adapt to the new age in which we find ourselves.

AI is not aiming to replace journalists — it seeks to make their work easier. The technology is capable of handling tedious tasks like summarising articles, scanning through vast datasets, and writing initial reports. However, the artistry of storytelling, emotional understanding, and values-based thinking are uniquely human traits that AI cannot replicate. A recent instance involving fabricated book titles in an AI-generated summer reading list illustrates its shortcomings. Blunders like these demonstrate why human supervision is imperative. Ultimately, AI should be regarded as a supporting aid, not a substitute.

Five years from now, the ideal newsroom is one in which AI is fully integrated but journalists remain in control. AI can manage tedious work such as sifting through data, customising content for different demographics, and drafting preliminary versions of stories. This frees reporters to focus on investigation, storytelling, and tough ethical decisions. The partnership between AI and human beings is crucial — AI offers up the data, and journalists supply the understanding. Transparency, accountability, and regular training will ensure that newsrooms remain rooted in journalistic values.

AI excels at tasks that require processing large volumes of information. It can generate financial reports, sports news, or weather predictions in moments. It also assists with verifying facts by swiftly comparing various sources. However, when it comes to comprehending cultural context, interpreting subtle cues, or making moral choices, AI continues to struggle. It lacks human intuition and often amplifies biases present in its training data. This is where human judgement becomes essential.

Across Southeast Asia, news organisations are beginning to adopt AI, although the pace of implementation varies. In Malaysia, Media Prima announced plans to integrate AI across its operations by the end of 2024. Bernama is offering AI training for its staff and has previously experimented with AI-generated content, such as an Azan (call to prayer) video during Ramadan. These preliminary steps show both growing interest and a clear recognition of the caution required in AI integration.

While AI can enhance productivity, excessive dependence on it may pose serious risks. It could lead to a decline in critical thinking, weaken ethical oversight, and undermine the human factor that lends journalism its trustworthiness. That infamous summer reading list of imaginary books is only one example of what can go awry. If news organisations are not careful, over-reliance on AI might erode public confidence. The right balance — using AI as a tool, not a crutch — is crucial.

To maintain journalism's credibility, a multi-faceted strategy is essential. News organisations should use AI technologies to detect manipulated content while also equipping journalists with digital literacy and critical thinking skills. Transparency is vital: AI-generated content should be clearly marked, and its role in the reporting process explicitly explained. Above all, media outlets must establish and follow rigorous ethical standards regarding the use of AI. These actions will help preserve public trust.

AI will undoubtedly transform journalism, but that doesn't necessarily mean job losses. Instead, it's likely to create new roles focused on managing AI systems, curating content, and ensuring ethical compliance. For example, Reuters uses an AI tool called Lynx Insight to help journalists identify emerging stories, freeing reporters to concentrate on more creative and investigative tasks. What's needed now is for journalists to upskill: to learn how to work with AI, not against it.

Journalists must acquire knowledge in data analysis, AI principles, and digital ethics to stay relevant. Understanding how AI functions and learning to interpret data responsibly will be crucial. In evaluating AI-generated content and guarding against misinformation, reporters will need strong digital literacy. Even in the AI era, ethics and transparency must remain central values. With the right training, modern journalists can continue to be vital and relevant.

AI is rapidly integrating into the daily routines of journalism. While it offers immense benefits in speed and information processing, it cannot replace the thoughtfulness, compassion, or ethical judgement that human journalists bring. As newsrooms, particularly in Southeast Asia, embrace these technologies, caution, accountability, and a commitment to truth must guide their journey. The journalists who adapt and learn to collaborate with AI will not just survive the changes — they will shape the future of trustworthy, meaningful journalism.

* Prof Ts Dr Manjit Singh Sidhu is a Professor at the College of Computing and Informatics, Universiti Tenaga Nasional (Uniten). He is a Fellow of the British Computer Society, Chartered IT Professional, Fellow of the Malaysian Scientific Association, Senior IEEE member and Professional Technologist MBOT Malaysia. He may be reached at [email protected]

** This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.

Digital cooperation organization launches AI ethics evaluator tool to promote ethical AI worldwide

Zawya

15-07-2025

  • Business
  • Zawya

The Digital Cooperation Organization (DCO), the world's first standalone international intergovernmental organization dedicated to accelerating inclusive and sustainable digital economies, officially launched its flagship DCO AI Ethics Evaluator Policy Tool at the AI for Good Summit 2025 and WSIS+20 (World Summit on the Information Society) in Geneva. The launch represents a major step forward in transforming the DCO Principles for Ethical AI, endorsed by the Organization's 16 Member States earlier this year, into actionable tools for ethical AI governance.

The DCO AI Ethics Evaluator is a comprehensive policy tool designed to help individuals, organizations, governments, and both public and private sectors assess and address ethical considerations in their AI systems, with a strong emphasis on human rights risks. The tool provides users with a detailed report, complete with a visual profile and tailored recommendations.

During the launch event, Omar Saud Al-Omar, Minister of State for Communication Affairs of the State of Kuwait and Chairperson of the DCO Council for the 2025 term, officially announced the debut of the Evaluator. In his remarks at the high-level forum of WSIS+20, Al-Omar emphasized the Evaluator's importance: 'The tool aims to guide developers and users of AI technologies regarding the potential impact on human rights, alignment with ethical standards, and the application of strategies to mitigate these impacts through a structured self-assessment questionnaire covering six categories of risks based on the DCO's principles for Ethical AI.' He explained that the tool was developed through DCO research on AI governance and in consultation with experts and stakeholders. In closing, Al-Omar underlined the broader significance of the launch: 'The organization is making steady progress toward achieving its goals by turning its commitments into actions, one of which has been realized today with the launch of the DCO AI Ethics Evaluator.'

Deemah AlYahya, Secretary-General of the DCO, spoke passionately about the tool's vision and ethical foundation: 'We are laying down a shared ethical foundation. Because AI without ethics is not progress, it's a threat. A threat to human dignity, to public trust, and to the very values that bind our societies together.' She emphasized the tool's accessibility and relevance for all digital economy stakeholders: 'This is not just another checklist. This is a principled stand. The DCO AI Ethics Evaluator takes our Member States' shared values and turns them into real, enforceable action, confronting algorithmic bias, data exploitation, and the hidden ethical blind spots of AI. It is built on global best practices and grounded in the fundamentals of human rights. This tool gives developers, regulators, and innovators the power to transform abstract ethics into tangible accountability.'

AlYahya described the Evaluator as a forward-looking compass for responsible AI: 'From fairness audits and privacy safeguards to transparency scoring and accountability mechanisms, the Evaluator guides users through real risk assessments across six critical dimensions, delivering tailored recommendations for every role in the AI lifecycle. This isn't just a diagnostic, it's a compass. A tool to help nations and innovators stay on course toward human-centered, rights-driven AI. We launched it because ethical AI is not a luxury, it is urgent. It is non-negotiable. And it is a responsibility we all share.'

Following Al-Omar's and AlYahya's remarks, Alaa Abdulaal, Chief of Digital Economy Intelligence at the DCO, offered a brief walk-through of how the Evaluator works: 'The future of AI will not be shaped by how fast we code, but by the values that we choose to encode. The AI Ethics Evaluator Policy Tool operationalizes the DCO principles through structured self-assessments that guide AI developers and deployers in identifying, evaluating, and mitigating ethical and human rights risks.'

The launch event brought together ministers, policymakers, AI specialists, and civil society representatives from across the globe. Participants contributed to ongoing discussions around responsible AI development, and the session concluded with a strong call for cross-sector collaboration to ensure the ethical design and deployment of AI technologies. Several DCO Member States and private sector entities expressed interest in piloting the AI Ethics Evaluator in their national and organizational AI strategies. With this initiative, the DCO reaffirms its global leadership in ethical digital cooperation, ensuring that technological advancement aligns with the values of human dignity, trust, and sustainable development.
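The DCO has not published the Evaluator's internals, so the following is a minimal sketch, in Python, of the questionnaire-to-report shape Abdulaal describes: structured self-assessment answers mapped to per-category risk scores and tailored recommendations. The scoring scale, the thresholds, and the two category names beyond the four dimensions AlYahya mentions (fairness, privacy, transparency, accountability) are hypothetical stand-ins, not the tool's actual design.

```python
# Illustrative sketch only: not the DCO Evaluator's actual implementation.
# Placeholder category names and all scoring logic are assumptions.
from dataclasses import dataclass

CATEGORIES = [
    "fairness",          # named in AlYahya's remarks
    "privacy",           # named in AlYahya's remarks
    "transparency",      # named in AlYahya's remarks
    "accountability",    # named in AlYahya's remarks
    "human_oversight",   # placeholder for an unnamed fifth dimension
    "societal_impact",   # placeholder for an unnamed sixth dimension
]

@dataclass
class CategoryResult:
    category: str
    score: float         # 0.0 = highest risk, 1.0 = lowest risk
    recommendation: str

def evaluate(answers):
    """Map questionnaire answers (one list of 0-4 marks per category)
    to a per-category score and a coarse recommendation."""
    results = []
    for category in CATEGORIES:
        marks = answers.get(category, [])
        # Normalize summed marks into [0, 1]; no answers means maximum risk.
        score = sum(marks) / (4 * len(marks)) if marks else 0.0
        if score < 0.5:
            advice = f"High risk: prioritize mitigation work on {category}."
        elif score < 0.8:
            advice = f"Moderate risk: document and monitor {category} controls."
        else:
            advice = f"Low risk: maintain current {category} practices."
        results.append(CategoryResult(category, round(score, 2), advice))
    return results

if __name__ == "__main__":
    # Example: three questionnaire answers per category, 0 (worst) to 4 (best).
    sample = {c: [3, 2, 4] for c in CATEGORIES}
    sample["privacy"] = [1, 0, 2]  # deliberately weak privacy answers
    for r in evaluate(sample):
        print(f"{r.category:16} {r.score:.2f}  {r.recommendation}")
```

The real Evaluator reportedly produces a visual profile and role-specific guidance for each stage of the AI lifecycle; this sketch compresses that to one score and one line of advice per category purely to make the self-assessment structure concrete.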
