
Latest news with #YoshuaBengio

Do You Know Any AI Vegan? What Is It? Is It Even Possible? The Concept Explained

News18 • 16 hours ago

AI veganism means abstaining from using AI systems due to ethical, environmental, or wellness concerns, or avoiding harm to AI systems, especially if they might one day be sentient.

Even as the world goes gaga over artificial intelligence (AI) and how it could change the way the world and jobs function, there are some who are refraining from using it. They are the AI vegans. Why is that? What are their reasons? AI veganism explained.

What is AI veganism?

The term refers to applying the principles of veganism to AI: either abstaining from using AI systems due to ethical, environmental, or personal wellness concerns, or avoiding harm to AI systems, especially if they might one day be sentient. Some view AI use as potentially exploitative, paralleling the harm done to animals through farming.

Is AI so bad that we need to abstain from it? Here's what studies show:

  • A 2024 Pew study showed that a fourth of K-12 teachers in the US thought AI was doing more harm than good.
  • A Harvard study from May found that generative AI, while increasing workers' productivity, diminished their motivation and increased their levels of boredom.
  • A Microsoft Research study found that people who were more confident in using generative AI showed diminished critical thinking.
  • Time reports growing concern over a phenomenon labeled AI psychosis, in which prolonged interaction with chatbots can trigger or worsen delusions in vulnerable individuals, especially those with preexisting mental health conditions.
  • A study by the Center for Countering Digital Hate found that ChatGPT frequently bypassed its safeguards, offering harmful, personalized advice, such as suicide notes or instructions for substance misuse, to simulated 13-year-old users in over half of monitored interactions.
  • Research at MIT revealed that students using LLMs like ChatGPT to write essays demonstrated weaker brain connectivity, lower linguistic quality, and poorer retention than peers relying on their own thinking.
  • A study from Anthropic and Truthful AI found that AI models can covertly transmit harmful behaviors to other AIs using hidden signals; these transmissions bypass human detection and challenge conventional safety methods.
  • A global report chaired by Yoshua Bengio outlines key threats from general-purpose AI, including job losses, terrorism facilitation, uncontrolled systems, and deepfake misuse, and calls for urgent policy attention.
  • AI contributes substantially to global electricity and water use, and could add up to 5 million metric tons of e-waste by 2030, perhaps accounting for 12% of global e-waste volume.
  • Studies estimate AI may demand 4.1–6.6 billion cubic meters of water annually by 2027, comparable to the UK's total usage, while exposing deeper inequities in AI's extraction and pollution impacts.
  • A BMJ Global Health review argues that AI could inflict harm through increased manipulation and control, weaponization, and labour obsolescence, and, at the extreme, could pose existential risks if self-improving AGI develops unchecked.

What is the basis of the concept?

  • Ethical concerns: Many AI models are trained on creative work (art, writing, music) without consent from the original creators. Critics argue this is intellectual theft or unpaid labour.
  • Potential future AI sentience: Some fear that sentient AI might eventually emerge, and that using it today could normalise treating it as a tool rather than a being with rights.
  • Environmental impact: AI systems, especially large language models, consume massive resources that contribute to carbon emissions and water scarcity.
  • Cognitive and psychological health: Some believe overuse of AI weakens our ability to think, write, or create independently. The concern is about mental laziness or 'outsourcing' thought.
  • Digital overwhelm: AI makes everything faster and more accessible, sometimes too fast, leading to burnout, distraction, or dopamine addiction.
  • Social and cultural disruption: AI threatens job markets, especially in creative fields, programming, and customer service.

Why might remaining an AI vegan be tough?

  • AI is deeply embedded in many systems, from communication to healthcare, making total abstinence unrealistic for most.
  • Current AI lacks consciousness, so overlaying moral concerns meant for animals onto machines may distract from real human and animal rights issues.
  • Potential overreach: Prioritising hypothetical sentient AI ethics could divert attention from pressing societal challenges.

With Agency Inputs

Location: New Delhi, India | First Published: August 10, 2025, 18:08 IST

Government of Canada partners with United Kingdom to invest in groundbreaking AI alignment research

Cision Canada • Business • 30-07-2025

OTTAWA, ON, July 30, 2025 /CNW/ - Investing in artificial intelligence (AI) is key to unlocking Canada's prosperity, resiliency and security, as well as strengthening the country's leadership. The Government of Canada is committed to scaling up Canada's AI ecosystem, building AI infrastructure, increasing the adoption of AI systems and strengthening trust. In doing so, it is essential to develop AI in a safe and responsible manner so that it benefits all Canadians.

Today, the Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario, announced that the Canadian AI Safety Institute, through the Canadian Institute for Advanced Research, will contribute $1 million to the UK AI Security Institute's Alignment Project, a cutting-edge initiative to advance research on AI alignment. This critical field is focused on making advanced AI systems operate in a reliable and beneficial way, without unintended or harmful actions.

The Alignment Project is backed by a CAN$29 million (£15.9 million) investment from an international coalition that includes Schmidt Sciences, Amazon Web Services, Halcyon Futures, the Safe AI Fund and the Advanced Research and Invention Agency. The project will support pioneering work to keep advanced systems safe by maintaining transparency, predictability and responsiveness to human oversight. Through its collaborative approach, the project will remove key barriers that have previously limited alignment research by offering three distinct support streams:

  • Grant funding: Up to $1.8 million for researchers across disciplines, including computer science and cognitive science
  • Compute access: Dedicated compute resources, enabling technical experiments beyond typical academic reach
  • Venture capital: Investment from private funders to accelerate commercial alignment solutions

The project will be guided by a world-class advisory board that includes Canadian AI expert Yoshua Bengio as well as Zico Kolter, Shafi Goldwasser and Andrea Lincoln. This partnership will allow Canada to navigate this pivotal period of rapid technological advancement alongside profound geopolitical shifts and to position our country and its partners for success.

Quotes

"We are at a hinge moment in the story of AI, where our choices today will shape Canada's economic future and influence the global trajectory of this technology. By investing strategically in scale, infrastructure and adoption, we're not just fuelling opportunity for Canadians, we're making sure progress is matched by purpose and responsibility. That's why this partnership, uniting the Canadian AI Safety Institute and the Canadian Institute for Advanced Research with the UK AI Security Institute, matters. Together, we're advancing cutting-edge research to ensure (the) next generation of AI systems are not only powerful but also reliable, serving societies here at home and around the world."
– The Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario

"CIFAR is proud to partner in this vital international effort to ensure advanced AI systems remain aligned with human values. CIFAR's mandate is to convene the world's top researchers to address the most pressing challenges facing humanity, and few challenges are more urgent than ensuring AI is safe, predictable and beneficial for all. Through our leadership of the Canadian AI Safety Institute Research Program at CIFAR, we are advancing foundational research that will help safeguard the transformative potential of AI while protecting the public interest."
– Elissa Strome, Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Quick facts

  • Established in November 2024, the Canadian AI Safety Institute (CAISI) seeks to advance scientific understanding of the risks associated with advanced AI systems, develop measures to reduce those risks and build trust to foster AI innovation.
  • CAISI is partnering with counterparts around the world, including the UK AI Security Institute, to advance common understandings of and responses to safety risks.
  • CAISI leverages the robust Canadian AI research ecosystem and advances AI safety research through two research streams: investigator-led research via the Canadian Institute for Advanced Research (CIFAR) and government-directed projects led by the National Research Council of Canada.
  • CIFAR is a globally influential research organization based in Canada. It mobilizes experts from across disciplines and at various career stages to advance transformative knowledge and solve complex problems.
  • The Government of Canada has also launched other initiatives to support the safe and responsible development and deployment of AI systems and safe AI adoption across the Canadian economy, including the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems and a guide for managers on implementing the Code of Conduct.

Stay connected

Find more services and information on the Innovation, Science and Economic Development Canada website. For easy access to government programs for businesses, download the Canada Business app.

SOURCE Innovation, Science and Economic Development Canada

Contacts: Sofia Ouslis, Press Secretary, Office of the Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario, [email protected]; Media Relations, Innovation, Science and Economic Development Canada, [email protected]

Canada commits funding to joint AI safety effort with the U.K.

Calgary Herald • Business • 30-07-2025

Canada is collaborating with the U.K. on a new artificial intelligence safety initiative as the U.S. pushes its vision of a zero-sum AI race that will see it remove guardrails on the development and deployment of the burgeoning technology.

On Wednesday, Canada announced that it will commit $1 million to a $29 million joint AI safety effort with the U.K. that will bankroll research and commercial projects focused on keeping advanced AI systems in line, in other words, ensuring that they operate safely, reliably and usefully, without unintended or harmful actions.

'Together, we're advancing cutting-edge research to ensure (the) next generation of AI systems are not only powerful but also reliable, serving societies here at home and around the world,' said minister of AI and digital innovation Evan Solomon in a statement.

The initiative, called the AI Alignment Project, is spearheaded by the U.K.'s AI Security Institute (AISI) and will involve the Canadian AI Safety Institute (CAISI) under the umbrella of the Canadian Institute for Advanced Research (CIFAR), a research organization focused on innovation and deep tech. Other financial backers include enterprise partners such as Amazon Web Services, Inc. and venture firm the Safe AI Fund, and non-profits such as New York-headquartered Schmidt Sciences, LLC.

The project will provide grant funding for researchers across disciplines, including computer and cognitive science; help organizations and individuals access venture capital investment; and secure 'compute,' the computational power needed to train and run AI models. The call for proposals, which launches Wednesday, will be open until September.

Prominent AI experts, including Canadian-French computer scientist Yoshua Bengio, known as one of the 'godfathers of AI,' will serve on the advisory board to help steer the effort and to select the successful proposals by November.

Canada's participation in the U.K.-led AI safety effort fits into Ottawa's broader AI vision for Canada, one that involves building trust in, and adoption of, the technology, and gives Canadian researchers a seat at the table in global efforts on AI safety, according to Elissa Strome, executive director of pan-Canadian AI strategy at CIFAR. There are 'few challenges more urgent than ensuring AI is safe, predictable, and beneficial for all,' she said.

The launch of the new initiative takes place amid broader shifts in the conversation on AI. As the tech becomes an increasingly important feature of national security and economic competition, the U.S. has peddled a light-touch regulatory approach, while jurisdictions such as the EU have championed tougher-on-tech rules. Industry groups and AI safety scholars have also clashed over how AI should be regulated, developed and deployed.

Trump is undermining his own 'action plan' for AI, experts say

Los Angeles Times • Business • 24-07-2025

WASHINGTON — President Trump revealed an 'action plan' for artificial intelligence on Wednesday, ostensibly designed to bolster the United States in its race against China for AI superiority. But experts in the field warn the administration is sidestepping safety precautions that sustain public trust, and is ignoring the impacts of research funding cuts and visa restrictions for scientists that could hold America back.

Trump introduced the new policy with an address in Washington, a new government website and a slew of executive actions, easing restrictions on the export of AI technology overseas and greasing the wheels for infrastructure expansion that would accommodate the computing power required for an AI future, both top requests of American AI companies. The plan also calls for AI to be integrated more thoroughly across the federal government, including at the Pentagon, and includes a directive targeting 'woke' bias in large language models.

The new website says the United States 'is in a race to achieve global dominance in artificial intelligence,' and lays out three pillars of its plan for success: 'Accelerating Innovation, Building AI Infrastructure, and Leading International Diplomacy and Security.'

Scholars of machine learning and AI believe that whichever country loses the race, first toward artificial general intelligence (where AI has capabilities similar to the human mind) and ultimately toward superintelligence (where its abilities exceed human thought), will be unable to catch up with the exponential growth of the winner. Today, China and the United States are the only powers with competitive AI capabilities.

'Whether we like it or not, we're suddenly engaged in a fast-paced competition to build and define this groundbreaking technology that will determine so much about the future of civilization itself, because of the genius and creativity of Silicon Valley — and it is incredible, incredible genius, without question, the most brilliant place on Earth,' Trump said on Wednesday in his policy speech on AI.

'America is the country that started the AI race. And as president of the United States, I'm here today to declare that America is going to win it,' he added. 'We're going to work hard — we're going to win it. Because we will not allow any foreign nation to beat us. Our children will not live in a planet controlled by the algorithms of the adversary's advancing values.'

Yoshua Bengio, founder of the Mila-Quebec AI Institute and a winner of the Turing Award for his work on deep learning, told The Times that the urgency of the race is fueling concerning behavior from both sides. 'These technologies hold enormous economic potential,' Bengio said, 'but intense competition between countries or companies can create dangerous incentives to cut corners on safety in order to stay ahead.'

Silicon Valley may be getting much of what it wants from Trump, but the administration's continued assault on the student visa program remains a significant concern for the very same tech firms Trump aims to empower. Yolanda Gil, senior director of AI and data science initiatives at the USC Viterbi School of Engineering, said that the Trump administration's reductions in funding and visas 'will reduce U.S. competitiveness in AI and all technology areas, not just in the near future but for many years to come,' noting that almost 500,000 international students in science and engineering are currently enrolled in U.S. universities.

The majority of America's top AI companies have been founded by first- or second-generation immigrants, and 70% of full-time graduate students at U.S. institutions working in AI-related fields have come from abroad. Yet the administration's revocation of, and crackdown on, F-1 visas risks crippling the talent pipeline the industry views as essential to success against China.

Funding cuts to research institutions, too, threaten the stability of programs and their attractiveness to the best foreign minds, said Sheila Jasanoff, a professor of science and technology studies at the Harvard Kennedy School. 'Our openness to ideas and people, combined with steadiness of funding, drew bright talents from around the globe and science prospered,' Jasanoff said. 'That achievement is in a precarious state through the Trump administration's unpredictable and exclusionary policies that have created an atmosphere in which young scientists are much less comfortable coming to do their science in America.'

'Why would a talented young person wish to invest in a U.S. graduate program if there is a risk their visa could be canceled overnight on poorly articulated and unprecedented grounds? It's clear that other countries, including China, are already trying to benefit from our suddenly uncertain and chaotic research environment,' she added. 'We seem to be heading into an era of self-inflicted ignorance.'

Teddy Svoronos, also at Harvard as a senior lecturer in public policy, said that the president is deregulating the AI industry 'while limiting its ability to recruit the highest-quality talent from around the world and de-incentivizing research that lacks immediate commercial use.' 'His policies thus far convince me that the future of the U.S. will certainly have more AI,' Svoronos said, 'but I don't see a coherent strategy around creating more effective or more aligned AI.'

Aligned AI, in simple terms, refers to artificial intelligence that is trained to do good and avoid harm. Trump's action plan doesn't include the phrase, but it repeatedly emphasizes the need to align AI development with U.S. interests.

The deregulatory spirit of Trump's plan could help expedite AI development. But it could also backfire in unexpected ways, Jasanoff said. 'It's not clear that technology development prospers without guardrails that protect scientists and engineers against accidents, overreach and public backlash,' she added. 'The U.S. biotech industry, for example, has actively sought out ethical and policy clarification because missteps could endanger entire lines of research.'

The plan also has the United States encouraging the development of open-source and open-weight AI models, allowing public access to code and training data. That decision will allow AI to be adopted more readily throughout the U.S. economy, but it also grants malicious actors, such as terrorist organizations, access to AI tools they could use to threaten national security and global peace.

It is the sort of compromise that Bengio feared would emerge from the U.S.-China race. 'This dynamic poses serious public safety and national security risks, including AI-enabled cyberattacks, biological threats and the possibility of losing human control over advanced AI — outcomes with no winners,' Bengio said. 'To realize the full benefits of these technologies,' he added, 'safety and innovation must go hand in hand, supported by strong technical and societal safeguards.'

What are AI agents and which jobs are they coming for first?

Calgary Herald • Business • 09-07-2025

Some in the industry are already working toward establishing guardrails to advance the development of safe and ethical AI. Canadian-French computer scientist Yoshua Bengio, considered one of the 'godfathers of AI,' recently launched a Montreal-based non-profit called LawZero, focused on AI systems that will filter out certain traits like dishonesty. He aims to create a tool to de-risk AI agents and keep them in line. 'I'm deeply concerned by the behaviours that unrestrained agentic AI systems are already beginning to exhibit — especially tendencies toward self-preservation and deception,' Bengio wrote in a June 2025 blog post.

What is the timeline for rolling out agentic AI?

Agentic AI technology is nascent but developing rapidly, Filan said. 'I'm not even thinking about what it would look like 10 years from now. Most of the conversations occurring now are between a two-to-five-year period,' he said.

A growing number of startups are now developing AI agents customized for different professional and personal needs. Partners at Silicon Valley's fabled startup accelerator Y Combinator LLC recently said that they have been bombarded with a wide range of AI agent proposals in fields ranging from marketing to recruitment and debt collection.

Silicon Valley leaders have warned that job displacement is coming rapidly. Anthropic PBC chief executive Dario Amodei told Axios in May 2025 that he thought AI could eliminate half of entry-level white-collar jobs and push unemployment to 10 to 20 per cent within the next one to five years.

Others say agentic AI technology has a ways to go. A May 2025 Carnegie Mellon paper showed that Google LLC's Gemini 2.5 Pro, the top-performing AI agent, was unsuccessful 70 per cent of the time at completing real-world office tasks. Other agents created by tech giants like OpenAI and Meta Platforms Inc. had failure rates of over 90 per cent.

'Right now, we're seeing early glimpses: AI agents can already analyze data, predict trends, and automate workflows to some extent. But building AI agents that can autonomously handle complex decision-making will take more than just better algorithms. We'll need big leaps in contextual reasoning and testing for edge cases,' according to a March 2025 International Business Machines Corp. report titled AI Agents in 2025: Expectations vs. Reality.
