
Latest news with #ChatGPT3.5

Musk's xAI blames rogue tampering for ‘white genocide' glitches

Toronto Sun

16-05-2025


Musk's xAI blames rogue tampering for ‘white genocide' glitches

Published May 16, 2025 • 2 minute read

[Photo: The Grok logo on a smartphone arranged in New York, US, on Wednesday, Nov. 8, 2023. Elon Musk revealed his own artificial intelligence bot, dubbed Grok, claiming the prototype is already superior to ChatGPT 3.5 across several benchmarks. Photo by Gabby Jones / Bloomberg]

(Bloomberg) — Elon Musk's artificial intelligence chatbot Grok blamed unsanctioned changes to its system for responses this week that included controversial theories about 'white genocide' in South Africa.
Grok, the AI bot from Musk's xAI, has investigated and reversed the 'unauthorized modification' to its technology, which led to responses that 'violated xAI's internal policies and core values,' it said in a posting Thursday. 'Our existing code review process for prompt changes was circumvented in this incident,' xAI said on its own platform. 'We will put in place additional checks and measures to ensure that xAI employees can't modify the prompt without review.'

The responses raised concerns about a lack of oversight and control of AI chatbots such as Grok. The bot this week answered a series of social media posts about enterprise software, baseball salaries and puppies by explaining why claims of 'white genocide' in South Africa are 'highly debated.'

'Switching enterprise software is hard, like swapping your favourite LEGO castle for wooden blocks,' Grok replied to one user on the X social media platform earlier this week, before abruptly shifting topics a few sentences later. 'I'm unsure about the South Africa claims, as evidence is conflicting. Courts and analysts deny 'white genocide,' but some groups insist it's real.'

As AI chatbots become more ubiquitous around the world, there is increasing concern about the potential for them to be manipulated to propagate harmful and misleading narratives. Even small tweaks within an AI program can result in unpredictable, even rogue, behavior by the bots.
Musk, who grew up in South Africa, has in the past promoted the false conspiracy theory that there is a deliberate plot to cause the extinction of white people in the country. Recently, the US granted refugee status to white South Africans, as US President Donald Trump claimed, without evidence, that this group has been the victim of a 'genocide.'

As part of the measures to prevent such incidents, Grok's system prompts will be published on GitHub for the public to review and give feedback on, xAI said in its post. The company said it will put in place a 24/7 monitoring team 'to respond to incidents with Grok's answers that are not caught by automated systems.'

Opinion - To short-circuit the higher education AI apocalypse, we must embrace generative AI

Yahoo

26-01-2025


Opinion - To short-circuit the higher education AI apocalypse, we must embrace generative AI

The rapid improvement of generative AI tools has led many of my peers to proclaim that higher education as we know it has come to a crashing and shocking end. I agree. In my large-enrollment general education course at the University of Florida, I can no longer assign an essay asking students to state their views on genetic engineering and assume the responses I receive are written by humans. So the critical question we must ask as academics is, 'What do we do now?'

Rather than try to create assignments that AI cannot tackle, I propose we develop assignments that embrace AI text generation. We don't want to ignore the 54 percent of students who use AI at least weekly in their course assignments, according to the Digital Education Council. We don't want to ban AI. And even when we, as educators, try to trick AI tools, newer versions of ChatGPT come along to thwart that strategy.

With this in mind, I modified the final assignment in my course to require that students submit an entirely AI-generated first draft, which they then modify to reflect their own perspectives. In the first couple of semesters using this strategy, students color-coded the text to mark which parts were human-generated and which were AI-generated. This allowed students to use AI and to reflect on how they would use it in the future. Tracking the origin of text was further streamlined by the recently released 'Authorship' tool from Grammarly, which attributes text as 'typed by a human' or 'copied from a source/AI-generated.'

Advancements in technology have upended the careful development of assessments in higher education before and will continue to do so, even if AI appears to be an all-encompassing, do-everything tool. Those of us born in the 1970s remember a time before the ever-present calculator, when math teachers could assign long-division problems without worrying that students who arrived at the correct answer did not understand the method required to produce it. More recently, language translation, a key learning tool in language acquisition, was upended over a few days in 2016 by the release of a new version of Google Translate. Its rapid improvement parallels how ChatGPT 3.5 burst into the consciousness of a large portion of the population in November 2022. In both cases, educators eventually embraced these new tools to improve student learning outcomes.

While requiring a generative AI first draft of an assignment is not a model that will work in all situations, 'showing the work' and student reflection can play key roles in student assessment. My twin high school seniors possess graphing calculators that are more powerful than the computer on which I wrote my dissertation, so I have observed firsthand how educators have modified assessments to adjust for such changes, emphasizing the process needed to complete the assignment more than the final answer. Language teachers, for example, have pivoted to incorporate student reflections on why one word was chosen over another. In my course, Part B of the final assignment requires students to reflect on how well (or poorly) the initial AI draft reflected their views on the assigned topic.

I acknowledge that students with access to AI during the reflection portion of an assignment could use the tool to show how they produced the 'work.' Tools that track AI usage, like Grammarly's 'Authorship,' hold promise for giving both instructors and students information on where and how much AI-generated text was used in an assignment.

The capability of AI to generate text (and images) will keep advancing, becoming increasingly integrated into the daily lives of both us and our students. It will be capable of responding to the most imaginative essay prompts educators can design. By shifting the focus of assignments from pure content creation to critical engagement, analysis and editing, we will teach our students how to think creatively, collaborate and communicate their ideas effectively and responsibly. These are the same skills they need to master to work successfully in teams and communicate efficiently in their future careers.

Brian Harfe, Ph.D., is a professor in the College of Medicine and associate provost at the University of Florida. He runs 14 international exchange programs and a study abroad program while teaching about 450 students each semester.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

