
Latest news with #ImaginingtheDigitalFutureCenter

AI cheating surge pushes schools into chaos

Axios

26-05-2025


High schools and colleges are stuck in limbo: Use of generative AI to cut corners and cheat is rampant, but there's no clear consensus on how to fight back.

Why it matters: AI is here to stay, forcing educators to adapt. That means sussing out when students are using it — and avoiding the temptation of overusing it themselves.

"I have to be a teacher and an AI detector at the same time," says Stephen Cicirelli, an English professor at Saint Peter's University in Jersey City, N.J. "[Any assignment] that you take home and have time to play around with, there's going to be doubt hanging over it."

Cicirelli captured the zeitgeist with a viral post on X about how one of his students got caught submitting an AI-written paper — and apologized with an email that also appeared to be written by ChatGPT. "You're coming to me after to apologize and do the human thing and ask for grace," he says. "You're not even doing that yourself?"

By the numbers: Use is ubiquitous in college. A survey of college students taken in January 2023, just two months after ChatGPT's launch, found that some 90% had already used it on assignments, New York Magazine reports. And 1 in 4 13- to 17-year-olds say they use ChatGPT for help with schoolwork, per a recent Pew survey — double the share in 2023.

Driving the news: The proliferation of AI-assisted schoolwork is worrying academic leaders. 66% think generative AI will cut into students' attention spans, according to a survey of university presidents, chancellors, deans and more from the American Association of Colleges & Universities and Elon University's Imagining the Digital Future Center. 59% say cheating has increased on campus. 56% say their schools aren't ready to prepare students for the AI era.

"It's an undeniable and unavoidable disruption," says Lee Rainie, director of Elon's digital future center. "You can't avert your eyes."

One big snag: Teachers can't agree on what's acceptable in this new world. For example, 51% of higher education leaders say it's fine for a student to write a paper off a detailed outline generated by AI, while the rest say it's not or they don't know, per the AAC&U and Elon survey. Policies vary from classroom to classroom within the same school.

Plus, the rise of AI is causing unforeseen headaches. Teachers run assignments through detectors, which often don't get it right, either missing AI-generated work or mistakenly flagging original work as written by AI. Students who didn't use AI have had to appeal to their schools or submit proof of their process to avoid getting zeroes, The New York Times reports. Instructors are getting caught leaning on ChatGPT, too. One Northeastern senior demanded tuition reimbursement after discovering her professor had used AI to prep lecture notes and slides, according to The New York Times.

The other side: As much as they're struggling to wrangle AI use, many educators believe it has the potential to help students — and that schools should be teaching them how to use it. American University's business school is launching an AI institute for just that purpose.

"When 18-year-olds show up here as first-years, we ask them, 'How many of your high school teachers told you not to use AI?' And most of them raise their hand," David Marchick, the dean of American University's Kogod School of Business, told Axios' Megan Morrone. "We say, 'Here, you're using AI, starting today.'"

ChatGPT can serve as a real-time editor that refines students' writing, or speed up research so students can focus on organizing big ideas instead of gathering information, Jeanne Beatrix Law, an English professor at Kennesaw State University, writes in The Conversation.

"Don't block AI ... Instead, let's put in place some of the same safety and wellness protocols that it took us a decade to build for social media and web 1.0," says Tammy Wincup, CEO of Securly, a software company that builds safety tools for K-12 schools.

What to watch: "There is a gigantic question across academic institutions right now," Rainie tells Axios. "How do you assess mastery?" Cicirelli says he's asking students to draft their work in Google Docs so he can see the brainstorming and writing process.

Evolving Leadership In The Age Of AI: 5 Takeaways For 2035

Forbes

09-04-2025


New research from Elon University's Imagining the Digital Future Center has surveyed nearly 200 global technology experts on the future, as humans brace for the impact of artificial intelligence (AI). The result is a fantastical glimpse into a near future defined by a mixture of dystopia and utopia. What is clear is the scale of change on the horizon: 61% of the experts surveyed envisaged the impact of AI to be 'deep and meaningful' or 'fundamental and revolutionary.'

Report co-author Janna Anderson told me: 'It's a revealing and provocative declaration of the profound depth of change people are undergoing, mostly without noticing it at all, as we adapt to deeper uses of advancing digital technologies. A majority of the experts we have been surveying the past few years have also been calling on humanity to think intentionally and carefully about all of this and take wise collective action, so we don't sleepwalk into an AI future we never intended and do not want. Many say we are running out of time. It has to be now.'

Here are five major implications for evolving leadership from the report.

We are no longer static selves. We have avatars, proxies, and digital twins that manage our deadlines and perhaps even our deaths. As I note in my own contribution to the report, we will come to redefine ourselves as 'database selves': a constellation of algorithmically managed personas tailored to context, platform and audience. This will render the notion of authenticity obsolete. The increasingly mediated future we are heading towards contains the potential for context collapse, and identity collapse too. Until now, most experts have warned about digital deepfakes, but it is beginning to dawn on many that in an age of AI we will face an internal rather than an external crisis. Perhaps that is why 45% of the experts surveyed think this kind of AI co-evolution will have a more negative than positive effect on mental well-being by 2035.

Barry Chudakov, of Sertain Research, imagines a future in which schizophrenia becomes the natural state of most humans, with part of us online and part of us using AI to help self-promote, self-brand and self-improve. Strategist Neil Richardson suggests that our digital shadows, the sum total of our online expressions and biometric traces, may soon outlive us, creating some kind of posthumous identity that transcends mortality. Meanwhile, Silicon Valley-based technology forecaster Paul Saffo suggests that an AI-generated actor will win Best Supporting Actor at the 2035 Academy Awards.

Evelyne Tauchnitz, a senior research fellow at the Lucerne Graduate School of Ethics, explores the implications of this shift for human freedom and personal agency. Her concern is clear: AI recommendations, manipulations and algorithmic systems designed to nudge us toward what is considered 'normal' will create additional pressure to conform and will compromise our ability to choose differently, and freely. As AI continues to optimize every choice we make, from what we eat to whom we trust, she ponders whether it will even be possible to contradict oneself in the future, adding: 'Freedom is the very bedrock of moral capability. If AI directs our actions, shaping our behavior based on data-driven predictions of what is "best", we lose our moral agency.'

It's a haunting thought: in our pursuit of perfection, we may abolish the very imperfections that make us human, imperfections that, in this century, leaders have been encouraging us to embrace. Indeed, 44% of the respondents surveyed think that AI's effect on individual agency and the ability to act independently is likely to be more negative than positive, with only 16% predicting a fairly equal split between positive and negative change.

AI won't just inhabit our minds. Increasingly, it will replace our relationships. Nell Watson, president of EURAIO, the European Responsible Artificial Intelligence Office, suggests that 'AI romantic partners will provide idealized relationships that make human partnerships seem unnecessarily difficult'. Along the same theme, Henning Schulzrinne, former co-chair of the Internet Technical Committee of the IEEE, reflects that we will treat bots as 'training wheels or the equivalent of treadmills at the gym' for improving our social interactions.

Overall, the report paints a future where the most intimate corners of our lives are increasingly shaped by code. In this parasocial future, emotional attachments become programmable. We form bonds with digital personas that neither disappoint nor require compromise. The relationship is one-way; the affection is frictionless. After all, why struggle to understand a work colleague when your AI work partner is already optimized to agree with you? As Schulzrinne points out about personal life, online dating might hold its disappointments, but 'who will proudly look back on a 25-year marriage with a bot?' The same might be said of longtime, loyal working relationships with companies.

Not all the findings are dark. Some respondents see in this transformation a chance for personal growth. David Weinberger, from Harvard University's Berkman Klein Center for Internet & Society, envisions a future in which AI enables us to notice things that humans cannot. By lifting the cognitive burdens that limit human perception, he believes AI might expand rather than diminish our understanding and encourage humans to see the world differently. AI will be there to teach us about ourselves and inspire us to explore in new ways. Dave Edwards, co-founder of the Artificiality Institute, offers a complementary vision. He speaks of AI systems as 'minds for our minds', part of a distributed knowledge system that augments rather than replaces human judgment. The challenge, he warns, is to avoid commodifying intimacy and to reject technology companies that continue to mine our intimacy for profit.

The experts are much more positive about the way in which AI will improve and enhance human curiosity and our capacity to learn: 42% foresee more positive than negative change in this area, and only 5% see little or no change by 2035. AI is seen as an expansive tool for human learning and a kind of motivator to ask more questions, consider more options and generally expand, rather than diminish, human thought.

Finally, there is a warning about the implications of attempts to standardize information through machines. Professor of innovation Alf Rehn describes AI systems in 2035 as 'mediocrity engines', saying AI falls short when it comes to spark and wit, thus deadening creativity. It does seem that today's intelligent machines mostly produce acceptable, average outputs en masse, flattening the peaks and valleys of true human innovation. However, the future does not have to pan out like that. In fact, he offers a counter-image: alien-like AI he calls 'octopodes' that generate truly strange outcomes, not by mimicking humans, but by thinking differently. On this view, a brighter future lies in hybrid intelligences, not homogeneous outputs.

What becomes clear across the essays and survey results is that the core question of the next decade is not technical but existential. The ultimate fear is not that by 2035 we will be replaced, but that we will be utterly reshaped in AI's image, forgetting what we once were. Indeed, Paul Saffo predicts the best-selling book of 2035 will be called 'What Was Human?', written by AI and purchased by more AIs than actual human readers.

However, report co-author Lee Rainie told me: 'Our expert respondents gave us a major insight. When they identified creativity, curiosity and decision-making as three human traits that might be a positive benefit as AI systems evolve, they were highlighting the building blocks of leadership. Essentially, they told us that good leadership built on those traits might be humans' ultimate saving grace in the world to come.'

This suggests successful co-evolution with AI should be judged not by how well these systems serve us, but by how well they preserve our ability to grow and chart our own course as humans.

The author was one of the 200 global technology experts surveyed.

Majority of Americans have used AI models like ChatGPT: Survey

Yahoo

13-03-2025


A majority of Americans have used ChatGPT-like artificial intelligence (AI) models, according to a new survey. In the survey from Elon University's Imagining the Digital Future Center, 52 percent said they 'use artificial intelligence (AI) large language models,' a category that includes OpenAI's famous ChatGPT.

Out of that percentage, 5 percent said they use the models 'almost constantly,' 7 percent said they use them 'several times a day,' 5 percent said they use them 'about once a day,' 10 percent said they use them 'several times a week' and 25 percent said they use them 'less often.' Forty-seven percent said they use them 'not at all.'

'The rise of large language models has been historic. In less than two-and-a-half years, half the adults in America say they have used LLMs. Few, if any, communications and general technologies have seen this pace of growth across the entire population,' a report on the survey reads.

Despite Americans appearing to be more comfortable with AI, a recent poll found 55 percent disagree with the government using AI to make choices about eligibility for unemployment assistance, college tuition aid, research investments, food aid and small business loans.

Among 500 users of large language models surveyed in the Imagining the Digital Future Center survey, 52 percent said they use them 'for work activities.' Thirty-six percent said they use them 'for schoolwork and homework activities.'

The Imagining the Digital Future Center survey of the 500 users of large language models took place from Jan. 21 to 23 and has a margin of error of 5.1 percentage points. A wider group of 939 people, both users and non-users of large language models, has a margin of error of 3.2 percentage points. The Hill has reached out to the Imagining the Digital Future Center about the survey dates for the wider group.
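For readers wondering where those margin-of-error figures come from: they follow from the standard 95%-confidence formula, moe = z * sqrt(p(1-p)/n). A minimal sketch in Python is below; it reproduces the 3.2-point figure for the 939-person group exactly, while the 5.1-point figure for the 500-user group exceeds the simple formula's result, presumably because of a design effect from survey weighting (our assumption, not something the article states).

```python
from math import sqrt

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """95% margin of error, in percentage points, for a simple random sample.

    Uses the most conservative proportion p = 0.5; z = 1.96 is the
    95%-confidence normal quantile.
    """
    return 100 * z * sqrt(p * (1 - p) / n)

print(f"n = 939: +/- {margin_of_error(939):.1f} points")  # ~3.2, matches the report
print(f"n = 500: +/- {margin_of_error(500):.1f} points")  # ~4.4; the reported 5.1
# likely reflects a weighting/design effect (an assumption, not from the article)
```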


Over half of American adults have used an AI chatbot, survey finds

Yahoo

13-03-2025


Artificial intelligence technology is becoming increasingly integral to everyday life, with an Elon University survey finding that 52% of U.S. adults have used AI large language models like ChatGPT, Gemini, Claude and Copilot.

The survey, conducted in January by the Imagining the Digital Future Center at the university in North Carolina, found that 34% of its 500 respondents who had used AI said they use large language models (LLMs) at least once a day. Most popular was ChatGPT, with 72% of respondents reporting they have used it. Google's Gemini was second, at 50%.

It has become increasingly common for people to develop personal relationships with AI chatbots. The survey found that 38% of users said they believe LLMs will 'form deep relationships with humans,' and over half reported having had spoken conversations with chatbots. Around 9% of users said the main purpose they use the models for is 'social kinds of encounters like casual conversation and companionship.' Respondents said the models can express a variety of personality traits, including confidence, curiosity and even a sense of humor.

'These findings start to establish a baseline for the way humans and AI systems will evolve together in the coming years,' Lee Rainie, director of the Imagining the Digital Future Center, told NBC News in a statement. 'These tools are increasingly being integrated into daily life in sometimes quite intimate ways at the level of emotion and impact. It's clearly shaping up as the story of another chapter in human history.'

That is consistent with the survey's broader finding that 51% of respondents use LLMs for personal endeavors rather than work-related activities. When it comes to work, respondents reported using the models with apps such as Slack, PowerPoint and Zoom, and for tasks such as writing emails, researching ideas and summarizing documents. Over 50% of respondents said the models have helped them improve their productivity.

Many respondents reported anxieties about the technology: 63% thought the models could replace a significant amount of human-to-human communication, and 59% thought they could cause a significant number of job losses.

AI is growing more popular as President Donald Trump's administration pushes for increased investment in the technology. In January, Trump announced a joint venture among OpenAI (the company that developed ChatGPT), Oracle and SoftBank. The executives of the AI companies committed to invest $100 billion in the initiative up front and up to $500 billion over the next four years. Trump also signed an executive order about AI a few days after he entered office.

When it comes to politics, the survey found that Republican LLM users are more likely than Democrats to use the models to search for political news and information. Thirty-four percent of respondents reported using them to get news or political information. Vice President JD Vance has been particularly outspoken about bias in AI, telling world leaders at an AI summit in Paris that the 'Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens' right to free speech.' The survey found, however, that 49% of users believe the creators of the LLMs, rather than government officials and lawmakers, should be responsible for limiting bias in LLM answers.
As schools remain in limbo over how best to address AI in classrooms, the survey found that 36% of respondents use the models for schoolwork or homework. Use also skews young: 77% of adults ages 18 to 29 have used LLMs. Households with children under 18 were also more likely to use LLMs than those without.
