I'm a university lecturer concerned that students are using AI to cheat. It's made my workload skyrocket, and I've had to make drastic changes.
Students have always cheated.
I've been a lecturer for 18 years, and I've dealt with cheating throughout that time, but with AI tools becoming widely available in recent years, I've experienced a significant change.
There are definitely positive aspects to AI. It's much easier to get access to information and students can use these tools to improve their writing, spelling, and grammar, so there are fewer badly written essays.
However, I believe some of my students have been using AI to generate essay content that pulls information from the internet, instead of using material from my classes to complete their assignments.
AI is supposed to help us work efficiently, but my workload has skyrocketed because of it. I have to spend lots of time figuring out whether the work students are handing in was really written by them.
I've decided to take dramatic action, changing the way I assess students to encourage them to be more creative and rely less on AI. The world is changing, so universities can't stand still.
Cheating has become harder to detect because of AI
I've worked at SOAS University of London since 2012. My teaching focus is ecological economics.
Initially, my teaching style was exam-based, but I found that students were anxious about one-off exams, and their results wouldn't always correspond to their performance.
I eventually pivoted to a focus on essays. Students chose their topic and consolidated theories into an essay. It worked well — until AI came along.
Cheating used to be easier to spot. I'd maybe catch one or two students cheating by copying huge chunks of text from internet sources, leading to a plagiarism case. Even two or three years ago, detecting inappropriate AI use was easier due to signs like robotic writing styles.
Now, with more sophisticated AI technologies, it's harder to detect, and I believe the scale of cheating has increased.
I'll read 100 essays, and some of them will be strikingly similar, using identical case examples that I've never taught.
These examples are typically referenced on the internet, which makes me think the students are using an AI tool that is incorporating them. Some of the essays will cite 20 pieces of literature, but not a single one will be something from the reading list I set.
While students can use examples from internet sources in their work, I'm concerned that some students have just used AI to generate the essay content without reading or engaging with the original source.
I started using AI detection tools to assess work, but I'm aware this technology has limitations.
AI tools are easy to access for students who feel pressured by the amount of work they have to do. University fees are increasing, and a lot of students work part-time jobs, so it makes sense to me that they want to use these tools to complete work more quickly.
There's no obvious way to judge misconduct
During the first lecture of my module, I'll tell students they can use AI to check grammar or summarize the literature to better understand it, but they can't use it to generate responses to their assignments.
SOAS has guidance for AI use among students, which sets similar principles about not using AI to generate essays.
Over the past year, I've sat on an academic misconduct panel at the university, dealing with students who've been flagged for inappropriate AI use across departments.
I've seen students refer to these guidelines and say that they only used AI to support their learning and not to write their responses.
It can be hard to make decisions because you can't be 100% sure from reading the essay whether it's AI-generated or not. It's also hard to draw a line between cheating and using AI to support learning.
Next year, I'm going to dramatically change my assignment format
My colleagues and I speak about the negative and positive aspects of AI, and we're aware that we still have a lot to learn about the technology ourselves.
The university is encouraging lecturers to change their teaching and assessment practices. At the department level, we often discuss how to improve things.
I send my two young children to a school with an alternative, progressive education system, rather than a mainstream British state school. Seeing how my kids are educated has inspired me to try two alternative assessment methods this coming academic year. I had to go through a formal process with the university to get them approved.
First, I'll ask my students to choose a topic and produce a summary of what they learned about it in class. Second, they'll create a blog, translating the highly technical terms they've come to understand into a more accessible format.
My aim is to make sure the assignments are directly tied to what we've learned in class and make assessments more personal and creative.
The old assessment model, which involves memorizing facts and regurgitating them in exams, isn't useful anymore. ChatGPT can easily give you a beautiful summary of information like this. Instead, educators need to help students with soft skills, communication, and out-of-the-box thinking.
In a statement to BI, a SOAS spokesperson said students are guided to use AI in ways that "uphold academic integrity." They said the university encouraged students to pursue work that is harder for AI to replicate and have "robust mechanisms" in place for investigating AI misuse. "The use of AI is constantly evolving, and we are regularly reviewing and updating our policies to respond to these changes," the spokesperson added.
