Latest news with #Quillbot

Barnama
22-07-2025
- Barnama
- The Dark Side Of AI In Universities: Are We Celebrating Cheating?
22/07/2025 03:55 PM

Opinions on topical issues from thought leaders, columnists and editors.

By: Assoc Prof Dr Azmi Abdul Latiff

Artificial Intelligence (AI) has become the shiny new tool in today's classrooms. From helping university students correct English grammar in their assignments to assisting lecturers in creating materials, AI is reshaping education at lightning speed and has become so resourceful that it is pushing search engines like Google aside. But lurking beneath the excitement is a worrying trend: a growing over-reliance on AI and the celebration of cheating with it.

Recently, I came across a social media post in which a local university student openly boasted about using AI to complete assignments. He claimed he could finish the tasks even at the eleventh hour by relying on AI to generate essays and reports, without getting caught by lecturers. Shockingly, this was not whispered in shame but proudly shared, as if it were a clever achievement.

Such growing over-reliance on AI tools when completing assignments raises serious concerns. Universities risk producing graduates who not only lack the professional competence to perform in the real world but may also emerge as individuals with questionable work ethics and underdeveloped personal responsibility. Are universities, knowingly or unknowingly, breeding a culture where AI cheating is celebrated rather than condemned?

AI: A double-edged sword

There is no denying that AI tools like ChatGPT, Grammarly and Quillbot have transformed how students approach learning and assignments. They offer instant feedback, improve accuracy, and help learners overcome writer's block. ChatGPT, for instance, is akin to a very resourceful friend who knows everything under the sun, and a servant who will prepare anything at the 'master's' request.

Nevertheless, AI tools are meant to support learning, not replace it. When students use AI to generate assignments in full, it defeats the purpose of education. They skip the very processes education is designed to cultivate: critical thinking, creativity, problem-solving, and self-expression. They may submit polished, remarkable work but walk away with shallow learning.

In English language proficiency classes, for instance, the goal is not just to produce correct sentences in assignments but to 'own' the language, to communicate confidently, and to express one's own ideas. A student who depends entirely on AI tools will struggle in real-life interactions, workplace communication, or even oral exams.

Why is this happening?

Several forces are pushing students down this slippery slope. First, the pressure to perform is immense. Faced with tight deadlines and high expectations, some students view AI as an easy way out. Second, peer influence plays a big role. When students see others using AI to get ahead without adverse consequences, they start believing it is normal, or even clever. Third, many students lack awareness of the ethical lines they are crossing. To them, using AI may not feel like cheating because it is not copy-pasting from another student or the internet; it is 'just an app'. Finally, there is an institutional problem: some universities have not yet set clear policies on AI use, leaving both students and teachers in a grey zone over what is allowed and what is not.

The risks we overlook

When universities fail to address this issue, they risk not only the integrity of assignments but also their entire reputation.
Graduates may leave their universities with AI-polished degrees, but their lack of real-world skills, such as communication and interpersonal skills, will quickly be exposed in the job market. Employers will notice. Industries will complain. The institution's credibility, and the country's higher education system, will slowly erode.

Beyond that, over-reliance on AI erodes the teacher-student relationship. Instead of seeing lecturers as mentors guiding their learning, students start viewing them as obstacles to outsmart. Students celebrate every time they complete an AI-generated assignment without being caught by their teachers. This undermines the entire spirit of education.

What needs to change?

Universities must act now to reclaim the narrative. First, set clear AI guidelines: define what counts as acceptable assistance (e.g., grammar checks) and what crosses the line (e.g., full essay generation). Second, teach ethical AI use: integrate discussions of responsible AI practices into the curriculum. Third, design better assessments: tasks that require in-person oral presentations, handwritten reflective journals, and in-class tests and discussions are much harder to fake with AI. Next, train lecturers to detect AI-generated work and, more importantly, to design learning tasks that promote originality and critical engagement.

Most importantly, we need to rebuild the value of effort and learning among students. Education should not be reduced to chasing grades or tricking systems. It should be about growth, discovery, and building human capacities that no machine can replicate.

Technology should serve, not take control

AI is here to stay, but so are the values of integrity, effort, and human learning. As educators and institutions, we must ensure that technology serves these values, not undermines them. For students: please stop celebrating the clever use of AI to cheat and start championing the honest, sometimes messy, but ultimately rewarding process of real learning.

-- BERNAMA

Assoc Prof Dr Azmi Abdul Latiff is Dean of the Centre for Language Studies at Universiti Tun Hussein Onn Malaysia (UTHM).

(The views expressed in this article are those of the author(s) and do not reflect the official policy or position of BERNAMA)
Yahoo
11-06-2025
- Yahoo
Fact Check: Don't fall for photos of Pope Leo XIV tumbling down stairs
Claim: In June 2025, a series of photographs authentically showed Pope Leo XIV falling down stairs.

Rating:

In 2025, a set of photographs allegedly depicting Pope Leo XIV falling down stairs circulated online. For example, one Facebook post (archived) by the account Daily Bible Verse shared three images: one of the pope waving to the crowd as he walked down stairs and two of him falling down stairs. The same photos appeared several times on Facebook (archived) and Threads (archived).

However, the story was fictional. A Google search (archived) and a Google News search (archived) revealed that no reputable news outlet had reported the incident.

Of the three images, the one showing the pope waving was most likely authentic. The photo started circulating online on May 21, 2025, after the pope's first weekly general audience. Similar photos from that event, in the same setting, appeared the same day from reputable news agencies such as Getty Images, NurPhoto and The Associated Press, and artificial intelligence detectors indicated it was not AI-generated.

But there were visual clues that the two smaller images showing the pope falling were unlikely to be real. For example, Leo's face in them was blurry and elongated. His position as he fell also appeared to change from image to image, falling backward in the first and then forward in the second, in a way that seemed physically implausible. Snopes ran the images through two different artificial intelligence image detectors, Decopy and Undetectable, both of which determined the images of the pope falling were AI-generated.

The pinned comment on the Daily Bible Verse post linked to a website with an article that appeared to have little to do with the photographs. It read:

According to multiple eyewitnesses, a piece of ceremonial technology—possibly a small microphone transmitter or liturgical device—detached unexpectedly from Pope Leo's vestment and fell near the altar. The moment was brief, almost imperceptible to many in the crowd, but cameras caught it. Within minutes, social media platforms exploded with theories, commentary, and metaphor-laden interpretations.

Snopes ran the text of the article through two AI text detectors, Quillbot and GPTZero, both of which concluded it was AI-generated, a clue that the website in question was a junk content farm filled with so-called "AI slop."

Snopes often fact-checks fake and altered images of well-known people; see, for example, our story on an edited image of tech billionaire Elon Musk's chest and a fact check debunking an image of United Healthcare CEO shooting suspect Luigi Mangione wearing a "Sailor Moon" costume.

Ibrahim, Nur. "Fake Photo Shows Luigi Mangione in 'Sailor Moon' Costume." Snopes, 16 Dec. 2024. Accessed 10 June 2025.
Liles, Jordan. "Photo of Elon Musk Altered to Increase His Chest and Stomach Size." Snopes, 11 Nov. 2024. Accessed 10 June 2025.
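For readers curious about the mechanics of the detector step described above, the sketch below shows roughly how article text can be submitted to a hosted AI-text-detection service and scored. It is a minimal illustration under assumptions: the endpoint URL, request fields and response schema are hypothetical placeholders, not the actual Quillbot or GPTZero APIs, which have their own credentials and documented request formats.

```python
# Minimal sketch of submitting text to a hosted AI-text-detection service.
# The endpoint, request fields and response shape are hypothetical placeholders,
# not the real Quillbot or GPTZero APIs.
import requests

DETECTOR_URL = "https://example-detector.test/api/v1/classify"  # hypothetical endpoint


def ai_likelihood(text: str, api_key: str) -> float:
    """Return the detector's estimated probability (0.0-1.0) that `text` is AI-generated."""
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"ai_probability": 0.97}
    return response.json()["ai_probability"]


if __name__ == "__main__":
    sample = "According to multiple eyewitnesses, a piece of ceremonial technology..."
    score = ai_likelihood(sample, api_key="YOUR_KEY")
    print(f"Estimated probability the text is AI-generated: {score:.2f}")
```

In practice a fact-checker would run the same text through more than one detector, as the article describes, since individual detectors can disagree and produce false positives.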


Techday NZ
22-05-2025
- Business
- Techday NZ
Organisations ramp up AI tool blocks to counter shadow AI risks
Research from DNSFilter reveals an increase in organisations blocking access to generative AI tools in an effort to mitigate the risks associated with shadow AI.

DNSFilter has reported that it blocked over 60 million generative AI queries during March, representing approximately 12% of all such queries processed by its DNS security platform that month. Notion accounted for the vast majority of these blocked requests, making up 93% of all generative AI queries stopped by DNSFilter and surpassing the combined total of Microsoft Copilot, SwishApps, Quillbot and OpenAI queries.

According to the company's analysis, the average monthly number of generative AI-related queries processed has exceeded 330 million since January 2024, indicating growing usage of and interest in these tools within professional environments.

Alongside increasing usage, organisations are developing policies to manage and regulate the adoption of generative AI technologies among their employees. Many have opted to block specific domains through DNS filtering, aiming to exert greater control and comply with internal policies designed to reduce the prevalence of shadow AI: the use of AI tools that operate outside the awareness or control of IT and security teams.

While the presence of malicious and fake generative AI domains, such as those impersonating ChatGPT, has decreased significantly (down 92% from April 2024 to April 2025), DNSFilter's data shows a notable shift by threat actors towards domains containing "openai" in their names. There has been a 2,000% increase in such malicious sites over the same period, highlighting the evolving threat landscape around generative AI.

DNS-based filtering allows businesses to manage not just cyber threats but also the internal adoption of AI tools, ensuring that only approved solutions are accessible. This helps mitigate the risks of unsanctioned generative AI adoption, particularly when it takes place without oversight from IT or security professionals.

Ken Carnesi, Chief Executive Officer and co-founder of DNSFilter, commented: "Companies know the benefits that generative AI offers, but they also know the potential cybersecurity risks. More organisations are now proactively choosing which tools to block and which to allow. That way, employees can't 'sneak' AI tools into the corporate network or inadvertently use a malicious one. In this way, a DNS filtering solution helps companies enforce policies, block possible threats and enable greater productivity all at the same time."

DNSFilter's data underscores the tension between the drive to leverage generative AI for workplace productivity and the imperative to maintain robust security controls against emerging threats linked to shadow AI and domain-based impersonation.
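As a rough illustration of the DNS-layer blocking the article describes, the sketch below shows the kind of policy check a filtering resolver might apply before answering a query: exact and subdomain matches against a blocklist of generative AI domains, plus a crude flag for lookalike names containing "openai". The domain list, the sinkhole answer and the lookalike rule are illustrative assumptions, not DNSFilter's actual categories or implementation.

```python
# Conceptual sketch of a DNS-layer policy check, in the spirit of the blocking
# described above. Blocklist entries, the sinkhole answer and the lookalike rule
# are illustrative assumptions, not DNSFilter's actual policy or implementation.

# Example blocklist of generative AI domains an organisation might choose to block.
BLOCKED_GENAI_DOMAINS = {
    "chat.openai.com",
    "copilot.microsoft.com",
    "quillbot.com",
    "notion.so",
}

SINKHOLE_IP = "0.0.0.0"  # answer returned instead of the real record


def is_blocked(qname: str) -> bool:
    """True if the queried name matches a blocked domain or one of its subdomains."""
    name = qname.rstrip(".").lower()
    return any(name == d or name.endswith("." + d) for d in BLOCKED_GENAI_DOMAINS)


def looks_like_openai_impersonation(qname: str) -> bool:
    """Crude lookalike check: 'openai' appears in a name that is not an official openai.com host."""
    name = qname.rstrip(".").lower()
    official = name == "openai.com" or name.endswith(".openai.com")
    return "openai" in name and not official


def resolve_with_policy(qname: str) -> str:
    """Return a sinkhole answer for blocked or suspicious names; otherwise allow the lookup."""
    if is_blocked(qname) or looks_like_openai_impersonation(qname):
        return SINKHOLE_IP
    return "ALLOW"  # in a real deployment, hand off to the upstream resolver


if __name__ == "__main__":
    for q in ["quillbot.com", "docs.notion.so", "login-openai.example.com", "example.org"]:
        print(q, "->", resolve_with_policy(q))
```

A production resolver would of course pull its categories from a managed feed rather than a hard-coded set, but the basic shape is the same: the policy decision happens at name-resolution time, before any connection to the AI service is made.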