
Latest news with #ChatGPT-generated

Why ChatGPT is disrupting liberal arts but not STEM in US colleges

Time of India

25-05-2025


College campuses across the US are grappling with a question that feels more philosophical than procedural: when every student uses artificial intelligence to complete assignments, is it still cheating? In 2025, the use of tools like ChatGPT is so widespread in academia that it's no longer a fringe activity—it's becoming the norm.

According to a discussion published by The Bulwark, AI usage among students has reached a scale that 'you can't believe.' High school and college students now rely on generative AI to write essays, complete homework, and even tackle exams. What was once considered academic dishonesty is being reinterpreted as digital literacy, and many students see it as a justifiable shortcut rather than a violation of rules.

A tale of two disciplines

The debate is largely divided along academic lines. 'Generally speaking, there are two branches of study,' The Bulwark article argues. 'Real subjects—STEM—and fake subjects—liberal arts.' While controversial, the distinction highlights a growing gap in how AI affects different fields.

STEM courses, which often rely on in-person exams, lab work, and problem sets, remain relatively resistant to AI interference. 'AI would have been of no use to me,' wrote one contributor to The Bulwark who studied physical chemistry and immunobiology. They noted that while AI might assist in solving complex problem sets, the final answers still required manual verification—'questions with right and wrong answers.' Exams in STEM fields often take place in proctored settings where students are 'staring at exam packets filled with equations and formulas,' armed with only a pencil and their own knowledge.

In contrast, liberal arts courses are facing a reckoning. Essays, reading responses, and open-ended research assignments are now prime targets for ChatGPT-generated content. 'Legitimately, I cannot think of a way to stop AI from completely disrupting liberal arts education,' the piece from The Bulwark noted.

A generational shift in norms

Comparisons to past social trends offer insight into how normalized AI-assisted work has become. 'Cheating with ChatGPT in 2025 is like smoking pot in 1975,' The Bulwark observed. 'Everyone is doing it.' This generational shift suggests that the ethical boundaries surrounding academic work are being redrawn in real time. One contributor to The Bulwark even sympathized with students, saying they were 'kind of on the side of the cheaters,' and questioned whether the educational model is outdated in the face of transformative technology.

The classroom after ChatGPT

As President Donald Trump meets with South African officials and debates swirl over education policy, the real classroom revolution may be happening quietly—one AI-generated assignment at a time. The question now isn't whether students will keep using ChatGPT. It's whether colleges will adapt before they lose control entirely.

Why this town in Assam shares its name with our favorite cheesy pizza, and who borrowed it first? This tale will leave you surprised

Time of India

17-05-2025


You've devoured its slices, admired its simplicity, and maybe even argued about the best cheese-to-sauce ratio. But what if we told you that your beloved Margherita pizza shares its name not just with an Italian queen—but with a small, coal-dusted town in the farthest corner of India? Welcome to Margherita, Assam, where railway tracks, royal legacies, and cheese-laden legends converge.

A Slice of Royalty in the Heart of Assam

Tucked away in the lush, forested hills of Upper Assam's Tinsukia district lies a town called Margherita—yes, spelled just like the classic pizza. This might seem like a coincidence too bizarre to be true. But the story of how this sleepy town inherited such a delectable name has roots deeper than pizza dough and richer than mozzarella.

In the 1880s, the British Raj was busy building railroads to extract coal from Assam's mineral-rich underbelly. To lead this ambitious engineering feat, they brought in a team of Italian railway engineers. Among them was one Roberto Paganini, a man of both technical brilliance and national pride. While camped by the Dehing River, Paganini christened the nearby settlement Margherita, in honour of Italy's newly crowned Queen Margherita of Savoy—a monarch adored across his homeland. Little did anyone know that this quiet homage would later echo across continents in a most unexpected form.

From Royal Palaces to Pizza Plates

Just a few years after Assam got its Margherita, Italy's Queen Margherita would receive another tribute—this time edible. In Naples, master pizzaiolo Raffaele Esposito whipped up a pizza inspired by the colors of the Italian flag: red tomato, white mozzarella, and green basil. Presented to the queen during her visit in 1889, the simple dish won her heart and was named 'Pizza Margherita' in her honour. While food historians argue about the exact origins of this now-iconic pie—some tracing its ingredients to Neapolitan street fare as early as the 1790s—the name's royal association has stuck. Yet few realize that a tiny Indian town bore the name Margherita before the pizza did.

A Taste That Transcends Borders

What's fascinating is how Queen Margherita's name has crossed oceans and class lines—beloved in both royal palaces and roadside food stalls, etched into history both as a culinary icon and a coal town. The pizza might be served on Instagrammable plates, but the town that shares its name is still fighting for a just transition from its coal-streaked past.

So, the next time you sink your teeth into a slice of Margherita, pause to savour not just the taste—but the journey of a name that traveled from royal Italy to the rain-drenched hills of Assam. It's a tale baked in history, layered with irony, and topped with a reminder that even the most familiar things can hold astonishing surprises.

AI-generated images romanticizing prostitution circulate online

Korea Herald

12-05-2025


Accounts promoting prostitution to be removed, but expression isn't clear violation: Meta

Amid a surge in generative AI use, South Korea has seen a rise in Instagram accounts — allegedly run by sex workers — using ChatGPT-generated images to promote and romanticize illegal prostitution, prompting criticism of Meta for failing to take appropriate action despite the accounts' potential legal violations.

On Instagram, numerous accounts can be found sharing AI-generated images and comics depicting the working environments of sex workers in Korea, where prostitution is prohibited by law. With account bios reading, 'Based on real stories from the girls' and 'Touching real-life experiences illustrated with AI,' the accounts show illustrations in a comic-like style.

One illustration found on such an account depicts a woman getting her hair and makeup done at a salon as she waits for a 'call,' or a vehicle that picks up "hosts" before driving them to their clients. Another account, which mainly deals in four-panel cartoons of sex workers' personal lives as well as some happenings at work, was seen posting cartoons of the women receiving high-priced gifts such as designer bags from their clients, as well as their experiences interacting with clients they deemed rude or disrespectful. On this account, a cartoon posted as an Instagram Reel titled 'How to deal with old customers' had garnered 5.8 million views as of Monday.

Though most of the Instagram accounts were created no earlier than April, many of them have already gained traction, with followings ranging from 5,000 to 9,000.

Regarding the emergence of such Instagram accounts, many Koreans have voiced discomfort. 'I see accounts like this way too many times on Instagram. Every time I do, I make sure to report the account, but Instagram doesn't seem to be doing much about it,' 30-year-old Kim Jin-kyeong told The Korea Herald.

Under South Korean law, promoting prostitution or a business that provides prostitution as a service is explicitly prohibited under the Act on the Punishment of Arrangement of Commercial Sex Acts. Meta's community guidelines also prohibit content that facilitates, encourages or coordinates commercial sexual services, even if the content is presented in a narrative or artistic manner.

Lawyer Min Go-eun told The Korea Herald that such accounts could face legal action, though they also 'hold some loopholes.' 'As it posts suggestive content that holds potential in romanticizing sex work and also provides means of contact to the individuals behind such accounts, one can say that it holds potential in violating South Korean law,' said Min. 'However, there are some loopholes, in which the account owners can claim that they didn't promote sex work, as it doesn't explicitly promote its own sexual services to its followers.'

An official from Meta also told The Korea Herald that such accounts still require additional review to determine whether they clearly violate the platform's policies. 'Instagram accounts and posts are reviewed for violations in accordance with our community guidelines and appropriate actions are taken after that,' said the official. 'If an account promotes illegal prostitution, Meta will definitely make sure such accounts are removed. However, in cases where the content holds potential in being determined as a mere form of expression rather than promotion, it's hard for Meta to take a stance against that, as a form of speech cannot be said to be a violation of our policies.'

AI in higher education: Tool or trap?

Malaysian Reserve

06-05-2025


Despite its convenience, educators are finding that AI is hindering the development of essential human skills in students

by FARAH SOLHI

HIGHER education institutions are observing a growing trend among students: Assignments that require critical thinking are increasingly being answered using artificial intelligence (AI)-powered chatbots.

Since 2022, developers have introduced text-generating tools such as OpenAI's ChatGPT, Google's Gemini and Microsoft Copilot, aimed at making life easier. These tools can perform tasks like home automation and personal assistance with just a few typed prompts — essentially enabling conversations with virtual robots. However, despite their convenience, educators are finding that AI is hindering the development of essential human skills in students.

For Tunku Abdul Rahman University lecturer Haizren Mohd Esa, the appearance of specific patterns, the use of pointers, or repetitive words such as 'delve,' 'additionally' or 'key descriptive' in her students' essays are often telltale signs of AI usage.

'These are common markers of ChatGPT-generated content. Unlike Turnitin, which is used to detect plagiarism, ChatGPT does not have a clear author, making it difficult to determine whether the content reflects the students' own critical thinking or was generated by AI.

'ChatGPT provides students with complete answers to their questions and can generate content on virtually any topic. In contrast, tools like Grammarly only help identify grammatical errors and do not produce full responses to assignments,' she told The Malaysian Reserve (TMR).

I often convert written assignments into presentations or question-and-answer sessions, says Haizren (Pic courtesy of Haizren Mohd Esa)

With Turnitin, she can detect plagiarism from online sources, and if the similarity index exceeds 20%, students may face mark deductions. However, it cannot detect the use of ChatGPT or other AI tools — and students are aware of this, which presents a challenge for educators.

With some universities lacking the tools or apps to detect AI-generated content, Haizren said lecturers can only remind students and trust them not to use such tools in their assignments. For now, she said, this is the best educators can do, as it would be difficult to completely prevent the use of ChatGPT and similar tools. Alternatively, lecturers could find their own ways around these tools by designing creative and innovative assignments to test students' knowledge without the use of AI.

'For example, I often convert written assignments into presentations or question-and-answer sessions. This allows me to assess their knowledge directly, rather than relying on written work that may have been copied from AI tools. Practical assessments are also an effective way to evaluate students, as these cannot be completed using AI.

'That is just the way forward now. We must adapt our teaching methods and assessments to address AI usage. Since it is widely used, we need to find a way to manage and tolerate students' use of these tools in a controlled manner by educating and encouraging them not to rely entirely on AI-generated content.'

If universities allow students to rely on text-generating chatbots for written assignments or to find solutions to practical issues, they risk undermining students' critical thinking skills and jeopardising their personal development in the future, Haizren added.
Fatimah says the increasing use of AI tools among university students emphasises the need for ethics (Pic courtesy of Fatimah Tajuddin)

For UOW Malaysia Communication and Creative Arts lecturer Dr Fatimah Tajuddin, the growing use of AI tools among university students highlights the need for one thing: Ethics.

This emphasis came after she tried to cross-check the references cited in her students' written assignments, only to discover that the cited study papers did not exist.

'It was very unethical for me. Initially, I did not like students using AI tools to do their assignments and at some point, I was even appalled by some academics who are using ChatGPT to conduct their research.

'However, it is the norm these days, so it is up to me to understand the new reality that AI is being used everywhere and by many types of people,' she told TMR.

Fatimah now advocates for the ethical use of AI, where users need to be aware of how much they are relying on AI as an assistant to provide structure or starting points, and then elaborate on the subject matter on their own.

'Using 100% of AI would be like replacing your brain, which I do not like at all. We, as humans, still need to use our brain to structure our points in our own words.'

If it were up to her, she would put a stop to AI usage among her students altogether. 'But maybe that is impossible, so I have grown okay with students using AI. But then again, they must remember that AI is just to assist us, not to replace us,' Fatimah warned.

Text-generating AI tools won't produce meaningful responses without prompts that include solid ideas to begin with (pic: Bloomberg)

Why Do Students Rely on AI Tools?

A university student, who wants to be known as Siti, from Perak, said she regularly turns to AI tools in her studies, describing them as practical aids that simplify her academic workload and help her keep up with technology. The 24-year-old engineering student said she often turns to ChatGPT to complete her essays, as it helps her expand her ideas and explore new perspectives on various topics.

'I did not turn to such tools out of fear that I may not write well and eventually fail, but rather to help me better understand my essay topics.

'As an engineering student, it is important to have someone or something to bounce ideas off of. With ChatGPT and other text-generating tools, it feels like having a collaborative, shareable space to refine my thoughts and solve problems efficiently,' she told TMR.

Siti added that while AI tools help her develop her own thoughts, they do not replace the critical thinking process. She explained that these tools provide a basic foundation of ideas, but the responsibility for analysis and decision-making still lies with the user. She noted that text-generating AI tools won't produce meaningful responses without prompts that include solid ideas to begin with. 'You need to know what you want to learn or explore in order to solve problems or expand your ideas,' she added.

Meanwhile, Rahman, a student from a university in Kedah, said some AI tools are helpful in boosting his productivity. He believes that those who know how to use AI tools effectively will benefit the most, as the impact of AI depends entirely on how it is used — contrary to the belief that AI will eventually replace human jobs.

'Using AI tools doesn't guarantee a solution to the problem at hand. They help me explore ideas and occasionally offer useful insights, but not to the extent that I rely entirely on them,' he said.

'With the rise of the Internet, universities expect students to spend time researching existing studies to solve more advanced problems. This can be overwhelming, so AI tools like ChatGPT help guide our thinking and approach,' he added.

The second-year mechatronic engineering student added that AI can also provide ideas and suggestions in response to inquiries, and help users form logical hypotheses beyond academic contexts.

According to Murali, the key lies in how the usage of AI tools is embedded into the educational process (pic: TMR)

How Can Students Work Around AI?

Asia Pacific University (APU) deputy vice-chancellor Prof Dr Murali Raman told TMR that the use of AI tools among students, as raised by many lecturers, is not alarming, as it is impossible to avoid a technology that is becoming an integral part of daily life. However, there are guidelines in place to help lecturers set boundaries for students regarding the use of AI in their assignments.

'Among the key points in these guidelines is the emphasis on academic integrity, clearly outlining what is acceptable and what is not in terms of AI use in academic work.

'Students must be educated about the ethical implications and potential consequences of improper AI usage. Some higher education institutions also employ AI-detection software to identify assignments that may have been completed using these tools,' he said.

Murali said APU enforces academic policies and regulations, which may include disciplinary actions for violations and non-compliance. While AI tools can be valuable resources for generating ideas and enhancing writing, Murali said a holistic approach is still required when it comes to assignment design. He emphasised the importance of educating students about the ethical use of AI and clearly communicating guidelines on what constitutes acceptable use in academic work.

'We also believe that assignments should be designed to encourage critical thinking and originality, making it less feasible to rely solely on AI-generated content. These are often referred to as "authentic assessments",' he added.

By incorporating these aspects, APU can ensure that AI is used responsibly and constructively in academic settings. Higher learning institutions cannot shy away from the use of AI in assignments, as doing so would be akin to asking someone to stop using the Internet. One way to address this, he reiterated, is to come up with a holistic approach that combines clear policies, regulations and awareness programmes.

'Additionally, the use of relevant detection tools and, more importantly, the incorporation of authentic assessments are essential. This comprehensive strategy ensures that AI is used responsibly and constructively in academic settings,' he added.

It is a valid concern to view some cases of students using AI tools to complete their assignments as 'taking the easy way out,' Murali said. However, he said, the key lies in how the usage of AI tools is embedded into the educational process. Murali noted that if used ethically, AI can be a powerful tool to enhance learning rather than a crutch. Assignments and research should be more focused on solving real-world problems or be reflective in nature, and, in the case of smaller cohorts, be driven by active class participation and engagement.
'Education as a whole should focus on character development and producing professionals who are competent and industry-ready.

'These days, industries too are leveraging AI on many fronts. It is therefore imperative that we train our students on how to maximise the value potential of AI, while ensuring its use remains responsible and ethical,' he said.

This article first appeared in The Malaysian Reserve weekly print edition

ChatGPT as second opinion? AI is changing how patients look at their medical data, doctors a little wary

India Today

29-04-2025


When Adrian Pauly, a desk-bound software engineer battling chronic lower back pain for over a decade, decided he needed a second opinion, he didn't head to a new specialist. Instead, he turned to a chatbot.

Using ChatGPT's Deep Research feature, Adrian uploaded years' worth of injury history, therapy notes, exercise logs, and personal observations. The AI promptly returned not just a detailed breakdown of his condition, but a dynamic, personalised plan that adapted to his daily needs – something no human doctor had ever offered.

'It felt like the fog lifted,' Adrian told India Today Tech. 'I finally understand what's happening in my body, and why certain exercises work. Before this, I honestly wasn't sure if it was fixable. Now, I know it's a solvable problem.'

Adrian's story isn't an anomaly. Across Reddit forums, Facebook groups, and X, a growing number of people are sharing their experiences of turning to AI tools like ChatGPT's Deep Research for medical insights. We also came across Pauly's post on Reddit. Many users now feel that traditional healthcare leaves them wanting. Some say the chatbots have provided clearer explanations, more personalised recommendations, and in certain cases, even more accurate diagnoses than the doctors they visited.

Adrian Pauly's post on Reddit

A user on Reddit claims ChatGPT was able to diagnose an issue that doctors couldn't for years

This phenomenon raises a key question: Are AI tools like ChatGPT quietly becoming the new "second opinion"?

From Google to ChatGPT: The evolution of DIY diagnosis

Doctors have long struggled with patients self-diagnosing with the help of "Dr Google," often arriving at clinics convinced they have rare and catastrophic diseases. But ChatGPT's Deep Research feature, which lets users upload context-rich documents and asks nuanced follow-up questions, has changed the game.

Dr Ishwar Gilada, an infectious diseases specialist who has worked extensively with patients dealing with HIV, STDs, and related phobias, says he's seen this evolution firsthand.

"For the last four or five years, patients have been coming in armed with information from Google — sometimes with distracted or inaccurate knowledge," he said. "Now, with ChatGPT, the information patients bring in is more organised. It channels their thought process better than Google ever did, though it's still not always completely logical or correct."

While ChatGPT represents an upgrade in how information is gathered and synthesised, Dr Gilada cautions that it's not a replacement for human expertise. "ChatGPT can guide you, help you gauge if something is serious, and even suggest specialists. But it cannot replace a doctor's judgement, experience, or the irreplaceable human touch of counselling," he said.

The Doctor's dilemma

The rise of AI-driven health research is forcing many in the medical community to rethink their role.

"When patients come with ChatGPT-generated information, doctors should first verify the sources," Dr Gilada advised. "If references are credible – say, from peer-reviewed journals – then it's worth considering. That's a big improvement over Google, where you often don't know how authentic the information is."

More and more doctors are now talking about the role of AI in healthcare

"AI is outperforming doctors"

Yet the challenge isn't just fact-checking. Dr Gilada warns that over-reliance on AI could exacerbate a broader societal shift where human contact is minimised. "Today's generation is glued to screens. Without human conversation – with doctors, friends, family – it's easy to start believing you have a disease you don't even have."

Still, Dr Gilada is optimistic that AI can have a meaningful role in healthcare – provided doctors themselves stay informed. "Doctors must improvise and stay updated. If a patient finds a new treatment on ChatGPT that you haven't heard of, and you dismiss them out of ignorance, you lose their trust," he said. "We have to play with these tools ourselves and feed them better information."

Patients are building their own medical roadmaps

For users like Adrian Pauly, Deep Research has offered something more than a better Google search — it has given them a sense of control. Frustrated with years of symptom-chasing through traditional physical therapy, Adrian used Deep Research to map out the connections between tight hip flexors, weak glutes, and stress on his sacroiliac (SI) joint — knowledge that traditional medical visits never quite connected.

'It felt like my physical therapists were just throwing random exercises at the wall,' he said. 'ChatGPT helped me understand the biomechanics – why specific muscles were underperforming and how that created a chain reaction. No one had ever explained it that clearly.'

Adrian was quick to stress that AI isn't a substitute for medical care. "Use it as a tool, not a replacement," he said. "But if you're curious and want to really understand your body, it's incredibly useful."

The flexibility was also a game-changer. Adrian described how ChatGPT adapted his exercise routines in real time based on flare-ups, gym schedules, or running days — a level of personalisation he never found in short doctor consultations.

Meanwhile, some doctors have also raised concerns about "overglorifying ChatGPT"

The fine print

While AI tools can help patients feel more empowered, they also risk leading users down dangerous paths if used uncritically. ChatGPT, for instance, while structured and articulate, can still occasionally deliver inaccurate or incomplete information. Dr Gilada shared an example where ChatGPT mistakenly suggested that Human Papillomavirus (HPV) could only be sexually transmitted – omitting the fact that it can also spread non-sexually, including to children. 'If I just blindly used ChatGPT to make a medical brochure, it would have misinformed people,' he noted. 'AI can give a good working draft, but it always needs expert verification.'

In short: ChatGPT can galvanise your health journey, but it can't finish it for you.

And rather than seeing AI as competition, experts like Dr Gilada believe the medical community should engage with these tools – refining them, correcting them, and using them as bridges to better patient care. "There will always be a gap between what AI knows and what a doctor knows from experience," he said. "But if doctors and AI can work together, we can help patients feel more informed without losing the human connection."

As for Adrian Pauly, he's simply grateful that something finally clicked. 'Deep Research gave me what I was missing: real understanding. It didn't replace my doctors – it made me a better patient.'
