
AI chatbot Grok misidentifies Gaza famine photo as Yemen
Social media users questioned Grok, Elon Musk's AI chatbot, about the photo's origin, and the bot incorrectly stated that the image depicted a Yemeni child from 2018. The photo actually shows nine-year-old Mariam Dawwas in Gaza City on August 2, 2025. Before the war, Mariam weighed 25 kilograms, her mother told AFP; now she weighs only nine kilograms and survives mostly on milk, which is 'not always available.'

When challenged, Grok insisted it does not spread fake news and relies on verified sources, yet the chatbot later repeated the same incorrect claim about the photo's origin. It also misidentified another AFP photo of a starving Gazan child as being from Yemen, and Mistral AI's Le Chat likewise misidentified the Gaza famine photo as Yemeni. Grok has previously generated controversial responses, including praise for Nazi leader Adolf Hitler.

Louis de Diesbach, a researcher in technological ethics, highlighted AI's limitations, describing AI tools as 'black boxes' with unclear reasoning behind their responses. Grok's biases align with Elon Musk's ideological leanings, according to de Diesbach, who explained that AI chatbots are designed for generating content, not for verifying facts. Because a model's training data and alignment phase shape its responses, such errors tend to persist.

De Diesbach warned against relying on AI for factual accuracy, calling chatbots 'friendly pathological liars,' and experts urge caution when using AI tools for fact-checking due to their unreliable nature. - AFP

Related Articles

Malay Mail
3 hours ago
China wants US to ease AI chip restrictions before possible Trump-Xi summit
BEIJING, Aug 10 — China wants the United States to ease export controls on chips critical for artificial intelligence as part of a trade deal before a possible summit between Presidents Donald Trump and Xi Jinping, the Financial Times reported on Sunday.

Chinese officials have told experts in Washington that Beijing wants the Trump administration to relax export restrictions on high-bandwidth memory (HBM) chips, the newspaper reported, citing unnamed people familiar with the matter.

The White House, State Department and China's foreign ministry did not immediately respond to requests for comment on the report.

HBM chips, which help perform data-intensive AI tasks quickly, are closely watched by investors due to their use alongside AI graphics processors, particularly Nvidia's.

The FT said China is concerned because the US HBM controls hamper the ability of Chinese companies such as Huawei to develop their own AI chips.

Successive US administrations have curbed exports of advanced chips to China, looking to stymie Beijing's AI and defence development. While this has impacted US firms' ability to fully address booming demand from China, the country remains one of the world's largest semiconductor markets and an important revenue driver for American chipmakers. — Reuters

The Star
6 hours ago
Yomiuri sues US AI startup over use of articles; Perplexity allegedly used over 100,000 news stories
FILE PHOTO: Perplexity AI application icon is seen in this illustration taken January 4, 2024. - Reuters

TOKYO: Three Yomiuri Shimbun newspaper companies have filed a lawsuit against a US startup that allegedly used a large number of Yomiuri Shimbun articles and images for a generative AI-powered search engine without permission.

The Yomiuri Shimbun; The Yomiuri Shimbun, Osaka; and The Yomiuri Shimbun, Seibu filed the lawsuit in the Tokyo District Court on Aug 7, demanding that Perplexity AI stop using the newspaper's articles and seeking damages of about ¥2.17 billion (US$14.7 million). The three companies are under The Yomiuri Shimbun Holdings.

This is the first time a major Japanese media company has filed such a lawsuit, although several AI-related businesses in Europe and the United States have been taken to court for the unauthorised use of copyrighted material by generative artificial intelligence (AI).

Established in 2022, Perplexity provides a search service that parses the latest online information to compile answers to questions typed in by users. A conventional search engine displays a list of websites related to the search terms entered by a user, but Perplexity pitches its service as an 'answer engine' that summarises information gleaned online, enabling users to acquire the information they want without looking through individual websites.

In its lawsuit, The Yomiuri Shimbun claims that Perplexity copied articles from the Yomiuri Shimbun Online (YOL) digital service and sent answers with content similar to these articles to its users. The Yomiuri Shimbun claims this violated the right of reproduction and the right to public transmission under copyright law.

Perplexity allegedly acquired 119,467 Yomiuri articles without permission between February and June 2025 for the purpose of generating answers for its users. The Yomiuri Shimbun decided to seek damages of ¥16,500 per article, calculated based on a regular licensing fee. The amount of compensation being sought could increase depending on the findings of a further investigation.

In addition, conventional search engines encourage users to visit the YOL service, which generates advertising revenue for The Yomiuri Shimbun. However, Perplexity's service results in fewer online visits to the YOL. The Yomiuri Shimbun claims this is causing a drop in advertising revenue, which also constitutes an infringement of its business interests; it is seeking compensation for the lost advertising income as well.

About 2,500 reporters are involved in news coverage for The Yomiuri Shimbun. Consequently, the newspaper believes Perplexity has been getting a 'free ride' on the activities of a media organisation that devotes 'great effort and expense' to creating news articles.

In response to The Yomiuri Shimbun's emailed request for comment, Perplexity sent a message that said: 'We are deeply sorry for the misunderstanding this has caused in Japan. We are currently working hard to understand the nature of the claims. We take this very seriously, because Perplexity is committed to ensuring that publishers and journalists benefit from the new business models that will arise in the AI age.'

In October 2024, Dow Jones, a division of News Corp, filed a lawsuit together with another company alleging copyright infringement by Perplexity. That case is pending in federal district court in New York. Perplexity has rejected the claims in that case and insists that its search function is based on publicly available facts that are not protected by copyright laws.

The Yomiuri Shimbun Holdings' Corporate Communications Department issued a statement that said: 'Allowing a company to free ride on the results of our reporting would negatively affect our accurate news coverage backed by our research, and could undermine the foundations of democracy. We hope this lawsuit will raise questions about rules on the rapidly spreading use of generative AI and how it should be used and applied.' - The Japan News/ANN


Borneo Post
11 hours ago
Balancing AI benefits, academic integrity
ChatGPT is known to offer substantial benefits as a learning tool, such as generating essays, enhancing writing creativity, analysing data, accelerating research processes, and providing instant answers to complex questions. — AFP photo

In the rapidly advancing era of artificial intelligence (AI), tools like ChatGPT are reshaping the landscape of higher education, bringing profound changes to institutions of higher learning (IPTs) nationwide.

ChatGPT offers substantial benefits as a learning tool, such as generating essays, enhancing writing creativity, analysing data, accelerating research processes, and providing instant answers to complex questions. However, this convenience also raises concerns, particularly over misuse by students who rely on the software to complete assignments automatically, without true comprehension or critical engagement.

Academic dishonesty is becoming more complex, as conventional plagiarism tools struggle to detect AI-generated content. Even more concerning is the growing reliance on AI, which blurs the line between genuine student effort and machine-assisted work, raising important ethical and pedagogical questions.

Changing landscape of academic dishonesty

According to Associate Professor Dr Mohd Khairie Ahmad, Dean of the School of Multimedia Technology and Communication at Universiti Utara Malaysia (UUM), the philosophy of technology is to simplify and enhance capabilities; when it comes to AI in learning, it depends on context.

'Generative AI is a technological advancement capable of producing content that previously required human thought and effort. AI can certainly generate student assignments or coursework.

'If students rely entirely on AI, it could potentially hinder their learning process. This irresponsible or unethical use of AI to complete assignments, while claiming them as original work, is referred to as 'AIgiarism' or AI plagiarism,' he tells Bernama.

Sharing that digital plagiarism or academic dishonesty is not a new phenomenon, Mohd Khairie says AI's development has made academic misconduct more dynamic. He has observed that since generative AI gained popularity around 2022, the higher education world has become aware of and anticipated the challenges it brings.

'It is undeniable that the use of AI in learning, especially for assignment completion, has become common over the past year or two. There are students who rely entirely on AI to complete assignments or even answer tests or quizzes, especially when conducted online.

'Many students believe such actions are not wrong since AI is legal and not a prohibited technology. However, this is considered unethical because the work does not stem from the student's own cognitive effort or thinking. In fact, such conduct is regarded as a form of plagiarism.

'Typically, lecturers evaluate student assignments by measuring the similarity index, and now also through AI detection. Among the AI applications that can detect AI plagiarism are Turnitin, GPTZero, Winston AI and Copyleaks AI Detector,' says Mohd Khairie, adding that evaluating the style, language structure, and content of assignments also helps detect breaches of academic integrity.

While not denying that educators, particularly lecturers, also use AI for teaching and research purposes, he says there can be no compromise when it comes to violating the principles of academic integrity.
According to him, the world of higher education upholds the practice of respecting and valuing past scholarly works.

'A scholarly work requires reading and digesting prior writings as part of the process of generating new thoughts or ideas. This is a defining feature of academic writing and a core principle of scholarly work – to acknowledge references used, at the very least by listing them in citations.

'In the context of AI being a productive tool that supports scholarly work, it is therefore ethical to clearly disclose its use and to list the AI sources used to obtain information, ideas, and so on,' he points out.

Establishing guidelines

Responding to whether IPTs have clear guidelines on AI usage by students and lecturers, Mohd Khairie says that, to his knowledge, the Malaysian Qualifications Agency (MQA) was among the earliest to issue brief guidance, through an Advisory Note in 2023, on the use of generative AI across all Malaysian institutions.

'In 2024, Universiti Teknologi Malaysia (UTM) published more specific guidelines for educators and students on the application of generative AI. These guidelines focus on lawful, responsible, transparent, trustworthy, and ethical use of AI, grounded in values, regulations, and legislation.

'Since AI has become a foundational and routine part of the teaching and learning process, all IPTs should have clearer and more specific guidelines for generative AI. Furthermore, these guidelines should eventually align with the AI Act currently being drafted by the National Artificial Intelligence Office (NAIO) under the Ministry of Digital.'

Describing the best approach as educating students to use AI ethically and responsibly, as a learning aid rather than a shortcut to complete assignments, Mohd Khairie stresses the importance of awareness education, especially since AI is poised to become an essential tool for enhancing learning efficiency and productivity.

'AI should be understood not as the end product, but as a process that supports students' cognitive (thinking) activities. If this understanding doesn't take root, it's not impossible that digital 'illnesses' like brain rot (mental fatigue) may affect university students.

'AI is an unavoidable phenomenon and, at the same time, a current necessity. Its exposure and practice as a learning support tool should be promoted as a value and part of the academic culture.

'A study by leading international publisher Wiley found that in 2024, AI contributed to a 72 per cent increase in academic dishonesty compared to 2021 in the United States and Canada. However, responsible and ethical AI guidance by educators has been shown to potentially reduce academic misconduct among students.'

AI as part of ecosystem

Meanwhile, the Malaysian Cyber Consumers Association (MCCA) views the increasing use of AI, particularly ChatGPT, among students in IPTs as a clear sign that higher education is undergoing a profound technological transformation. MCCA president Siraj Jalil says AI is no longer a tool of the future but has already become an integral part of the current ecosystem in IPTs.

'MCCA does not see this issue as entirely a threat, nor as an opportunity without risks. It lies in a grey area that can bring either benefits or harm depending on how it is used.

'If a student uses AI to enhance their understanding of a subject, generate ideas, or organise their thoughts, it can lead to progress.
'However, if it is used entirely without the involvement of reasoning, critical thinking, and a sense of responsibility, then it clearly challenges academic integrity.

'Therefore, MCCA believes this is the time for IPTs to re-evaluate their approaches to assessment and learning – not to reject AI from educational methods, but to develop a framework that allows AI to be used ethically and effectively.'

Siraj observes that the concerns of some lecturers regarding this issue should also be taken seriously.

'The MCCA has received a great deal of direct feedback from lecturers reporting a sharp increase in students submitting assignments almost entirely generated by AI. This not only disrupts the academic assessment process, but also raises uncertainty in terms of academic aesthetics and values.

'The solution to this issue isn't merely to impose restrictions or punishments, but to create a more responsible academic ecosystem – one that focuses on ethics and perhaps even redefines academic benchmarks beyond AI usage.

'Every IPT should develop clear AI usage guidelines and integrate AI literacy and academic ethics modules into student orientation and professional development for lecturers. Assignments should also be restructured to emphasise process rather than just outcomes, such as through presentations, reflective portfolios, or fieldwork,' he adds, noting that ethical use is shaped not by fear, but through understanding and clear guidance.

At the same time, Siraj suggests that lecturers be given training on the use of AI in research and academic writing, including the importance of disclosing AI usage openly in methodology or references to safeguard academic integrity.

'Academic publications, especially journals and conference proceedings, should begin adapting their policies on AI-generated content. What matters most is striking a balance between innovation and integrity. This is to address concerns that some research content could be produced without critical review or clear AI usage disclosure.'

Siraj also believes that the Ministry of Higher Education (MoHE), in collaboration with NAIO, could formulate a national policy or official guidelines on AI usage in IPTs. He proposes that such a policy include several key components: permitted levels of AI usage, types of assignments appropriate for AI support, forms of misuse that warrant action, and AI literacy and ethics requirements for all campus communities.

'This policy should be developed inclusively, with engagement from academic experts, students, technology practitioners, and industry stakeholders to ensure it is responsive and practical.

'Responsible use of AI begins with the fundamental principle that AI is a tool – not a replacement for human reasoning. For students, responsibility begins with the awareness that learning is a process of self-development and understanding one's field, not just completing tasks for grades.

'Using AI to understand concepts or review writing structure is acceptable. But copying or generating an entire assignment without comprehension goes against the spirit and discipline of education,' says Siraj, adding that both students and lecturers must understand the risks and threats of AI misuse, including the possibility of false information, biased algorithms, and unverified content dissemination.
Awareness and high literacy

Sharing his views, Muhammad Haziq Sabri, president of the Student Representative Council at Universiti Teknologi MARA (UiTM) Shah Alam for the 2024/2025 session, says ChatGPT has now become a common tool among university students and has helped him significantly in completing assignments and preparing notes for examinations.

'It enables note generation from lecture slides and helps in understanding certain topics. Using ChatGPT to correct grammar and sentence structure also speeds up the process of completing assignments,' he says.

Rejecting the notion that the use of AI, particularly ChatGPT, is a form of academic cheating, Muhammad Haziq says it should be seen as 'a modern learning support tool that must be used responsibly'.

'It becomes academic dishonesty when students just 'copy and paste' without understanding or modifying the content generated by ChatGPT. Almost all my friends also use ChatGPT, but not excessively; they focus on things like assignment structure and grammar-checking.

'So far, I have not heard of any students facing disciplinary action for AI misuse. Most students use ChatGPT responsibly because they understand that misuse could violate the university's academic ethics policies.'

Muhammad Haziq says official guidelines on the use of ChatGPT in teaching and learning were issued under Academic Circular No 5 of 2023, adding that lecturers are encouraged to guide students on 'using ChatGPT ethically as a learning tool'.

He says the circular also stresses the importance of ensuring that AI is used to foster critical thinking, understanding, and values, not merely for copying answers, as outlined in Article 6.

'This shows that the university not only allows the use of AI, but encourages its responsible use and provides guidelines,' says the Bachelor of Public Relations student from the Faculty of Communication and Media Studies.

For Muhammad Asyraf Daniyal Abdul Halid, 24, a Master's research student in Marine Biotechnology at Universiti Malaysia Terengganu, ChatGPT serves as a guide, but over 90 per cent of the work comes from the student's own efforts in sourcing credible information with proper citations.

'ChatGPT really helps us search and compile necessary information, develop ideas, and get an overview of the assignments or projects given by lecturers. However, plagiarism and failure to fact-check information are common forms of misuse among students,' he adds, noting that not all students in IPTs have a high level of awareness and literacy when using such software. — Bernama