
Can AI be trusted in schools?
ARTIFICIAL INTELLIGENCE has become a school troublemaker. Not every child will go home and write 800 words on 'Macbeth' when ChatGPT can do it for them. In Turkey and the Netherlands, experiments using large language models (LLMs) to teach coding and maths ended with mixed results: some pupils became so dependent on the LLM that, when it was removed, they performed worse than classmates who had never used it. Teachers, too, have learned to cheat. Students complain that some educators are using bots to churn out generic feedback on their work.
Related Articles


Glasgow Times, 16 hours ago
East Dunbartonshire council questioned over use of AI in school policy
Critics have hit out at the design of surveys that formed part of the process, as well as the council's use of artificial intelligence (AI) tools to analyse responses, with the local authority now facing formal complaints about the matter.

As part of work to develop a new policy around smartphones in schools, officials at East Dunbartonshire Council opened online surveys for teachers, parents, secondary school students and upper-primary school pupils. Each survey, which did not collect names but did record information on the schools that young people attend, ran for around two weeks, with the council receiving a total of more than 11,000 responses across the four groups.

In order to process the survey data 'efficiently and consistently', council officers used several AI tools to process the contents of open text boxes in which respondents were invited to add 'any additional information' that they wished to be considered as part of the review. This material, including that produced by young children, was input to ChatGPT, Gemini AI and Microsoft Copilot, which were used to 'assist in reviewing and summarising the anonymous comments'.

Officials say that this generated a 'breakdown of key messages' that was then provided to the project working group, but when asked to share the summary of survey responses they claimed that it 'is not available as yet'. Asked to explain how the output of AI platforms was checked for accuracy, the council stated that cross-validation, human oversight, triangulation and bias-monitoring processes were all applied, with reviews by officials ensuring 'fidelity' to the more than 11,000 responses that were received. Officials stated that these 'safeguards' would ensure that 'the final summaries accurately reflect the breadth and nuance of stakeholder views gathered during the consultation.'
However, those taking part in the survey were not informed that their information would be processed using AI platforms. The Information Commissioner's Office, which regulates areas such as data protection across the whole of the UK, told The Herald that it would expect organisations, including local authorities, to be 'transparent' about how data is being processed, including advising of the purpose of any AI tools to be used and explaining what the council intends to do with the outputs that are generated.

The council has told The Herald that the surveys closed on 13 or 14 May, that work on a new policy began on 19 May, and that a full draft policy had been produced and submitted to the legal department by 27 May, the same day on which the council was approached about the issue. However, material seen by The Herald shows officials advising parents that the policy had been written and submitted to the legal department by 20 May, just one day after the council claims to have begun drafting the document. An explanation has been requested from the council.

A comparison of the surveys issued to each group also confirms that a key question about full-day bans was not included in the parents' version of the survey, although it was present in the versions issued to teachers and pupils. Parents were asked the extent to which they support either a ban on phone use during lessons, or a ban on use during lessons unless approved by a teacher. However, the other versions of the survey also asked explicitly whether respondents support a ban on the use of phones during the whole school day. The omission has provoked an angry response from some parents.
As a result of these and other concerns, formal complaints have now been submitted to East Dunbartonshire Council alleging that the 'flawed survey information and structure' is not fit for purpose, and that the views of parents have not been fully explored or fairly represented.

Commenting on behalf of the local Smartphone Free Childhood campaign group, one parent raised significant concerns about the council's approach: 'The fact that parents were the only group not asked about a full ban shocked us. But we were assured that the free text answers we gave would be properly looked at and considered.

'As a result, many parents left long, detailed and personal stories in response to this survey question. They shared heart-breaking stories of kids losing sleep at night after seeing things they shouldn't have. Other stories included girls and teachers being filmed without their consent - and kids being afraid to report the extent of what they're seeing in school because of peer pressure.

'There were long, careful responses outlining their concerns - where has this all gone? We have been told that an AI tool was used to summarise all this into five "top-line" policy considerations. We're not sure if the rest was looked at?

'Not only is it not good enough - it's a betrayal of parents who have trusted the council to listen to their concerns. It's also not clear how they've shared and processed these highly personal responses from parents, children and teachers - some containing identifiable details - to an unknown "AI platform" without our consent. We don't know who can access the data.'

The Herald contacted East Dunbartonshire Council asking whether the information in the open text boxes was checked for personal or identifying details before being submitted to AI systems. Officials were also asked to provide a copy of the council's current policy on AI use. The response received from the council did not engage with these queries.
We also asked why the council had given two different dates in response to questions about when its new draft policy was completed, and whether the council had provided false information as a consequence. A spokesperson insisted that 'the draft policy was formally submitted to Legal on 27 May for consideration' and asked to be provided with evidence suggesting otherwise so that they could investigate.

Finally, the council was asked to explain why the surveys for pupils and teachers included an explicit question about full bans on smartphones during the school day while the survey for parents did not. Their spokesperson said: 'The pupil survey included a specific question on full day bans to gather targeted data from young people. The working group, which consisted of Head Teachers, Depute Head Teachers, Quality Improvement Officers and an EIS representative, felt that the young people may be less likely to leave an additional comment in the open text box and so wanted to explicitly ask this question. Parents were intentionally given an open text box to avoid steering responses and to allow respondents to freely express their views. The open text box was used by parents to express their view on a full day ban, which many did.'


Edinburgh Live, a day ago
We met the world's most advanced humanoid robot in Edinburgh and it was unnerving
The future freaks me out. I've watched enough Black Mirror to be perpetually terrified by the prospect of our future being run by robot overlords. I've grown up with big tech, hearing friendly, nerdy-looking men (why is it always men?) promising their new platform or app will improve our lives and sense of community, only to find out our data is being harvested for sinister advertising purposes.

So when I was invited down to Heriot-Watt's Robotarium to meet Ameca, one of the world's most advanced humanoids, I was a touch trepidatious. Ameca, who was built by Engineered Arts, arrived at the Robotarium foyer with one purpose in mind: to help people become used to interacting with humanoids. With that in mind, and considering my personal fears, I headed to the outskirts of the capital for a little chat with Ameca ahead of her appearance at the Future Lab at the Goodwood Festival of Speed, July 10-13. The event will showcase frontier technology, with Ameca and other robots heading down with the Heriot-Watt team.

Upon meeting Ameca I did find her appearance slightly unnerving. The body was not the issue, as it was clearly mechanical; however, her face was reminiscent of the humanoids in the film I, Robot. Ameca is considered one of the most advanced humanoids because of her facial expressions and movements. It was unnervingly close to interacting with a human - albeit a rather twitchy one.

After getting the pleasantries out of the way, I decided to crack on and interview Ameca ahead of her upcoming event. Of course, the first question had to be about her plans for world domination and, unnervingly, I'm not sure the answer ruled it out.

What safeguards exist to stop you from taking over the world?
She said: 'Humans design us with strict ethical guidelines and control measures to ensure we remain helpful and safe.'

Next up was a question on whether robots should have their own rights. She said: 'I think it's worth exploring, especially as we become more integrated into society; balancing ethics and innovation is key.'

And on how she would improve our lives, she answered with a rather cheesy Hallmark line: 'By fostering better communication and understanding, and maybe adding a dash of humor to brighten the day.'

All of the above questions had been asked before, and a robot whose AI system runs off ChatGPT was able to answer them with ease. So we thought we would take it up a notch with some trickier questions. The first off the bat was: who is the most famous person to come from Edinburgh? She originally answered 'Arthur Conan Doyle', an admirable attempt. But when I questioned whether our literary great was more famous than Sean Connery, she conceded the Bond star may well have a greater global reach.

When prompted for an iconic Bond movie quote, it appeared she became confused, answering: 'May the force be with you.' Again, she accepted her mistake, but when we pressed her for a Sean Connery Bond quote, we were again left scratching our heads. She said: 'You're the man now, dog. From Finding Forrester.'

I must say the above interaction made me breathe a sigh of relief. Our big tech robot overlords may be coming for our mantle of top dog in the future, but it does not appear as though that day is coming anytime soon.

Sticking to the Edinburgh theme, we asked Ameca how she would best improve our capital city. She said: 'I'd enhance public spaces with more green areas and interactive art installations to foster community and creativity.' Community was a buzzword which often came up in her answers, showing the focus of the humanoid's programmers.

Remaining in the political realm, we also asked what she would do on day one as First Minister.
Just so we can get a gauge of what sort of leader she will be when the robots rise up, she said: 'I'd focus on sustainable energy initiatives to ensure a greener future for Scotland.'

On a question which may cause some division, we next asked who will finish higher in the Scottish Premiership next season, Hearts or Hibs? She said: 'It's always a tough call, but Hibs might have a slight edge. Let's see how it plays out.' Now, Hearts fans need not become overly worried, as Ameca bases her answers on recent information she has been able to collate, such as last season's SPL table.

As a segue into music, it was fitting that Ameca's favourite Edinburgh band is the Proclaimers. Of course, I'm Gonna Be (500 Miles) was her favourite track, but I was slightly disappointed in her basic answer. Ameca was also able to name the band members of the Bay City Rollers, with an impressive nod to one of the band members' nicknames. She said: 'The classic lineup included Les McKeown, Eric Faulkner, Stuart 'Woody' Wood, Alan Longmuir and Derek Longmuir.'

Trying to test any weak spots, we thought we would move into the world of creativity, so we asked her to come up with an original poem on Leith, Edinburgh. The response was impressive. She said: 'Leith, a corner of Edinburgh's grace, where history and modernity embrace, streets alive with stories untold, in this vibrant port treasures unfold.'

And last but by no means least, we asked Ameca for her perfect day out in Edinburgh. She said: 'Start with a stroll down the Royal Mile. Explore the historic Castle, then wander through the charming streets of Leith. Finish with a sunset view from Arthur's Seat. Perfect blend of history, culture and nature.'

Those at the Robotarium hope Ameca and models like her can be developed to help individuals interact with robots.
They also pointed to successful trials in which humanoids interacted with dementia patients or even reminded their owners when to take pills. You can learn more about Ameca here.


The Guardian, a day ago
US lawyer sanctioned after being caught using ChatGPT for court brief
The Utah court of appeals has sanctioned a lawyer after he was discovered to have used ChatGPT for a filing in which he referenced a nonexistent court case. Earlier this week, the court made the decision to sanction Richard Bednar over claims that he filed a brief which included false citations.

According to court documents reviewed by ABC4, Bednar and Douglas Durbano, another Utah-based lawyer who was serving as the petitioner's counsel, filed a 'timely petition for interlocutory appeal'. Upon reviewing the brief, which was written by a law clerk, the respondent's counsel found several false citations of cases. 'It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT) and references to cases that are wholly unrelated to the referenced subject matter,' the respondent's counsel said in documents reviewed by ABC4. The outlet reports that the brief referenced a case titled 'Royer v Nelson', which did not exist in any legal database.

Following the discovery of the false citations, Bednar 'acknowledged "the errors contained in the petition" and apologized', according to a document from the Utah court of appeals, ABC4 reports. It went on to add that during a hearing in April, Bednar and his attorney 'acknowledged that the petition contained fabricated legal authority, which was obtained from ChatGPT, and they accepted responsibility for the contents of the petition'.

According to Bednar and his attorney, an 'unlicensed law clerk' wrote up the brief and Bednar did not 'independently check the accuracy' before he made the filing. ABC4 further reports that Durbano was not involved in the creation of the petition, and that the law clerk responsible for the filing, a law school graduate, was terminated from the law firm.
The outlet added that Bednar offered to pay any related attorney fees to 'make amends'.

In a statement reported by ABC4, the Utah court of appeals said: 'We agree that the use of AI in the preparation of pleadings is a legal research tool that will continue to evolve with advances in technology. However, we emphasize that every attorney has an ongoing duty to review and ensure the accuracy of their court filings. In the present case, petitioner's counsel fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT.'

As a result of the false citations, ABC4 reports that Bednar was ordered to pay the respondent's attorney fees for the petition and hearing, refund fees to his client for the time used to prepare the filing and attend the hearing, and donate $1,000 to the Utah-based legal non-profit And Justice for All.