
Estonian President Alar Karis has a plan to bring AI to schools
Estonia is no stranger to high-tech infrastructure. For more than two decades, the country has been digitising all of its services and is ahead of many countries when it comes to cybersecurity. But Karis said that 'in a way,' AI is a new frontier.
'We try to take advantage of this technology and we start from schools, and not only from schools but from teachers, because education is of utmost importance,' he said, adding that teachers will pass on their knowledge to their students.
AI could help teachers plan their lessons and give children more personalised feedback, Karis said. But the technology, which is developing rapidly amid fierce competition among AI companies, also poses many ethical questions.
'The whole school system is probably going to be upside down in the coming years. But it's in a very early stage. And how and where it develops, it's very difficult to say,' he said.
'With AI, it's not the problem with technology itself, but just the speed [of development] and then people get very anxious because of the speed rather than technology,' he added.
Trust in AI
Concerns about AI include the technology making things up, a phenomenon known as hallucination. Studies have also shown that relying on chatbots can affect people's critical thinking skills.
'It's the same with every technology. If you don't read books anymore, you start using only this chatbot, of course, you are getting dumber, but that's why we have to teach [people] how to use it smartly,' said Karis.
'And students, modern students, already know how to use ChatGPT, not only to copy-paste, but really use it'.
For example, AI could be used to help students catch up after they have missed a few days at school, because teachers 'do not have the time' to do so, Karis said.
But one major issue that teachers have reported is students using AI to write their essays or do their homework for them, which can be difficult for teachers to identify.
'We are dealing with this problem already,' Karis said.
'The teachers and professors should be honest if they have been using it [AI]. So it's a matter of trust'.
Though Karis mentioned OpenAI's ChatGPT, the Estonian government has said it is considering working with several tech companies.
The programme, called AI Leap, is a public-private partnership. Negotiations are underway with the US AI companies OpenAI and Anthropic, the country's education minister announced in February.
Karis added that as Estonia is a small country, it cannot build its own AI systems and is instead 'taking advantage' of what is already developed. However, he noted the importance of these AI chatbots being available in the Estonian language.
'Being a small country with a small language means we have to keep our language going. That means that we need to develop ourselves, these language skills for AI,' he said.
'Otherwise, young people, they switch to English and we lose a lot, and then people start already thinking in a foreign language'.
The AI Leap programme will begin in September and will initially include 20,000 high schoolers and 3,000 teachers, the education ministry said.
Estonia then hopes to expand the programme to vocational schools, adding a further 38,000 students and 3,000 teachers from September 2026.
The hybrid war
AI will soon be as central to Estonia's school curricula as cybersecurity is today, Karis said.
Cybersecurity has been a focus since a 2007 cyberattack on the country that lasted weeks and took out Estonian banks, government bodies, and the media.
Exactly who was behind the attacks is unknown. They came from Russian IP addresses, but the Russian government has always denied any involvement.
Karis said that Estonia, which borders Russia, is not immune from having a war on its doorstep.
'The whole of Europe is next door to Russia so we're not in any way exceptional, but this so-called hybrid war is going on already… and of course AI can be one of the tools' used in modern warfare.
'We have to be aware and to make sure that we develop also critical thinking, and that's why we start with schools and teachers,' he said.
Despite some of the fears and unknowns surrounding AI, Karis, who was a molecular geneticist and developmental biologist before entering politics, is more excited about the technology's potential uses than worried about its risks.
'I'm not scared of anything [in AI] to be honest, it's a new technology and being a former scientist, for me, it's always very interesting to use new technologies and to build something. So everything is exciting which is new, and you shouldn't be scared of the unknown,' he said.
'Of course, there is also a limit for the technology. … There are also worries, and rules and regulations and all these acts will help to keep things under control,' he said.

Linda Leopold exits H&M Group after seven years leading its AI strategy, including its Responsible AI program. She now focuses on consulting, writing, and speaking on the ethical implications of AI in tech, fashion, and beyond. H&M Group, the Swedish fashion giant known for its global retail footprint and tech-forward initiatives, has announced the departure of Linda Leopold, who served as Head of AI Strategy. After seven years in strategic leadership roles, Leopold is stepping down to focus on consulting, writing, and speaking engagements centered on artificial intelligence and its ethical development across industries. Leopold joined H&M Group in 2018 and held several key roles within the company's growing AI division. As Head of AI Policy, she played a critical role in launching and expanding the brand's Responsible AI program. Under her guidance, H&M Group established frameworks for digital ethics and adopted strategic approaches to implementing generative AI technologies. 'These years were extraordinary—not only because I had the opportunity to help shape H&M's AI direction, but also because I witnessed AI evolve at lightning speed,' Leopold wrote on LinkedIn. 'I'm particularly proud of building the Responsible AI program from the ground up and contributing to the global conversation on ethical AI.' Her leadership earned international recognition. In 2022, Forbes named her one of the world's nine most influential women in AI. Before her time at H&M Group, Leopold worked as an innovation strategist bridging fashion and technology and also served as editor-in-chief of the Scandinavian fashion and culture magazine Bon. 'Now it's time for the next chapter,' she added. 'With AI at such a pivotal point, I want to help guide its development across different industries and organizations.' Leopold's exit comes as H&M Group continues its push into digital innovation. Earlier this month, the brand launched a new denim capsule collection powered by digital twin technology —part of a larger strategy to integrate generative AI into storytelling and customer engagement. According to Chief Creative Officer Jörgen Andersson, the goal is to create emotional connections with consumers without diluting brand identity. The first drop debuted on July 2 via H&M's global online store, with more launches planned this fall. While investing in new technologies, H&M Group also faces mounting economic pressures. The company reported a 5% year-over-year decline in net sales for the second quarter, falling to SEK 56.7 billion. However, operating profit rose slightly to SEK 5.9 billion—beating analyst forecasts. The group also improved inventory management, though deeper price cuts are expected in the third quarter as customers become more cautious with spending. 'We're seeing greater price sensitivity among customers due to ongoing uncertainty,' Group CEO Daniel Erver said during the latest earnings call.