AI's turbulent history, from the calculator to ChatGPT

Le Monde, 13-07-2025
On an internet forum, a slightly confused student asked a not-so-silly question: "Did the definition of AI [artificial intelligence] change?" Another replied, sounding disillusioned: "I just don't really regard this as an academic term [that I'd use] anymore (…) it may not be clear to a person reading it."
The use of the term "artificial intelligence" has indeed never been fully consistent. Experts generally agree on a preliminary definition: AI is a machine capable of carrying out tasks previously thought to require human intelligence. "But what is human intelligence?" asked Anne Laurent, a computer science researcher and director of the Inria branch at the University of Montpellier. "Philosophers, engineers and cognitive scientists do not agree. Its definition changes over time."
If we look at the broad history of AI, it may have been in the 19th century that the first human intelligence skill fell, conquered by calculating machines capable of multiplying and dividing. But a narrower view often prevails, placing the birth of AI in the 1940s with the emergence of the computer.
1940s: The foundation
One of the very first electronic computers, the Colossus, was used by the British military to break the encrypted communications of the German high command. It was far more powerful than the electromechanical machines that came before it. The machines that followed would be easier to program and capable of running complex and varied algorithms, finally making it possible to experiment with theories about "thinking" machines, which began to circulate at the time. As early as 1943, Walter Pitts and Warren McCulloch imagined the concept of artificial neurons, a simplified model of how human neurons function.
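The 1943 McCulloch-Pitts model treats a neuron as a simple threshold unit over binary inputs: it fires when enough excitatory inputs are active, unless an inhibitory input vetoes it. A minimal sketch, written in modern Python purely for illustration (the function name and interface are our own, not historical notation):

```python
def mcculloch_pitts_neuron(excitatory, inhibitory, threshold):
    """Return 1 if the neuron fires, else 0.

    excitatory and inhibitory are lists of binary inputs (0 or 1).
    In the original 1943 model, a single active inhibitory input
    vetoes firing entirely; otherwise the neuron fires when the
    sum of excitatory inputs reaches the threshold.
    """
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With the right threshold, such a unit computes logical functions:
# AND gate: both excitatory inputs must be active (threshold 2).
assert mcculloch_pitts_neuron([1, 1], [], 2) == 1
assert mcculloch_pitts_neuron([1, 0], [], 2) == 0
# OR gate: one active input suffices (threshold 1).
assert mcculloch_pitts_neuron([0, 1], [], 1) == 1
```

Pitts and McCulloch's insight was that networks of such units could, in principle, compute any logical function, which is why this simplified abstraction of the biological neuron became a foundation for later neural network research.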

Related Articles

YouTube misinformation tools are inconsistent in Europe: report says

Euronews, 07-08-2025

Two of Google's major tools to fight misinformation on YouTube only work sometimes, and only in certain European languages, a new study has found.

When users watch a YouTube video on a topic that is often embroiled in misinformation, such as COVID-19 vaccines or climate change, a label should pop up to direct them to third-party sources with more information on the issue, for example Wikipedia. The information panel is also meant to appear in users' search results in their operating language. However, AI Forensics, a Europe-based nonprofit that investigates major tech companies, found that this is often not the case.

The researchers built a web crawler to test the panels and found that, across the 83 languages on the video platform, the information panels are applied inconsistently to 12 topics on Wikipedia's list of conspiracy theories as well as four types of publisher labels. All of the panels are available in English. In German, videos on all but one topic – the Armenian genocide – have these labels. Meanwhile, very few panels are available in Icelandic or Lithuanian. AI Forensics said the tests show YouTube pays 'disproportionate attention towards Western languages' and 'neglects' regional languages like Basque, Catalan, and Galician.

YouTube's content disclaimers go beyond misinformation-prone topics. They also include labels for news sources and fact-checkers that explain whether they are partially or fully funded by government sources, as a way to 'help you better understand the sources of news content,' the company says. But public funding was flagged in some countries and not others, the AI Forensics report found. Euronews, for example, receives some funding from the European Union. YouTube flagged this in some countries, but not in Denmark, Greece, Norway, Iceland, Finland, or Portugal.
'We are scared … there might be some group of people that have differential access to safety' measures to fight misinformation on the platform, Salvatore Romano, head of research at AI Forensics, told Euronews Next. 'Instead of increasing [the] trust of users on the platform, it is undermining [it],' Romano added.

Unclear whether YouTube will address discrepancies

YouTube's website notes that 'information panels may not be available in all countries/regions and languages,' and that the company is working to expand their reach. The AI Forensics team said they have met with YouTube to present their research. During those meetings, YouTube allegedly indicated that the discrepancies between languages were not supposed to happen, and that it didn't have a way to systematically check how the panels are deployed across Europe. AI Forensics said YouTube agreed to address these discrepancies but did not go into detail. YouTube did not respond to a Euronews Next request for clarity on the alleged meetings or the measures it plans to take.

YouTube, under parent company Google, is a signatory to the European Commission's Code of Practice on Disinformation, a voluntary agreement that sets standards for tech companies to fight disinformation. In March, YouTube said that as part of its commitment, it will 'assess and update the topics prone to misinformation that receive additional context from information panels'. But Romano believes the haphazard content labeling could be even more extensive than the AI Forensics researchers found, given the report says it shows only a 'partial view' with a small sample of panels.

Commission could put 'pressure on YouTube' over inconsistencies: AI Forensics

Romano said he's hoping YouTube will correct the discrepancies. If it doesn't, he said he will call on the European Commission, the EU's executive body, to investigate a potential breach of the Digital Services Act (DSA).
The Commission could, for example, request more information from YouTube about how it administers its panels, which is the first step of any DSA investigation, he said. 'The EU Commission could put pressure on YouTube to show them that this type of inconsistency is unacceptable across platforms and across products,' he said, noting that it creates 'second-rate markets' throughout Europe where standards meant to protect against misinformation are not being met. Euronews Next reached out to the Commission to confirm whether it would be asking for more information from YouTube but did not receive an immediate reply.

Romano, at least, will be watching. He said his team built a tool to track how regularly YouTube's information labels come up in various languages. Through the publicly available tracker, people can see which topics have content labels in their language. It will also allow researchers to identify any changes to the information panels, Romano said.

A UK politician turned himself into an AI chatbot. Here's how it works

Euronews, 06-08-2025

Imagine being able to corral your local politician and grill them about their views, anytime you fancy – no appointments, no red tape. That's exactly what one British Member of Parliament has set out to achieve. Mark Sewards, a Labour Party politician who represents Leeds South West and Morley, has unveiled an artificial intelligence (AI) clone of himself that is designed to answer questions and assist with constituents' queries around the clock.

'I'm very keen to explore how new technologies, such as AI, can help strengthen the connection between an MP's office and the constituents we serve,' Sewards said in a statement. He encouraged his constituents to 'give AI Mark a try' – but noted in an Instagram post that the tool isn't designed to replace his official duties. Rather, the idea is to provide a 24/7 service where constituents can raise local issues, ask about policy matters, or drop a message to their MP's office without waiting weeks for a formal reply. The bot 'simply gives people another option to contact me, any time of day,' Sewards wrote.

How a tech firm cloned Sewards

The Leeds-based startup Neural Voice developed Sewards' AI twin, dubbing him the 'UK's first virtual MP'. The company says it uses neural voice synthesis and natural language processing to create voice AI assistants for phone systems and websites. It used voice recordings of Sewards in parliamentary sessions, social media profiles, and previous correspondence with constituents to create a digital double that mirrors his speaking style, tone, and mannerisms with uncanny accuracy, the company said. It hopes the new technology will make local politicians more accessible to their constituents.

'For too long, politicians have been out of reach for their constituents,' said Jeremy Smith, Neural Voice's CEO and co-founder. 'The vast majority of people do not know the name of their MP, let alone their voice and what they stand for,' Smith added.
He believes this technology could soon become par for the course, with other MPs and even businesses potentially following suit in a bid to bridge the communication gap.

Digital doubles in politics and business

AI is increasingly being deployed in public-facing roles to overcome physical barriers and create constant feedback loops. Last year, the mayor of the Japanese city of Yokosuka created an AI avatar of himself to improve communication with the thousands of English-speaking US Naval personnel based there. Meanwhile, a political outfit in Denmark called the Synthetic Party created an AI candidate called 'Leader Lars' to stand in the country's 2022 elections, though the party did not win votes. The AI model, which the party says was fed with material from fringe political parties in Denmark dating back to 1970, was designed to engage with citizens who typically do not vote and to demonstrate AI's role in democracy. Companies in Poland and China have also experimented with having AI CEOs front their brands.

Yet such tools can be repurposed to impersonate public figures without their approval, or to generate convincing fake personas that appear to be real public figures. High-profile deepfake scams have targeted Italian defense minister Guido Crosetto, US Secretary of State Marco Rubio, and several celebrities, including Taylor Swift and Joe Rogan, whose voices were used to promote a scam that promised people government funds. Deepfakes were created every five minutes in 2024, according to a report from the Entrust Cybersecurity Institute.

Whether a digital doppelgänger can truly replace an in-person chat with a public figure remains to be seen, but back in the United Kingdom, Sewards believes it is at least one promising option. 'The AI revolution is happening and we must embrace it or be left behind,' he said on the social media platform X.

German government urged to start proper supervision of AI

Euronews, 06-08-2025

German consumer groups and regulators have called upon the government to formally appoint a national authority to begin oversight of artificial intelligence providers. Germany missed the EU deadline of 2 August to notify the European Commission which market surveillance authorities it has appointed to oversee business compliance with the AI Act. Once appointed, these local regulators will keep an eye on local providers of AI systems, ensuring they follow the Act.

The Hamburg data protection commissioner, Thomas Fuchs, called on the federal government to quickly designate the AI market surveillance authorities – which in some areas also include the data protection supervisory authorities. 'Due to the delay, companies and authorities are now missing their binding contact person for questions about the AI regulation. This is also a disadvantage for Germany as a location for AI innovation,' Fuchs said.

These concerns were echoed by Lina Ehrig, head of digital at the Federation of German Consumer Organisations (VZBV). Without supervision, companies could use AI to manipulate consumers or exploit individual weaknesses, for example via real-time voice analysis in call centres, VZBV warned. 'There needs to be a supervisory authority that keeps an eye on this and acts against violations. That hasn't happened so far,' said Ehrig.

According to a Commission official, some of the 27 EU member states have sent notifications about the appointments – which are now under consideration – but it seems that most member states have missed the deadline. Euronews reported in May that, with just three months to go until the early August deadline, it remained unclear in at least half of the member states which authority would be nominated. Despite the lack of national designation, the Hamburg data watchdog said it has started building capability and training personnel for the complex testing of AI systems, so as to be ready for the moment of legal designation.
The regulator earlier this year asked Meta questions about its AI tools. The AI Act entered into force in August 2024, but its provisions apply gradually. This month, national authorities need to be appointed, and rules on general-purpose AI providers – such as ChatGPT, Claude AI and Gemini – start to apply.
