
What is AI, how do apps like ChatGPT work and why are there concerns?
What is AI and what is it used for?
AI allows computers to learn and solve problems in ways that can seem human. Computers cannot think, empathise or reason. However, scientists have developed systems that can perform tasks which usually require human intelligence, trying to replicate how people acquire and use knowledge.

AI programmes can process large amounts of data, identify patterns and follow detailed instructions about what to do with that information.
This could be trying to anticipate what product an online shopper might buy, based on previous purchases, in order to recommend items.

The technology is also behind voice-controlled virtual assistants like Apple's Siri and Amazon's Alexa, and is being used to develop systems for self-driving cars.

AI also helps social platforms like Facebook, TikTok and X decide what posts to show users. Streaming services Spotify and Deezer use AI to suggest music.

Scientists are also using AI to help spot cancers, speed up diagnoses and identify new medicines. Computer vision, a form of AI that enables computers to detect objects or people in images, is being used by radiographers to help them review X-ray results.
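To make the online-shopping recommendation mentioned earlier in this section more concrete, here is a minimal, purely illustrative sketch of how a system might suggest items from previous purchases. It uses a simple co-occurrence count rather than any real retailer's algorithm; the product names and the `recommend` function are invented for this example.

```python
from collections import Counter

# Purchase histories the system has already seen (invented example data).
past_baskets = [
    {"running shoes", "water bottle", "socks"},
    {"running shoes", "socks", "headphones"},
    {"water bottle", "yoga mat"},
]

def recommend(current_basket, baskets, top_n=2):
    """Suggest items that most often appear alongside what the shopper
    already has - a crude stand-in for pattern-finding in purchase data."""
    scores = Counter()
    for basket in baskets:
        if basket & current_basket:            # shares at least one item
            scores.update(basket - current_basket)
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"running shoes"}, past_baskets))
# e.g. ['socks', 'water bottle'] - items frequently bought together
```

Real recommendation systems use far larger datasets and statistical models, but the underlying idea is the same: find patterns in past behaviour and use them to predict what comes next.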
What is generative AI, and how do apps like ChatGPT and Meta AI work?
Generative AI is used to create new content which may seem like it has been made by a human. It does this by learning from vast quantities of existing data, such as online text and images.

ChatGPT and Chinese rival DeepSeek's chatbot are popular generative AI tools that can be used to generate text, images, code and other material. Google's Gemini and Meta AI can similarly hold text conversations with users. Some, like Midjourney or Veo 3, are dedicated to creating images or video from simple text prompts.
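As a rough illustration of how these tools turn patterns learned from existing text into new text, the sketch below calls a small, openly available language model. It assumes the Hugging Face `transformers` package (and a backend such as PyTorch) is installed and that the `gpt2` model can be downloaded; it is not how ChatGPT, Gemini or Meta AI are actually built, only a scaled-down stand-in for the same idea.

```python
# pip install transformers torch   (assumed; downloads a small model on first run)
from transformers import pipeline

# A tiny open model stands in for the much larger systems named above.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence is",   # prompt the model continues
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The model predicts likely next words based on the text it was trained on, which is why the output reads fluently but is not guaranteed to be accurate.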
Generative AI can also be used to make high-quality music. Songs mimicking the style or sound of famous musicians have gone viral, sometimes leaving fans confused about their authenticity.
Why is AI controversial?
While acknowledging AI's potential, some experts are worried about the implications of its rapid growth.

The International Monetary Fund (IMF) has warned AI could affect nearly 40% of jobs and worsen financial inequality.

Prof Geoffrey Hinton, a computer scientist regarded as one of the "godfathers" of AI development, has expressed concern that powerful AI systems could even make humans extinct - a fear dismissed by his fellow "AI godfather", Yann LeCun.

Critics also highlight the technology's potential to reproduce biased information, or discriminate against some social groups. This is because much of the data used to train AI comes from public material, including social media posts and comments, which can reflect biases such as sexism or racism.

And while AI programmes are growing more adept, they are still prone to errors. Generative AI systems are known for their tendency to "hallucinate" and assert falsehoods as fact.

Apple halted a new AI feature in January after it incorrectly summarised news app notifications. The BBC complained about the feature after Apple's AI falsely told readers that Luigi Mangione - the man accused of killing UnitedHealthcare CEO Brian Thompson - had shot himself. Google has also faced criticism over inaccurate answers produced by its AI search overviews.

This has added to concerns about the use of AI in schools and workplaces, where it is increasingly used to help summarise texts, write emails or essays and fix bugs in code. There are worries about students using AI technology to "cheat" on assignments, or employees "smuggling" it into work.

Writers, musicians and artists have also pushed back against the technology, accusing AI developers of using their work to train systems without consent or compensation.
Thousands of creators - including Abba singer-songwriter Björn Ulvaeus, writers Ian Rankin and Joanne Harris, and actress Julianne Moore - signed a statement in October 2024 calling AI a "major, unjust threat" to their livelihoods.
How does AI impact the environment?
It is not clear how much energy AI systems use, but some researchers estimate the industry as a whole could soon consume as much as the Netherlands. Creating the powerful computer chips needed to run AI programmes also takes lots of power and water.

Demand for generative AI services has meant an increase in the number of data centres. These huge halls - housing thousands of racks of computer servers - use substantial amounts of energy and require large volumes of water to keep them cool.

Some large tech companies have invested in ways to reduce or reuse the water needed, or have opted for alternative methods such as air-cooling. However, some experts and activists fear that AI will worsen water supply problems.
The BBC was told in February that government plans to make the UK a "world leader" in AI could put already stretched supplies of drinking water under strain. In September 2024, Google said it would reconsider proposals for a data centre in Chile, which has struggled with drought.
Are there laws governing AI?
Some governments have already introduced rules governing how AI operates.

The EU's Artificial Intelligence Act places controls on high-risk systems used in areas such as education, healthcare, law enforcement or elections. It bans some AI uses altogether.

Generative AI developers in China are required to safeguard citizens' data and promote transparency and accuracy of information, but they are also bound by the country's strict censorship laws.

In the UK, Prime Minister Sir Keir Starmer has said the government "will test and understand AI before we regulate it". Both the UK and US have AI Safety Institutes that aim to identify risks and evaluate advanced AI models. In 2024 the two countries signed an agreement to collaborate on developing "robust" AI testing methods. However, in February 2025, neither country signed an international AI declaration which pledged an open, inclusive and sustainable approach to the technology.

Several countries, including the UK, are also clamping down on the use of AI systems to create deepfake nude imagery and child sexual abuse material.
