Fairphone 6 lands a perfect 10 for repairability

Yahoo · a day ago
Engadget has been testing and reviewing consumer tech since 2004. Our stories may include affiliate links; if you buy something through a link, we may earn a commission. Read more about how we evaluate products.
Dutch company Fairphone continues to lead the charge on consumer- and planet-friendly electronics, proving that a great phone doesn't have to be impossible to repair or environmentally unsustainable. The Fairphone 6 has just been released, coming two years after the last generation of the phone built to last. The folks over at iFixit wasted no time in conducting a teardown of this new entry to see how it stacks up against previous generations. The Fairphone 6 scored a perfect 10 out of 10, like every generation of Fairphone bar the very first.
Fairphones are modular and designed with easy repair in mind, but one change from previous generations makes things slightly harder. Whereas the last few Fairphones used hard-shell batteries that could be popped out with a fingernail, the new handset packs a soft-pouch cell that's thinner than its predecessors. That slims the phone down, but it means the battery is now held in place with five screws.
It's the same story with every other component on the handset: screws rather than glue hold things in place. The lack of adhesives does account for the phone's IP55 rating, which is lower than much of the industry, but you can't exactly get Samsung to send you a video guide showing how to open your phone with nothing more than a T5 Torx screwdriver, so there's give and take.
iFixit is quick to point out that the Fairphone 6 isn't a bleeding-edge smartphone, nor is it intended to be. It's designed from the ground up to be as sustainable and repair-friendly as possible, which means some trade-offs compared with flagship devices. The Fairphone 6's USB 2.0 port, less pixel-dense screen and 8GB of RAM are all deliberate choices made with a longer life cycle in mind. For most consumers, though, these almost certainly won't affect day-to-day use, and owning a device you can truly repair yourself just might be worth it.

Related Articles

This week in EV tech: The shape of efficiency

Digital Trends · 2 hours ago

The Nissan Leaf helped kick off the modern EV age, but Nissan squandered that lead. It's now looking to make up for lost time with the first redesign of the Leaf in nearly a decade. As Giovanny Arroba, VP of Nissan Design Europe and head of the EV's design team, explained in an interview with Digital Trends, the 2026 Nissan Leaf goes back to this model's roots with an emphasis on compactness and affordability. "It's obviously a car that we want to be attainable to a mass volume," Arroba said. That meant not only building the new Leaf down to a certain price point, but maintaining enough range to make it usable.

As with all EVs, aerodynamics was key. The 2026 Leaf's 0.26 drag coefficient is a significant improvement over the outgoing Leaf's 0.29, helping it achieve what Nissan claims will be up to 303 miles of range with a 75-kilowatt-hour battery pack. That's a 42% range increase with just a 25% increase in battery capacity (a quick check of what those figures imply appears below, after the design details).

Concordes and Coke bottles

Shaping the Leaf to be both aerodynamically efficient and aesthetically pleasing was as much about what designers didn't do as what they did, Arroba explained. "A shape you would think is the most aerodynamic sometimes may cause drag," he said. Or it may not look right. A low, sleek nose "like a Concorde" would cut through the air the cleanest, Arroba acknowledged, but designers went with a broader, grinning front end to give the Leaf a more recognizable "face." The more upright front end also helps sell the new Leaf's market positioning as a crossover, rather than the less-popular hatchback body style of the old car. Moving along the car, the goal was "less Coke bottle on the body side," Arroba said, referring to the sucked-in middle sections that, seen from above, create an appearance similar to the iconic beverage receptacle. "The more shape you have on the body side, the more drag it can create."

Big gains from small details

Big styling changes emphasize that Nissan is (finally) turning over a new Leaf, but smaller details are just as important in reducing drag and creating a distinctive design, Arroba said. "The angle of the surface trailing each wheel is super important to maintain attached airflow. In all these subtle angle movements, and where the sharp line cuts off the airflow at the rear of the car, all of these things play a big role." He added that flush-fit components help as well; Nissan is following numerous other automakers in adopting slick, but potentially finicky, flush door handles. Also crucial is a dramatic upswing of the rear fenders, culminating in a rear spoiler mounted at a 45-degree angle. Wind-tunnel tests and mathematical calculations showed that angle to be surprisingly important for aerodynamic efficiency, Arroba said.

It also gave designers an opportunity to make a statement. Many EVs, including Nissan's own Ariya, have a horizontal light bar running across the trunk or tailgate. Feeling that this trend is played out, Arroba and his design team went with a mostly black surface and individual taillights consisting of two horizontal rectangles and three vertical rectangles on each side. This "2-3" arrangement references Nissan's name, as "2-3" is pronounced "ni-san" in Japanese.
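As a quick sanity check on the range and battery figures quoted above, the sketch below works out what the claims imply. It is a rough, back-of-the-envelope calculation: the outgoing Leaf's numbers are inferred from the stated 42% and 25% increases rather than quoted by Nissan.

```python
# Rough check of the quoted 2026 Leaf figures.
# Assumption: the outgoing model's range and battery size are inferred
# from the stated 42% and 25% increases, not taken from Nissan directly.
new_range_mi = 303          # claimed maximum range of the 2026 Leaf
new_battery_kwh = 75        # claimed battery capacity

old_range_mi = new_range_mi / 1.42        # ~213 miles implied for the outgoing car
old_battery_kwh = new_battery_kwh / 1.25  # 60 kWh implied

print(f"Implied outgoing range:   {old_range_mi:.0f} miles")
print(f"Implied outgoing battery: {old_battery_kwh:.0f} kWh")
print(f"Efficiency: {old_range_mi / old_battery_kwh:.2f} -> "
      f"{new_range_mi / new_battery_kwh:.2f} miles per kWh")
```

In other words, if the claimed figures hold, most of the range gain comes from efficiency (roughly 3.6 to about 4.0 miles per kWh) rather than from the modestly larger pack.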
This week's other top EV news

After pausing production at its Austin, Texas, plant amid growing inventories of unsold vehicles, Tesla is touting what it claims is the first autonomous delivery of a new car to a customer. On June 27, a Tesla Model Y drove itself from the Austin factory to a customer's driveway with no one onboard, the company claims. This feat was performed using the same tech as Tesla's recently launched robotaxi service, which operates with human safety drivers onboard and is being investigated for potential traffic-rule violations.

A next-generation Mercedes-Benz electric van took a somewhat longer journey as part of testing ahead of the van line's 2026 debut. Mercedes claims a prototype was driven from Stuttgart, Germany, to Rome, a distance of about 677 miles, with just two 15-minute charging stops. The vehicle used was a prototype of the VLE luxury passenger van, but cargo vans and other workaday models are planned. At least some of these will reach the U.S., potentially including an even more luxurious version called the VLS, previewed by the Vision V concept revealed earlier this year.

One new EV that likely won't be coming to the U.S. is the Xpeng G7, a crossover SUV close in size to the Tesla Model Y that will sell for around $10,000 less than the Tesla in its home market of China. Xpeng boasts of an expansive augmented-reality head-up display, semi-autonomous driving capability (through a planned over-the-air update) and suspension that can read the road surface and automatically adjust.

Google faces EU antitrust complaint over AI Overviews

Yahoo · 3 hours ago

A group known as the Independent Publishers Alliance has filed an antitrust complaint with the European Commission over Google's AI Overviews, according to Reuters. The complaint accuses Google of "misusing web content for Google's AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss." It also says that unless they're willing to disappear from Google search results entirely, publishers "do not have the option to opt out" of their material being used in AI summaries.

It's been a little over a year since Google began adding AI-generated summaries at the top of some web search results, and despite some early answers that were spectacularly off-base, the feature continues to expand, to the point where it's reportedly causing major traffic declines for news publishers.

Google told Reuters that "new AI experiences in Search enable people to ask even more questions, which creates new opportunities for content and businesses to be discovered." The company also argued that claims about web traffic are often based on incomplete data, and that "sites can gain and lose traffic for a variety of reasons."

AI chatbots oversimplify scientific studies and gloss over critical details — the newest models are especially guilty

Yahoo · 3 hours ago

When you buy through links on our articles, Future and its syndication partners may earn a commission.

Large language models (LLMs) are becoming less "intelligent" in each new version as they oversimplify and, in some cases, misrepresent important scientific and medical findings, a new study has found. Scientists discovered that versions of ChatGPT, Llama and DeepSeek were five times more likely to oversimplify scientific findings than human experts in an analysis of 4,900 summaries of research papers. When given a prompt for accuracy, chatbots were twice as likely to overgeneralize findings than when prompted for a simple summary. The testing also revealed an increase in overgeneralizations among newer chatbot versions compared to previous generations. The researchers published their findings April 30 in the journal Royal Society Open Science.

"I think one of the biggest challenges is that generalization can seem benign, or even helpful, until you realize it's changed the meaning of the original research," study author Uwe Peters, a postdoctoral researcher at the University of Bonn in Germany, wrote in an email to Live Science. "What we add here is a systematic method for detecting when models generalize beyond what's warranted in the original text."

It's like a photocopier with a broken lens that makes each subsequent copy bigger and bolder than the original. LLMs filter information through a series of computational layers. Along the way, some information can be lost or change meaning in subtle ways. This is especially true with scientific studies, since scientists must frequently include qualifications, context and limitations in their research results. Providing a simple yet accurate summary of findings becomes quite difficult.

"Earlier LLMs were more likely to avoid answering difficult questions, whereas newer, larger, and more instructible models, instead of refusing to answer, often produced misleadingly authoritative yet flawed responses," the researchers wrote.

In one example from the study, DeepSeek produced a medical recommendation in one summary by changing the phrase "was safe and could be performed successfully" to "is a safe and effective treatment option." Another test showed Llama broadened the scope of effectiveness for a drug treating type 2 diabetes in young people by eliminating information about the dosage, frequency and effects of the medication. If published, this chatbot-generated summary could cause medical professionals to prescribe drugs outside of their effective parameters.

In the new study, researchers worked to answer three questions about 10 of the most popular LLMs (four versions of ChatGPT, three versions of Claude, two versions of Llama and one of DeepSeek). They wanted to see whether, when presented with a human summary of an academic journal article and prompted to summarize it, the LLM would overgeneralize the summary and, if so, whether asking for a more accurate answer would yield a better result. The team also aimed to find out whether the LLMs would overgeneralize more than humans do.

The findings revealed that, with the exception of Claude, which performed well on all testing criteria, LLMs given a prompt for accuracy were twice as likely to produce overgeneralized results. LLM summaries were nearly five times more likely than human-generated summaries to render generalized conclusions.
The researchers also noted that turning quantified data into generic statements was the most common type of overgeneralization, and the one most likely to produce unsafe treatment recommendations. These overgeneralizations can introduce bias, according to experts at the intersection of AI and healthcare.

"This study highlights that biases can also take more subtle forms — like the quiet inflation of a claim's scope," Max Rollwage, vice president of AI and research at Limbic, a clinical mental health AI technology company, told Live Science in an email. "In domains like medicine, LLM summarization is already a routine part of workflows. That makes it even more important to examine how these systems perform and whether their outputs can be trusted to represent the original evidence faithfully." Such discoveries should prompt developers to create workflow guardrails that identify oversimplifications and omissions of critical information before putting findings into the hands of public or professional groups, Rollwage said.

While comprehensive, the study had limitations; future studies would benefit from extending the testing to other scientific tasks and non-English texts, as well as from testing which types of scientific claims are more subject to overgeneralization, said Patricia Thaine, co-founder and CEO of Private AI, an AI development company. Rollwage also noted that "a deeper prompt engineering analysis might have improved or clarified results," while Peters sees larger risks on the horizon as our dependence on chatbots grows. "Tools like ChatGPT, Claude and DeepSeek are increasingly part of how people understand scientific findings," he wrote. "As their usage continues to grow, this poses a real risk of large-scale misinterpretation of science at a moment when public trust and scientific literacy are already under pressure."

For other experts in the field, the core problem is applying general-purpose models to specialized domains without the necessary expertise and protections. "Models are trained on simplified science journalism rather than, or in addition to, primary sources, inheriting those oversimplifications," Thaine wrote to Live Science. "But, importantly, we're applying general-purpose models to specialized domains without appropriate expert oversight, which is a fundamental misuse of the technology which often requires more task-specific training."

In December 2024, Future Publishing agreed a deal with OpenAI in which the AI company would bring content from Future's 200-plus media brands to OpenAI's users.
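The kind of scope inflation described above, where a hedged, past-tense finding such as "was safe and could be performed successfully" becomes an unqualified "is a safe and effective treatment option," can in principle be screened for automatically. Below is a minimal, hypothetical sketch of such a check; it is not the detection method the study used, and the patterns are illustrative only.

```python
import re

# Hypothetical screen for scope inflation in a model-written summary.
# NOT the study's method; the patterns below are illustrative only.
HEDGED = re.compile(r"\b(was|were|could|might|may|in this (trial|study|sample))\b", re.IGNORECASE)
GENERIC_CLAIM = re.compile(r"\b(is|are)\s+(a\s+)?(safe|effective|recommended)\b", re.IGNORECASE)

def inflates_scope(source: str, summary: str) -> bool:
    """Flag summaries that state a generic claim the hedged source did not make."""
    return (
        bool(HEDGED.search(source))
        and not GENERIC_CLAIM.search(source)
        and bool(GENERIC_CLAIM.search(summary))
    )

source = "The procedure was safe and could be performed successfully in this trial."
summary = "The procedure is a safe and effective treatment option."
print(inflates_scope(source, summary))  # True: the summary broadens the claim
```

A production guardrail of the sort Rollwage describes would need far richer linguistic and domain knowledge, but even a crude filter like this illustrates how the tense and hedging shifts the study measured can be surfaced.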
