
Amazon launches a cheaper color Kindle and its first model for kids
The Kids version includes a year of Amazon Kids Plus, a colorful case, and parental controls.
Both devices use the same color screen as last year's model, which we didn't love.
Amazon is still betting on color E Ink, even if it had a somewhat rocky start last year. The company just added two more devices to its Kindle Colorsoft lineup: a cheaper 16GB version and a new Kindle Colorsoft Kids. Both are available starting today.
As outlined in Amazon's press release, the 16GB Colorsoft costs $249.99, shaving $30 off the Signature Edition from last year by dropping the wireless charging and cutting the storage in half. Otherwise, it's the same device, complete with the same color screen, adjustable warm light, and ability to highlight text in different colors.
The new Kids version is the first color Kindle aimed at younger readers. It's priced at $269.99 and includes a year of Amazon Kids Plus, a colorful protective case, and Amazon's usual suite of parental controls. Amazon says kids who read comics and graphic novels on Kindle spend nearly 50% more time reading than average, so this move makes sense.
Both models use the same Colorsoft display that debuted last year. At the time, Amazon's first color Kindle was met with some skepticism from us. We found the display resolution, which drops to 150ppi in color mode, frustrating, as was the lack of any real note-taking features. Some buyers also reported screen tint issues that led to delays and a promised software fix.
Amazon hasn't changed the hardware here, but it's clearly hoping these new additions will broaden the appeal, either to those who didn't want to spend $280 for the Signature Edition or to families looking for a kid-friendly option.
Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info; it's your choice.

Related Articles


Forbes
2 minutes ago
Your Face Could Cost You The Job: The Dangerous Rise Of Facial Recognition At Work
The Covid-19 pandemic ushered in a new era of remote work. Many employers, desperate to track and monitor employees working away from the office, implemented various technology tools to surveil them. According to one 2025 report, more than half of Fortune 100 employees were required to return to the office full time. With many back-to-office mandates in place, remnants of surveillance culture have remained, and many companies are using facial recognition software to manage employees. A recent survey by ExpressVPN indicated that 74% of U.S. employers are using online monitoring and surveillance tools, with 67% using biometric tracking like facial recognition and fingerprint scans.

Employers use facial recognition software in a number of different ways: to track employee attendance, to identify employees, to interview and screen job candidates, to reduce the number of employee touchpoints, and to track employees (this is common for delivery and gig workers). What are the vulnerabilities and limitations of using facial recognition software in the workplace, and how does it reinforce biases?

There have been several cases where facial recognition software has caused harm and reinforced biases in the workplace. In 2025, a complaint was filed with the Colorado Civil Rights Division and the Equal Employment Opportunity Commission (EEOC) against the software company Intuit and the human resources assessment software vendor HireVue. The complaint alleges that the AI used by HireVue resulted in an Indigenous and Deaf woman being denied a promotion based on her race and disability. In a separate case, a makeup artist at a leading brand claimed to have been fired in 2020 because of a video interview through HireVue, where the facial recognition software marked her poorly for her body language during the interview. And in 2024, an Uber Eats driver won a case in which he alleged that the company fired him because of racist facial recognition software.
The former employee claimed that he was fired after the company's verification checks, which use facial recognition software, failed to recognize his face. Scholar and writer Dr. Joy Buolamwini has focused much of her research on the flaws of facial recognition technology, discussing in her book Unmasking AI as well as the documentary Coded Bias how the technology is less accurate at identifying darker skin tones. There is a wealth of evidence that facial recognition technology disproportionately impacts marginalized communities. The technology frequently misidentifies Black people, leading to wrongful arrests. One 2025 study indicated that facial recognition tools had higher error rates for adults with Down syndrome. Researchers also note that facial recognition tools are less accurate for transgender individuals and struggle to identify non-binary people.

Integrating facial recognition tools into the workplace can have deleterious effects on employees. A 2023 assessment of feedback shared with the White House Office of Science and Technology Policy indicated that digital surveillance in the workplace creates a sense of distrust among employees, making them feel constantly monitored and leading to a decline in productivity and morale. Workers also noted that digital surveillance could deter unionizing efforts in the workplace. There were also employee concerns about data privacy and how the data collected would be used.

Employers should think twice about implementing facial recognition software in the workplace. Not only is this type of technology prone to bias, but it can also erode employee trust and morale. Organizations that already have this type of technology in place should request more information from the vendor about audits and what accountability measures exist to ensure accuracy and mitigate bias.
Employees should know their rights, and there must be transparency around how data is collected, stored, and used. We must deeply consider the future we are creating when our face holds the key to whether we are praised or punished.


CBS News
2 minutes ago
Law enforcement department in Denver metro area begins using AI to assist with police reports
Police in Wheat Ridge are getting a technology upgrade. As of Aug. 1, the department has begun using artificial intelligence to help officers write their reports. "It was hand notes. You could go through a notepad in two shifts because you're writing everything down," said Sgt. Jamie Watson.

The department's AI technology works directly with body-worn cameras. "I don't have to sit there and go back in my memory and go now 'What did she say about this?' It's right there -- it's transcribed for you," Watson said. To demonstrate, Watson simulated a traffic stop: "Speed limit there is 40 (mph). And I hate to tell you, you were doing 52," she said during the demonstration. Once the body camera is activated, the system begins running and listening to the interaction in real time. Within seconds, a report is generated for the officer to review. "It says, 'On August 5, 2025, at approximately 2:29 p.m., I conducted a traffic stop in the parking lot of the Wheat Ridge Police Department.' Now, I never said that -- it just picks up your location based on where you are," Watson explained.

Wheat Ridge police spokesperson Alex Rose said the program costs about $100,000 a year and is an addition to the department's existing contract with Axon for body cameras. "In a net-net, you ask our officers, it's saving about 40% to 60% of their time behind a computer writing reports," Rose said. That, he says, means officers can now spend more time out in the community.

The department's new AI tool joins a growing list of technologies used by law enforcement agencies, including body-worn cameras and automatic license plate readers. Anaya Robertson, policy director at the ACLU of Colorado, said the use of AI in police reporting raises legal and ethical questions. "When we're talking about incident reports and case notes -- those are often admissible in court," Robertson said. "There's a general bias that AI doesn't make mistakes, which is problematic when you think about something being used as evidence." She emphasized the need for transparency: "(It's) needed so the community knows their law enforcement agency wants to utilize this kind of technology," she said.

In Wheat Ridge, the AI-generated report is only considered a first draft. Officers are prompted to include additional observations and personal reflections the AI may not capture, and built-in safeguards require officers to review each report carefully: errors are intentionally placed in the drafts and must be corrected before submission. Watson, who participated in the testing phase, said the technology allows her to spend less time behind a desk and more time in the field. "It is an incredible benefit to the officer," she said. According to Rose, the department consulted with the district attorney's office, which approved the use of AI-generated reports. Each report includes a disclaimer noting the use of artificial intelligence in its creation.


Bloomberg
2 minutes ago
Will Both Parties Support Trump's Domestic Investment?
Jim Messina, The Messina Group Founder & CEO and Democratic strategist, and Sarah Chamberlain, Republican Main Street Partnership President & CEO, discuss President Trump hosting Apple CEO Tim Cook in the Oval Office to speak on Apple's domestic investment, and whether Democrats and Republicans will see eye to eye on the program. They also discuss the President's claim that there will be a 100% tariff on chips and semiconductors, which they characterize as more of a negotiation tactic. Both speak with Joe Mathieu on the late edition of Bloomberg's "Balance of Power." (Source: Bloomberg)