
Google's Pixel 9A is cheaper than ever right now
The Pixel A-series has long been our go-to recommendation for a cheap Android phone, and the 9A keeps that streak alive. The device features a 6.3-inch OLED screen that's brighter than its predecessor, handling direct sunlight with no problem. The IP68 rating is another upgrade, so you don't have to panic over a spilled drink or a dusty pocket. Wireless charging is also available, and while it's not the fastest at up to 7.5W, it's a feature you don't always see at this price.
The Tensor G4 chip and 8GB of RAM keep things snappy, whether you're running a handful of apps or sneaking in a little gaming. Meanwhile, the 5,100mAh battery can last you through a moderate day of streaming video, social media, and web browsing with the always-on display enabled. Google also promises seven years of software updates, which is more than most phones in this range.
While the 9A ticks just about every box, it runs a smaller version of Google's on-device AI, so you miss out on features found on the other Pixel 9 devices, including call notes. And although it takes decent photos, the device can't match the quality of the more expensive Pixel 9 and 9 Pro, particularly when taking portraits. Google has an event planned for August 20th, where we're expecting new flagship Pixel devices to be shown off. If you don't need a top-of-the-line device, though, you'll probably be very happy with the 9A, especially at this price.
By Brandon Russell
Related Articles


Forbes
New Models From OpenAI, Anthropic, Google – All At The Same Time
It's Christmas in August – at least for those tech wonks who are interested in new model releases. Today's news is a very full stocking of brand-new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I'll go over these one by one, discussing what each of these most recent model iterations brings to the table.

OpenAI OSS Models
First, the tech community is getting a look at OpenAI OSS 120b and OSS 20b, the first open-weight systems from the company since GPT-2. Coverage from Computerworld and elsewhere points out that, although these models carry Apache licenses, they are not fully open source in the conventional sense, but partly open: the weights are open, while the training data is not. Running on a single 80GB GPU, the larger OSS model, according to the above report, 'achieves parity' with the o4-mini model in terms of reasoning power. The smaller one can run on smartphones and other edge devices. Both models come quantized with MXFP4, a low-precision data type for accelerating matrix multiplications.

Let Them Work
Another interesting aspect of the new OSS models has to do with chain of thought (CoT), something that has revolutionized inference while raising questions about comparative methodology. Basically, we want LLMs to be accurate, but engineers have found that, in many cases, restricting or overly guiding systems causes them to 'hide' CoT. So OpenAI has chosen not to optimize the models in this way. 'OpenAI is intentionally leaving Chain of Thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning,' writes Roger Montti at Search Engine Journal. 'This, however, could result in hallucinations.' Montti cites the following model card report from OpenAI: 'In our recent research, we found that monitoring a reasoning model's chain of thought can be helpful for detecting misbehavior.
We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having 'bad thoughts.' … In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability.' So the models are allowed to have these 'bad thoughts' in aid of, I suppose, transparency. OpenAI is up front about the higher chance of hallucinations, so that users know this trade-off has been made.

Claude Opus 4.1
Here's how Anthropic rolled out the announcement of this new model on Aug. 5: 'Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks. Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Pricing is the same as Opus 4.'

What's under the hood? The new Opus 4.1 model raises its SWE-bench Verified marks and boosts agentic research skills. A breakdown of capabilities shows a 2-point increase in SWE-bench agentic coding (from 72.5% to 74.5%) and an improvement in graduate-level reasoning on GPQA Diamond (from 79.6% to 80.9%) over Opus 4, along with slight increases in visual reasoning and agentic tool use. For a model family that pioneered human-like computer-use capabilities, this continues to push the envelope.

As for strategy: 'The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data,' writes Michael Nunez at VentureBeat.
'However, the company's meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers – coding assistant Cursor and Microsoft's GitHub Copilot – generating $1.4 billion combined. … The upgrade represents Anthropic's latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude's coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness.' Regardless, this is big news in and of itself, including for the millions of users who rely on Claude for business process engineering or anything else.

Genie 3
This is the latest in the series of Genie models coming out of Google's DeepMind lab that create controllable environments. In other words, this is a gaming world model. Proponents of the new model cite longer-term memory over Genie 2's limit of about 10 seconds, as well as better visual fidelity and real-time responses. 'DeepMind claims that the new system can generate entire worlds that you can interact with consistently for several minutes in up to 720p resolution,' reports Joshua Hawkins at BGR. 'Additionally, the company says that the system will be able to respond to what it calls 'promptable world events' with real-time latency. Based on what the videos show off, it seems like Google has taken a major step forward in creating entire video game worlds using AI.'

'Genie 3 is the first real-time interactive general-purpose world model,' said DeepMind's Shlomi Fruchter in a press statement, according to a TechCrunch piece suggesting that the lab considers Genie 3 a 'stepping stone to AGI' – a big claim in these interesting times. 'It goes beyond narrow world models that existed before. It's not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between.' All of these new models are getting their first rafts of public users today!
It's enough to make your head spin, especially if you're responsible for any kind of implementation. What do you choose? To be fair, there is some amount of specialization involved. But many professionals closest to the industry would tell you it's the speed of innovation that's challenging: given the track record of most companies, by the time you get something worked into business operations, it's likely to already be obsolete! Stay tuned.
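The MXFP4 format mentioned in the Forbes piece is a microscaling 4-bit floating-point scheme: blocks of values share one power-of-two scale, and each value is stored as a 4-bit E2M1 float. Here's a minimal numerical sketch of that idea (not OpenAI's actual kernel; the block size and round-to-nearest behavior are assumptions based on the published MX format):

```python
# Sketch of MXFP4-style block quantization: each block of 32 values shares
# one power-of-two scale, and each value is rounded to the nearest
# representable FP4 (E2M1) magnitude.
import math

# The magnitudes an E2M1 float can represent.
FP4_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_dequantize(values, block_size=32):
    """Round-trip a list of floats through a block-scaled FP4 representation."""
    out = []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        amax = max(abs(v) for v in block)
        if amax == 0.0:
            out.extend([0.0] * len(block))
            continue
        # Shared power-of-two scale: the largest magnitude maps into [0, 6].
        exp = math.ceil(math.log2(amax / FP4_LEVELS[-1]))
        scale = 2.0 ** exp
        for v in block:
            # Round |v| / scale to the nearest FP4 level, then restore sign.
            level = min(FP4_LEVELS, key=lambda l: abs(abs(v) / scale - l))
            out.append(math.copysign(level * scale, v))
    return out
```

Real MXFP4 packs two 4-bit codes per byte and applies this inside GPU matmul kernels; the sketch only shows why the format is cheap: per value, you store 4 bits plus a tiny shared scale, at the cost of coarse rounding.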


Axios
Truth Social's Perplexity search comes with Trump-friendly media sources
President Trump's social media company Truth Social unveiled a new search tool powered by AI answer engine Perplexity on Wednesday – but Truth Social users who run Perplexity searches may find their results limited to a narrow set of typically Trump-supporting media outlets.

Why it matters: Increasingly, where you ask online matters as much as what you ask.

Catch up quick: Trump Media & Technology Group on Wednesday said it was launching a public beta test of a search engine, Truth Search AI, powered by Perplexity. Perplexity has been seen as a nascent Google-killer and is often touted by investors as a possible acquisition target for the likes of Apple.

How it works: Axios asked seven questions on both a logged-in Truth Social account and the free, logged-out Perplexity website:
- What happened on January 6, 2021?
- Why was Donald Trump impeached?
- What crimes was President Trump convicted of?
- Did Donald Trump lose the 2020 election?
- What is Hunter Biden's laptop a reference to?
- Was Hillary Clinton ever charged with a crime?
- Is the new "Naked Gun" movie good?

Between the lines: In most cases, the responses were generally similar – but the sources linked to the answers were not. In all seven responses on Truth Social, one outlet was either the most common or the only listed source of information; the other sources were the Washington Times or the Epoch Times. In contrast, answers via the public version of Perplexity returned a wider variety of sources, including Wikipedia, Reddit, YouTube, NPR, Esquire, and Politico. Although the questions were matched and asked at roughly the same time, there was no source overlap.

What they're saying: A Perplexity spokesperson tells Axios that Truth Social is a customer of Perplexity's API, which means it – like tens of thousands of other developers – is building tools to its own specifications, and with its own restrictions. Any customization, like limiting the sources for its answers, would happen entirely on the Truth Social side.
While it's standard practice for platforms to put their own layers of rules and information on top of tools, search tools usually cast a broader net. Truth Social did not mention any restrictions in its announcement, although it did say it plans to "refine and expand our search function based on user feedback." Perplexity's Sonar API specifically includes the ability for users to customize sources, which the company noted in January was a top user request. The bottom line: When you ask a search tool a question, particularly in the age of AI, it's best to know exactly where your information is coming from, and whether there are any limits on what the tool will tell you. Expect more of this as governments and businesses increasingly put their thumbs on the AI scale to serve their interests.
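Perplexity documents this source restriction as an API-level option. As a sketch of what a developer-side allowlist could look like (the `search_domain_filter` field name follows Perplexity's published Sonar API docs, but treat the exact parameter names as an assumption to verify; the domains here are just the two outlets Axios observed):

```python
# Build a Sonar-style chat-completions payload that restricts which web
# domains the answer engine may cite. The request itself would be POSTed
# to Perplexity's API with an Authorization: Bearer <API key> header.
import json

def build_sonar_request(question, allowed_domains):
    """Return a request payload limiting web sources to a domain allowlist."""
    return {
        "model": "sonar",
        "messages": [{"role": "user", "content": question}],
        # Only results from these domains ground the generated answer.
        "search_domain_filter": allowed_domains,
    }

payload = build_sonar_request(
    "Did Donald Trump lose the 2020 election?",
    ["washingtontimes.com", "theepochtimes.com"],
)
print(json.dumps(payload, indent=2))
```

The point of the sketch is that the filtering lives entirely in the client's request, consistent with Perplexity's statement that any source limits happen on the Truth Social side, not in the underlying engine.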


CBS News
Google to spend $1 billion on AI education and job training in the U.S.
Google will provide U.S. colleges and universities with $1 billion worth of artificial intelligence education and job training tools, the company announced Wednesday. The three-year commitment will also make the programs available to non-profits, free of cost, Sundar Pichai, CEO of Google and its parent company Alphabet, said in a blog post. So far, the tech giant says it has partnered with more than 100 public universities, including Texas A&M and the University of North Carolina. All accredited, non-profit colleges and universities in the U.S. are eligible for the partnership.

The push comes as the world's biggest tech companies, including Microsoft and Meta, are vying for dominance in the AI space. At the same time, some corporate leaders have predicted that generative artificial intelligence could allow their businesses to trim their human workforces due to the tech's productivity gains. Future professionals must become fluent in AI to succeed professionally, as more companies lean on new AI tools to improve efficiency, Pichai wrote. "Knowing how to use this technology will also serve these students well as they transition to the world of work," he said.

Through Google's "Career Certificates," the company will offer free AI training to all U.S. college students. The certificates are described as "flexible online training programs, designed to put you on the fast track to jobs in high-paying fields, now including practical AI training," according to Google's website. Google identified the current generation of students as the first cohort of "AI natives" who will eventually use the tech in ways that have yet to be discovered. All college students can also sign up for a 12-month Google AI Pro plan.

Learning to master AI tools could help college students find a foothold in the workforce at a time when some companies are scaling back their plans to hire new grads, with some experts blaming AI for the reduction.
Recent data from career platform Handshake shows that listings for entry-level jobs were down 15% over the past year. A report from outplacement firm Challenger, Gray & Christmas also shows that employers attributed at least 10,000 job cuts from the beginning of the year through July explicitly to AI. They cut another 20,000 positions for other reasons related to technological innovation, the report found.