Infinix 2025: Smarter, sexier & built for Nigerian lifestyles
NOTE 50 PRO – WHEN DAVIDO IS YOUR AI ASSISTANT
The NOTE 50 Pro blends premium performance with real-world personality. The standout feature? The One-Tap AI ∞ Assistant (Folax) with Davido as the official voice. Imagine hearing your favorite Afrobeats star respond when you activate AI features, like summarizing long texts, editing documents, masking sensitive info in images, or translating calls in real time.
And it doesn't stop at smarts. It's a flagship phone with attitude, and yes, it talks back.
HOT 60 PRO+ – THE PHONE THAT SERVES LOOKS AND SMARTS
Gen Z and content creators, this one is for you. The HOT 60 Pro+ is the world's slimmest 3D curved AMOLED phone at just 5.95mm. But its good looks are backed by tech that understands your vibe:
50MP Sony IMX882 camera, VLOG mode, and fast uploads powered by the Helio G200 processor.
AI Photo Tools: Retouch with AI Eraser, extend images beyond the frame, and stylize with Vogue Portraits.
Google Circle to Search lets you search anything on your screen instantly—just draw a circle.
45W Fast Charging, 5160mAh battery, and JBL speakers.
Whether you're shooting TikToks in Lagos or recording podcasts in Port Harcourt, the HOT 60 Pro+ is your style-savvy sidekick.
SMART 10 PLUS – ENTRY-LEVEL, DONE RIGHT
Launched early in the year, the SMART 10 Plus proves that entry-level doesn't mean low performance:
Android 14 Go Edition with clean, fluid interface.
6.6" HD+ screen and 5000mAh battery.
Up to 6GB + 6GB extended RAM and 128GB storage.
AI-enhanced camera with scene detection and beautification.
Bonus: You also get Circle to Search, just like the big boys.
These phones are more than specs: they're smart, stylish, and made to match how you live.

Related Articles


Android Authority
2 minutes ago
YouTube Music just got a new podcast superpower that's been a long time coming
TL;DR: Google Podcasts used to offer the ability to remove gaps of silence from podcasts. When Podcasts was retired, YouTube Music did not offer a similar tool. Work on a 'trim silence' option for YouTube Music was spotted last year, and the feature is finally available now.

Progress is supposed to mean improvement, right? While we're no strangers to seeing Google transform one app into another, merging features in the process, it's extremely frustrating when we lose some functionality along the way. Something along those lines happened when we lost Google Podcasts and users were pushed over to YouTube Music, but we're happy to say the situation has finally righted itself.

We're talking specifically about the ability for podcast apps to automatically remove any lengthy periods of silence. Ideally, producers would edit those out prior to publishing their podcasts, but we have to live with the reality we have, and it ended up being a useful option. Google Podcasts picked up that tool all the way back in 2018, but YouTube Music never had a similar ability.

Last year, though, it started to look like that was changing. Back in March of 2024, evidence spotted in an APK teardown revealed work toward a new 'trim silence' option for YouTube Music. We were hopeful it would go live shortly, but months passed and it never arrived. Here in the summer of 2025, we had nearly forgotten YouTube Music was even working on this tool, but then on the YouTube Reddit sub, user Timely-Junket-2851 shared that they had spotted the option at the bottom of their playback speed controls. Sure enough, we've confirmed it's here on our own devices.

After uninstalling updates and walking things back through recent releases, we can see that Google finally delivered the feature sometime over the course of the past month. Has it been too long coming? Absolutely. Should Google have done a little more work to make sure YouTube Music was just as full-featured as Google Podcasts before shutting that down? You won't find any argument here. But despite a few lingering frustrations, we're mainly just happy to finally see this new ability land.

Business Insider
an hour ago
Claude Code turned a 3-week project into a 2-day task — but nearly broke it
Hugh Williams, former Google and eBay engineering VP, recently spent two days deep in the weeds with Claude Code, Anthropic's AI-powered coding assistant. His verdict? "It's like being a carpenter who suddenly has awesome power tools."

Williams built a fully functional system running on AWS in 48 hours, a task he says would normally take at least three weeks. He didn't just write code; he learned a new graph database (AWS Neptune) on the fly, delivering clean, modular, and scalable output.

Claude Code had problems, too, though. Williams warned that every hour or so, the coding assistant compressed its conversation context, and things broke. "I learnt to wrap up a 'milestone,' update the README, and start a new conversation afresh before compaction occurs," he wrote on LinkedIn. Williams also made the now-classic mistake of asking Claude to "clean up" code directories, and watched it wipe out important features, working code, and data in one stroke. "I learnt the hard way to back up code every hour or so manually, so I could go back to previous versions when things went awry," he explained. Occasionally, Claude Code would add random extra or duplicative features, so he needed to inspect the code holistically every hour or so to make sure his projects were headed in the right direction.

Still, Williams believes Claude Code is a true accelerator for those who already know how to code. For junior devs or non-coders? Not yet. He estimates Claude Code can already take on 50% of the workload for a seasoned engineer, and that might top out at roughly 75%, a productivity jump with serious implications.

Bottom line: For experienced builders, Claude Code is here, and it's powerful. Just don't forget to back up your work!
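The hourly-backup habit Williams describes can be sketched as a tiny helper that commits everything in the working directory as a recovery point. This is our own illustration, not code from Williams: the function name `checkpoint` is hypothetical, and it assumes `git` is installed and the directory is already a git repository.

```python
# Minimal sketch of an hourly "milestone" checkpoint, assuming git is
# available on PATH and repo_dir is an initialized git repository.
import subprocess


def checkpoint(repo_dir: str, message: str) -> None:
    """Stage and commit all changes in repo_dir as a recovery point."""
    # Stage everything, including deletions and new files.
    subprocess.run(["git", "-C", repo_dir, "add", "-A"], check=True)
    # Commit; --allow-empty means the checkpoint succeeds even when
    # nothing changed since the last one.
    subprocess.run(
        ["git", "-C", repo_dir, "commit", "--allow-empty", "-m", message],
        check=True,
    )
```

After each milestone, `git log --oneline` lists the restore points, and `git checkout <hash> -- .` brings back a previous version when an overzealous "clean up" wipes something out.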


Gizmodo
an hour ago
Open-Sourced AI Models May Be More Costly in the Long Run, Study Finds
As more businesses adopt AI, picking which model to go with is a major decision. While open-source models may seem cheaper initially, a new study warns that those savings can evaporate fast due to the extra computing power they require. In fact, open-source AI models burn through significantly more computing resources than their closed-source rivals when performing the same tasks, according to a study published Thursday by Nous Research.

The researchers tested dozens of AI models, including closed systems from Google and OpenAI, as well as open-source models from DeepSeek and Magistral. They measured how much computing effort each required to complete identical tasks across three categories: simple knowledge questions, math problems, and logic puzzles. To do this, they used the number of tokens each model consumed to solve and answer questions as a proxy for the computing resources required.

'Open-weight models use 1.5–4× more tokens than closed ones—and up to 10× for simple knowledge questions—making them sometimes more expensive per query despite lower per-token costs,' the study authors wrote.

In AI, a token is a piece of text or data (a word, part of a word, or even punctuation) that models use to understand language. Models process and generate text one token at a time, so the more tokens they use, the more computing power and time a task requires. Since most closed-source models don't reveal their raw reasoning process or chain of thought (CoT), the researchers measured computing efficiency by counting the tokens each model used instead. Because models are billed on the total output tokens used in their reasoning process and in the final answer, completion tokens serve as a proxy for the effort needed to produce a response. This is an important consideration for companies using AI, for several reasons.
'First, while hosting open weight models may be cheaper, this cost advantage could be easily offset if they require more tokens to reason about a given problem,' the researchers wrote. 'Second, an increased number of tokens will lead to longer generation times and increased latency.'

The study found that open models consistently use more tokens than closed models for the same tasks, sometimes three times as many for simple knowledge questions. The gap narrowed to less than twofold for math and logic problems. 'Closed models (OpenAI, Grok-4) optimize for fewer tokens to cut costs, while open models (DeepSeek, Qwen) use more tokens, possibly for better reasoning,' the study authors wrote.

Among open models, llama-3.3-nemotron-super-49b-v1 was the most efficient, while Magistral models were the most inefficient. OpenAI's models were standouts as well: both its o4‑mini and the new open-weight gpt‑oss models showed impressive token efficiency, especially on math problems. The researchers noted that OpenAI's gpt‑oss models, with their concise chains of thought, could serve as a benchmark for improving token efficiency in other open models.
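The per-query arithmetic behind the researchers' first point can be made concrete. The prices and token counts below are hypothetical, chosen only to illustrate the mechanism (they are not figures from the Nous Research study): a model with cheaper tokens can still cost more per query if it emits several times as many completion tokens for the same task.

```python
# Toy cost comparison: billing is per completion token, so per-query cost
# is tokens x price. All numbers here are made up for illustration.

def per_query_cost(completion_tokens: int, price_per_million: float) -> float:
    """Dollar cost of one response when billed per completion token."""
    return completion_tokens / 1_000_000 * price_per_million

# Closed model: pricier tokens, terse answer.
closed = per_query_cost(500, 10.0)       # $0.005
# Open model: tokens cost less, but it uses 4x as many
# (the study's upper bound for reasoning-style tasks).
open_weight = per_query_cost(2000, 3.0)  # $0.006

# Cheaper per token, yet more expensive per query.
assert open_weight > closed
```

The same multiplier also explains the researchers' second point: four times the output tokens means roughly four times the generation time, since tokens are produced one at a time.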