
OnePlus 13 owners get a major new AI tool for free
OnePlus has started rolling out its Plus Mind AI tool to OnePlus 13 and 13R handsets around the world, giving users intelligently suggested actions while also storing on-screen information for you to search through at a later date.
Plus Mind actually debuted on the OnePlus 13s, but that handset is only available in India; the tool has since arrived on the OnePlus Nord 5 in Europe.
With the rollout to the 13 and 13R, the feature is now available to those in more countries – including the US – although triggering Plus Mind on these handsets is a little different to the first two.
The 13s and Nord 5 each have a dedicated 'Plus Key' on the left edge, allowing you to instantly trigger Plus Mind, but this hardware button doesn't feature on the older 13 and 13R. Instead, you'll need to swipe up on the screen with three fingers to trigger the AI tool.
What can Plus Mind do?
Think of Plus Mind as a Google Lens-style feature that understands the context of what's on screen; everything you deem important and want quick access to in the near future is stored in a dedicated memory-box app called Mind Space.
Quick snaps of a poster, web pages, social media posts, message conversations, or virtually anything appearing on the screen can be analyzed by Plus Mind and stored in Mind Space.
A three-finger swipe up the screen adds whatever's on screen to your digital memory box – but that's only part of what Plus Mind can do.
If, for example, you're reading a message stream with friends which includes details of an upcoming event, trigger Plus Mind and the AI tool will recognize the event information and suggest a calendar entry. A simple tap to confirm the action, and Plus Mind will add an entry to the default calendar app for you.
It can also translate text in Instagram posts, apply smart tags to articles you want to read later, and recognize products and places directly from the camera viewfinder.
As everything analyzed by Plus Mind is stored in Mind Space, you can return to previous memories by searching for them in the app or via the handset's native AI search bar.
I've experienced Plus Mind and Mind Space for short bursts with the OnePlus 13s and Nord 5, and while the foundations are there for a useful feature, it does feel a little limited at times.
It does the basics well, from calendar entries to object recognition in images, but it doesn't yet show a deeper level of understanding. Given time, that will likely change.
Just the start
OnePlus is adamant that Plus Mind and Mind Space are not the end game for its AI ecosystem; they're merely the first stage of a three-stage strategy to offer customers a truly personalized AI.
It's calling stage 2 'Your Secondary Mind', where it'll integrate Mind Space with LLMs (large language models), allowing Plus Mind to understand all your content and everything about you to create a dedicated AI persona.
Then, with stage 3, we'll get 'Your Personal Assistant', where Plus Mind will evolve the persona into an assistant that proactively offers recommendations based on your activity (e.g. suggesting a taxi so you're not late for a flight due to traffic).
We currently don't know when these stages will be rolled out, so for now you'll have to play around with Plus Mind.
If your OnePlus 13 or 13R has yet to receive the update, fear not: the rollout will take a couple of weeks to reach every handset.

Related Articles
Yahoo
32 minutes ago
Nvidia Stock Is at a Peak - What's the Best Play Here for NVDA?
Three weeks ago, we recommended Nvidia Inc. (NVDA) stock in a June 22 Barchart article, shorting out-of-the-money puts. Now, NVDA is near its target prices, and the short play has been successful. What is the best play here?

NVDA is at $171.30, up over 4.5% today. Trump OK'd an export license to sell its powerful H20 AI chips to China after Nvidia's CEO, Jensen Huang, met with President Trump. The Wall Street Journal said the H20 has been a top seller for Nvidia in China and was specially designed for the Chinese market.

My prior price target was $178 per share, using an estimated 55% forward free cash flow (FCF) margin (i.e., FCF/sales) and a 2.85% FCF yield valuation metric (i.e., a 35x FCF multiple). Last quarter, Nvidia made a 59% FCF margin. So, if this continues over the coming year, NVDA stock could have further to go. Moreover, analysts have lifted their price targets. Let's look at this.

In Q1, ending April 27, Nvidia generated $26.1 billion in FCF on $44.06 billion in sales. That represents a 59.2% FCF margin. Over the trailing twelve months, according to Stock Analysis, it generated $72.06 billion in FCF on $148.5 billion in sales, a 48.5% FCF margin. So it seems reasonable to assume Nvidia could make at least a 57% FCF margin going forward. Here's how that would work out. Analysts expect sales to rise to between $199.89 billion this fiscal year, ending January 2026, and $251.2 billion next year. That puts it on a next-12-months (NTM) run rate of $225.5 billion.
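The margin and run-rate figures above reduce to a few lines of arithmetic. Here is a back-of-the-envelope sketch using only the article's own numbers, not official filings:

```python
# Back-of-the-envelope check of the article's FCF margin figures (USD).
q1_fcf, q1_sales = 26.1e9, 44.06e9
q1_margin = q1_fcf / q1_sales          # Q1 FCF margin, roughly 59.2%

ttm_fcf, ttm_sales = 72.06e9, 148.5e9
ttm_margin = ttm_fcf / ttm_sales       # trailing FCF margin, roughly 48.5%

# Midpoint of this year's and next year's analyst sales estimates:
ntm_sales = (199.89e9 + 251.2e9) / 2   # NTM run rate, roughly $225.5B

print(f"Q1 FCF margin:  {q1_margin:.1%}")
print(f"TTM FCF margin: {ttm_margin:.1%}")
print(f"NTM sales run rate: ${ntm_sales / 1e9:.1f}B")
```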
Moreover, now that Nvidia can sell to China again, let's assume this pushes sales at least 5% higher, to $236.8 billion:

$236.8 billion NTM sales x 57% FCF margin = $135 billion FCF

Just to be conservative, let's use a 55% margin on the lower NTM sales estimate:

$225.5 billion x 55% margin = $124 billion FCF

So, our estimate is that FCF over the next 12 months could range between $124 billion and $135 billion, or about $130 billion on average. Therefore, using a 30x FCF multiple (i.e., the same as dividing by a 3.33% FCF yield):

$230b x 30 = $6,900 billion market cap (i.e., $6.9 trillion)

That is 65% over today's market cap of $4.178 trillion, according to Yahoo! Finance (i.e., at $171.35 per share). In other words, NVDA stock could be worth 65% more:

$171.35 x 1.65 = $282.73 price target

That is what might happen over the next 12 months if analysts' revenue targets are hit and the FCF margin averages 56%.

Analysts have closer price targets. The average of 66 analysts surveyed by Yahoo! Finance is $173.92 – although that is higher than three weeks ago, when I reported the average was $172.60. Moreover, a service that tracks recent analyst recommendations now reports that 39 analysts have a $200.71 average price target, up from $179.87 three weeks ago.

One way to play this is to sell short out-of-the-money puts. That way, an investor can set a lower buy-in price and still get paid extra income. In my last Barchart article on June 22 ('Make Over a 2.4% One-Month Yield Shorting Nvidia Out-of-the-Money Puts'), I suggested shorting the $137 strike put expiring July 25. The yield was 2.48% over the next 34 days (i.e., $3.40/$137.00). Today, that contract is almost worthless, trading for just 8 cents. In other words, the short seller of these puts has made almost all the money (i.e., the stock has risen, making the short-put play successful).
The investor's account has little chance of being assigned to buy 100 shares per put contract at $137.00 on or before July 25. It makes sense to roll this over with a 'Buy to Close' order and a new 'Sell to Open' trade at a later expiry and a higher strike price.

For example, in the Aug. 29 expiry period – 45 days to expiry (DTE), which is after the expected Aug. 27 Q2 earnings release – the $155.00 strike put has a midpoint premium of $3.93. So the short-put yield is:

$393/$15,500 = 0.02535 = 2.535% over 45 days

That works out to an annualized expected return (ER) of +20.28% (i.e., 2.535% x 8). So even if NVDA stock stays flat, the investor stands to make good money shorting these puts every 45 days (assuming the same yield recurs). The risk seems low, given that the delta ratio is just 23%.

But given how volatile NVDA has been, and that the stock is at a peak, it might make sense to use some of the income received to buy puts at lower strike prices. Keep in mind that the breakeven point – the price where an unrealized loss begins – is $151.07:

$155.00 - $3.93 = $151.07

That is 11.8% below today's price. But it is not uncommon for a stock like NVDA to fall 20% from its peak, which would put it at $137.00. So using some of the income to buy long puts at $140 or $145 is not unreasonable. That would cost between $144 and $204 ($174 on average) against the $15,500 investment (net of the $393 already received):

$393 income - $174 long hedge = $219, or $219 / $15,500 invested in the short-put play = 1.41%

New breakeven: $15,500 - $174 = $15,326, or $153.26 per put contract

This means that if the stock falls to the hedge level, the position would be worth between $14,250 and $15,326 – at worst a loss of $1,076 on the $15,326 net investment, or about -7%. But keep in mind that this is only an unrealized loss; by buying long puts with the income received, the investor would be protected from a much larger downside.
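The short-put arithmetic above can be reproduced in a few lines. This is a sketch of the article's quoted figures for the Aug. 29 $155 put; live premiums will differ:

```python
# Sketch of the cash-secured short-put math from the article's quotes.
strike, premium, dte = 155.00, 3.93, 45
collateral = strike * 100                  # cash secured per contract: $15,500

put_yield = premium * 100 / collateral     # about 2.535% over 45 days
annualized = put_yield * (360 / dte)       # about 20.28% (8 periods per year)
breakeven = strike - premium               # $151.07 per share

# Hedge: spend the article's average $174 long-put cost out of the premium.
hedge_cost = 174.0
net_income = premium * 100 - hedge_cost    # $219 per contract
net_yield = net_income / collateral        # about 1.41% over 45 days
```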
And, after all, the price target is substantially higher, so the investor might be willing to hold on, or even sell out-of-the-money call options to recoup some of the unrealized loss. The bottom line here is that NVDA has room to move higher, and shorting OTM puts with a lower-strike long-put hedge is one good way to play it. On the date of publication, Mark R. Hake, CFA did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article are solely for informational purposes.


Forbes
37 minutes ago
McDonald's AI Breach Reveals The Dark Side Of Automated Recruitment
Millions of McDonald's job applicants had their personal data exposed after basic security failures left the company's AI hiring system wide open. If you've ever wondered what could go wrong with an AI-powered hiring system, McDonald's just served up a cautionary tale. This week, security researchers revealed that the company's McHire website – a recruitment platform used by over 90% of McDonald's franchisees – left the personal information of millions of job applicants exposed to anyone with a browser and a little curiosity.

The culprit: Olivia, an AI chatbot designed to handle job applications, collect personal information, and even conduct personality tests. On paper, it's a vision of modern efficiency. In reality, the system was wide open due to security flaws so basic they'd be comical if the consequences weren't so serious.

What Went Wrong?

It didn't take a sophisticated hacker to find the holes. Researchers Ian Carroll and Sam Curry started investigating after Reddit users complained that Olivia gave nonsensical responses during the application process. After failing to find more complex vulnerabilities, the pair simply tried logging into the site's backend using '123456' for both the username and password. In less than half an hour, they had access to nearly every applicant's personal data – names, email addresses, phone numbers, and complete chat histories – with no multifactor authentication required.

Worse still, the researchers discovered that anyone could access records just by tweaking the ID numbers in the URL, exposing over 64 million unique applicant profiles. One compromised account had not even been used since 2019, yet remained active and linked to live data. As Carroll told Wired, 'I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more.'
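The URL-tampering flaw the researchers describe is a textbook insecure direct object reference (IDOR): records keyed by sequential IDs, returned without checking who is asking. A minimal illustration with hypothetical record data (not McHire's actual code):

```python
# Hypothetical applicant store keyed by sequential IDs, as in an IDOR flaw.
RECORDS = {
    64000001: {"owner": "alice", "chat": "application transcript ..."},
    64000002: {"owner": "bob", "chat": "application transcript ..."},
}

def fetch_vulnerable(record_id):
    """Returns any record: incrementing the ID in the URL is enough."""
    return RECORDS.get(record_id)

def fetch_checked(record_id, requester):
    """Authorizes first: only the record's owner may read it."""
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requester:
        return None  # a real web handler would respond 403/404 here
    return record

# 'bob' tampering with alice's ID succeeds against the unchecked lookup,
# but is refused by the ownership check.
```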
Why Security Fundamentals Still Matter

Experts agree that the real shock isn't the technology itself – it's the lack of security basics that made the breach possible. As Aditi Gupta of Black Duck noted, the McDonald's incident was less a case of advanced hacking and more a 'series of critical failures', ranging from unchanged default credentials and inactive accounts left open for years to missing access controls and weak monitoring. The result: an old admin account that hadn't been touched since 2019 was all it took to unlock a massive trove of personal data.

For many in the industry, this raises bigger questions. Randolph Barr, CISO at Cequence Security, points out that the use of weak, guessable credentials like '123456' in a live production system is not just a technical slip – it signals deeper problems with security culture and governance. When basic measures like credential management, access controls, and multi-factor authentication are missing, the entire security posture comes into question. If a security professional can spot these flaws in minutes, Barr says, 'bad actors absolutely will – and they'll be encouraged to dig deeper for other easy wins.'

And this isn't just about AI or McDonald's. Security missteps of this kind tend to follow each new 'game-changing' technology. As PointGuard AI's William Leichter observes, organizations often rush to deploy the latest tools, driven by hype and immediate gains, while seasoned security professionals get sidelined. It happened with cloud, and now, he says, 'it's AI's turn: tools are being rolled out hastily, with immature controls and sloppy practices.'

Automation and the Illusion of Security

McDonald's isn't alone in betting big on AI to speed up hiring and make life easier for franchisees and HR teams. Automated chatbots like Olivia are supposed to streamline applications, assess candidates, and remove human bottlenecks.
But as this incident shows, convenience can't come at the expense of basic digital hygiene. Simple safeguards – unique credentials, robust authentication, and proper access controls – were missing entirely. The rush to digitize and automate HR brings with it a false sense of security. When sensitive data is managed by machines, it's easy to assume the system is secure. But technology is only as strong as the practices behind it.

Lessons for the Future

If there's a lesson here, it's that technology should never substitute for common sense. Automated hiring systems, especially those powered by AI, are only as secure as their most basic controls. The ease with which researchers accessed the McHire backend shows that old problems – default passwords, missing MFA – are still some of the biggest threats, even in the age of chatbots. Companies embracing automation need to build security into the foundations, not add it as an afterthought. And applicants should remember that behind every 'friendly' AI bot is a company making choices about how to protect – or neglect – their privacy.

The Price of Convenience

The McDonald's McHire data leak is a warning to every company automating hiring, and to every job seeker trusting a bot with their future. Technology can streamline the process, but it should never circumvent or subvert security. The real world isn't as neat as a chatbot's conversation tree. If we aren't careful, the push for convenience will keep putting real people at risk.


Time Magazine
an hour ago
AI Promised Faster Coding. This Study Disagrees
Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. We're publishing installments both as stories and as emails. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know: Could coding with AI slow you down?

In just the last couple of years, AI has totally transformed the world of software engineering. Writing your own code (from scratch, at least) has become quaint. Now, with tools like Cursor and Copilot, human developers can marshal AI to write code for them. The human role is now to understand what to ask the models for the best results, and to iron out the inevitable problems that crop up along the way. Conventional wisdom states that this has accelerated software engineering significantly. But has it? A new study by METR, published last week, set out to measure the degree to which AI speeds up the work of experienced software developers. The results were very unexpected.

What the study found – METR measured the speed of 16 developers working on complex software projects, both with and without AI assistance. After finishing their tasks, the developers estimated that access to AI had accelerated their work by 20% on average. In fact, the measurements showed that AI had slowed them down by about 20%. The results were roundly met with surprise in the AI community. 'I was pretty skeptical that this study was worth running, because I thought that obviously we would see significant speedup,' wrote David Rein, a staffer at METR, in a post on X.

Why did this happen? – The simple technical answer seems to be: while today's LLMs are good at coding, they're often not good enough to intuit exactly what a developer wants and answer perfectly in one shot. That means they can require a lot of back and forth, which might take longer than if you just wrote the code yourself. But participants in the study offered several more human hypotheses, too.
'LLMs are a big dopamine shortcut button that may one-shot your problem,' wrote Quentin Anthony, one of the 16 coders who participated in the experiment. 'Do you keep pressing the button that has a 1% chance of fixing everything? It's a lot more enjoyable than the grueling alternative.' (It's also easy to get sucked into scrolling social media while you wait for your LLM to generate an answer, he added.)

What it means for AI – The study's authors urged readers not to generalize too broadly from the results. For one, the study only measures the impact of LLMs on experienced coders, not new ones, who might benefit more from their help. And developers are still learning how to get the most out of LLMs, which are relatively new tools with strange idiosyncrasies. Other METR research, they noted, shows the duration of software tasks that AI is able to do doubling every seven months – meaning that even if today's AI is detrimental to one's productivity, tomorrow's might not be.

Who to Know: Jensen Huang, CEO of Nvidia

Huang finds himself in the news today after he proclaimed on CNN that the U.S. government doesn't 'have to worry' about the possibility of the Chinese military using the market-leading AI chips that his company, Nvidia, produces. 'They simply can't rely on it,' he said. 'It could be, of course, limited at any time.'

Chipping away – Huang was arguing against policies that have seen the U.S. heavily restrict the export of graphics processing units, or GPUs, to China in a bid to hamstring Beijing's military capabilities and AI progress. Nvidia claims that these policies have simply incentivized China to build its own rival chip supply chain, while hurting U.S. companies and, by extension, the U.S. economy.

Self-serving argument – Huang, of course, would say that, as CEO of a company that has lost out on billions as a result of being blocked from selling its most advanced chips to the Chinese market.
He has been attempting to convince President Donald Trump of his viewpoints, including in a recent meeting at the White House, Bloomberg reported.

In fact… The Chinese military does use Nvidia chips, according to research by Georgetown's Center for Security and Emerging Technology, which analyzed 66,000 military purchasing records to come to that conclusion. A large black market has also sprung up to smuggle Nvidia chips into China since the export controls came into place, the New York Times reported last year.

AI in Action

Anthropic's AI assistant, Claude, is transforming the way the company's scientists keep up with the thousands of pages of scientific literature published every day in their field. Instead of reading papers, many Anthropic researchers now simply upload them into Claude and chat with the assistant to distill the main findings. 'I've changed my habits of how I read papers,' Jan Leike, a senior alignment researcher at Anthropic, told TIME earlier this year. 'Where now, usually I just put them into Claude, and ask: can you explain?' To be clear, Leike adds, sometimes Claude gets important stuff wrong. 'But also, if I just skim-read the paper, I'm also gonna get important stuff wrong sometimes,' Leike says. 'I think the bigger effect here is, it allows me to read much more papers than I did before.' That, he says, is having a positive impact on his productivity. 'A lot of time when you're reading papers is just about figuring out whether the paper is relevant to what you're trying to do at all,' he says. 'And that part is so fast [now], you can just focus on the papers that actually matter.'

What We're Reading

Microsoft and OpenAI's AGI Fight Is Bigger Than a Contract – By Steven Levy in Wired

Steven Levy goes deep on the 'AGI' clause in the contract between OpenAI and Microsoft, which could decide the fate of their multi-billion-dollar partnership. It's worth reading to better understand how both sides are thinking about defining AGI.
They could do worse than Levy's own description: 'a technology that makes Sauron's Ring of Power look like a dime-store plastic doodad.'