
Adding calendar events with a screenshot is AI at its finest
I've been testing this feature out for the past few weeks in the developer beta, and I'm pleased to report that it works, easily making it my favorite Apple Intelligence feature to date. That's admittedly a low bar to clear — and it's not quite as capable as Android's version — but boy is it a nice change of pace to use an AI feature on a phone that feels like it's actually saving me time.
Maybe adding things to your calendar doesn't sound all that exciting, but I am a person who is Bad At Calendars. I will confidently add events to the wrong day, put them on the wrong calendar, or forget to add them at all. Not my finest quality.
The iOS version of 'use AI to add things to your calendar' taps into Visual Intelligence. iOS 18 included the ability to create calendar events based on photos, and now iOS 26 is extending that to anything on your screen. You just take a screenshot and a prompt will appear with the words 'Add to calendar.' Tap it, and after a few moments you'll see a preview of the event to be added with the top-level details. You can tap to edit the event or just create it if everything looks good and you're ready to move on with your life.
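If you're curious what the last step of that flow looks like in code, the saving part is the easy bit: once a title, time, and location have been pulled out of the screenshot, writing the event is a few lines of EventKit. Here's a minimal Swift sketch of that last mile, with hypothetical placeholder values standing in for whatever Visual Intelligence actually extracts (Apple doesn't document the internals):

import EventKit

// A sketch of the "last mile" of this feature: saving an already-parsed
// event with EventKit. The title, dates, and location are hypothetical
// stand-ins for whatever the screenshot parsing actually produces.
let store = EKEventStore()

// Requires an NSCalendarsWriteOnlyAccessUsageDescription entry in Info.plist.
store.requestWriteOnlyAccessToEvents { granted, _ in
    guard granted else { return }

    let event = EKEvent(eventStore: store)
    event.title = "Preschool Open House"                  // extracted title
    event.startDate = Date(timeIntervalSinceNow: 86_400)  // parsed start time
    event.endDate = event.startDate.addingTimeInterval(3_600)
    event.location = "123 Main St."                       // extracted location
    event.calendar = store.defaultCalendarForNewEvents

    do {
        try store.save(event, span: .thisEvent)
    } catch {
        print("Could not save event: \(error)")
    }
}

The hard part, obviously, is everything before the save: reading a messy flyer or email thread and deciding what the event even is.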
None of this would be useful if it didn't work consistently; thankfully, it does. I've yet to see it hallucinate the wrong day, time, or location for an event — though it didn't account for a time zone difference in one case. For the most part, though, everything goes on my calendar as it should, and I rejoice a little bit every time it saves me a trip to the calendar app. The only limitation I've come across is that it can't create multiple events from a single screenshot; it just latches onto the first one it sees and suggests an event based on that. If you want that kind of functionality from your AI, you'll need an Android phone.
Gemini Assistant has been able to add events based on what's on your screen since August of last year, and in January it added support for Samsung Calendar. To access it, you summon Gemini and tap an icon that says 'Ask about screen.' Gemini takes a screenshot to reference, and then you type or speak your prompt to have it add the event to your calendar. This failed to work for me as recently as a couple of months ago, but it's miles better now.
I gave Gemini Assistant on the Pixel 9 Pro the task of adding a bunch of preschool events, all listed at the end of an email, to my calendar — and it created an event for every one of them on the correct day. In a separate case, it clocked that the events I was adding were listed in Eastern Time and accounted for the difference. In some instances it even fills in a description for the event based on text on the screen. I also used Gemini in Google Calendar on my laptop, because Gemini is always lurking around the corner when you use literally any Google product, and it turned a list of school closure dates into calendar events.
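Time zones are worth dwelling on, because they're the classic trap for any event parser: '7:00 PM' in an email is just a wall-clock time, and something has to decide which zone it belongs to. Here's a quick Swift sketch of the ambiguity, using made-up values rather than anything either assistant actually does:

import Foundation

// The same wall-clock time resolves to different absolute moments
// depending on which time zone the parser assumes. "July 25 at 7:00 PM"
// from an email written in Eastern Time:
let components = DateComponents(year: 2025, month: 7, day: 25, hour: 19)

var eastern = Calendar(identifier: .gregorian)
eastern.timeZone = TimeZone(identifier: "America/New_York")!

var pacific = Calendar(identifier: .gregorian)
pacific.timeZone = TimeZone(identifier: "America/Los_Angeles")!

let easternDate = eastern.date(from: components)!
let pacificDate = pacific.date(from: components)!

// Three hours apart: a parser that defaults to the device's zone instead
// of the event's zone lands on the wrong absolute time.
print(easternDate.timeIntervalSince(pacificDate) / 3_600) // -3.0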
This is great and all, but is this just an AI rebranding of an existing feature? As far as I can tell, not exactly. Versions of this feature already existed on both platforms, but in a much more basic form. On my Apple Intelligence-less iPhone 13 Mini, you can tap on a date in an email for an option to add it to your calendar. But it uses the email subject line as the event title: a decent starting point, but adding five events to my calendar all titled 'Preschool July Newsletter' isn't ideal. Android will also prompt you to add an event to your calendar from a screenshot, but it frequently gets dates and times wrong. AI does seem to be better suited for this particular task, and I'm ready to embrace it.
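For the curious, that older, pre-AI behavior is essentially pattern matching, and Apple exposes the same kind of scanning to developers as NSDataDetector. I can't say Mail uses exactly this, but the sketch below (with an invented email) shows the shape of the approach, and why every event inherits the same generic title:

import Foundation

// Pattern matching, not understanding: find dates in the text, then fall
// back to the subject line for the title.
let subject = "Preschool July Newsletter"
let body = "Picture day is Friday, July 25 at 9:00 AM. We are closed August 1."

let detector = try! NSDataDetector(
    types: NSTextCheckingResult.CheckingType.date.rawValue
)
let range = NSRange(body.startIndex..., in: body)

for match in detector.matches(in: body, range: range) {
    guard let date = match.date else { continue }
    // No idea *what* each event is, so they all get the same title.
    // This is exactly the gap the AI version closes.
    print("\(subject) -> \(date)")
}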
