
Gemini now rivals ChatGPT in another key feature — here's how to schedule AI tasks
Google's Gemini app is evolving to become more proactive.
Starting today, some Gemini users can take advantage of a new feature called "Scheduled Actions," which lets the AI assistant handle recurring or time-specific tasks automatically.
Announced during Google I/O and now rolling out to both Android and iOS users, Scheduled Actions lets you tell Gemini exactly what you want it to do and when. Whether it's a weekly blog brainstorm, a daily news digest or a one-time event reminder, Gemini can take care of it without you having to ask twice.
Similar to ChatGPT's Tasks, Scheduled Actions lets you automate prompts within the Gemini app. You can say things like "send me a summary of the day's top news every morning" or "help me brainstorm blog post ideas every Monday."
Once you set a recurring task, you can manage it anytime from the Scheduled Actions page in the Gemini settings menu.
This puts Gemini in closer competition with ChatGPT features like memory and third-party integrations, offering a more personalized and hands-off experience that Google says will help users 'stay informed, inspired and on track.'
ChatGPT Tasks lets users schedule everything from daily reminders to appointments within the ChatGPT app. The reminder is then sent in the app as well as via email. But what makes Gemini's new Scheduled Actions competitive is that it is built directly into the app and works within the Google ecosystem, letting users automate recurring prompts and get proactive updates without workarounds or leaving Google Workspace. This could be a game-changer for those who regularly use Gmail, Google Docs and more.
If you aren't seeing the Scheduled Actions page yet, it's because not everyone has access to it immediately.
The feature is rolling out now for users with a Google AI Premium subscription or qualifying Google Workspace business or education plan.
That means users on lower tiers won't see the Scheduled Actions option yet and might need to stick with ChatGPT Tasks for now.
However, knowing how Google often gives away some of its best features for free, I can only assume that the rest of us won't be waiting long to give it a try.
Gemini has evolved quickly with new live conversation features, video generation options and deeper Google app integration. This latest feature sounds promising, and it's one users have wanted for a while: true automation without needing third-party tools or scripting.
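For a sense of what that scripting alternative involves, here's a minimal sketch of the kind of DIY scheduled-prompt script a feature like this replaces. It uses the third-party Python `schedule` library, and `send_prompt` is a hypothetical placeholder for whatever LLM API and delivery method you'd wire up yourself; this is an illustration of the concept, not how Gemini implements Scheduled Actions.

```python
# A rough sketch of DIY prompt scheduling (pip install schedule).
# send_prompt() is a hypothetical stand-in for a real LLM API call
# plus some delivery step (email, push notification, etc.).
import time
import schedule

def send_prompt(prompt: str) -> None:
    # Placeholder: a real script would call an LLM API here and
    # forward the response somewhere useful.
    print(f"[{time.strftime('%H:%M')}] would run prompt: {prompt!r}")

# Recurring task: a daily news digest every morning.
schedule.every().day.at("07:00").do(
    send_prompt, "Summarize today's top tech news.")

# Recurring task: a weekly blog brainstorm.
schedule.every().monday.at("09:00").do(
    send_prompt, "Give me five blog post ideas.")

while True:
    schedule.run_pending()
    time.sleep(30)
```

A script like this also has to be hosted and kept running somewhere, which is exactly the overhead a built-in feature removes.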
Scheduled Actions essentially turns Gemini into a personal AI scheduler that does more than respond to prompts: it remembers to actually act on them for you, underscoring that we are that much closer to autonomous AI.
If you've ever wanted your AI assistant to be a little more like a real assistant, this is a big step in the right direction—as long as you have the right monthly subscription.

Related Articles
Yahoo · 3 hours ago
Why Smart People Make Dumb Money Decisions, According to Humphrey Yang
According to the TIAA Institute-GFLEC Personal Finance Index, about half of American adults lack financial literacy, and even more fall short when it comes to decisions regarding risk. According to financial YouTuber Humphrey Yang, being smart can put you at a greater risk of making poor choices.

In a recent YouTube video, Yang covered three biases that often trap smart people into making money decisions that leave them poor. Even if you consider yourself intelligent and financially literate, that doesn't guarantee you'll do the best things with your money. Here are the signs to watch out for if you're making dumb money decisions, and tips to avoid falling for them.

Authority bias is when you believe what a person, like a CEO, celebrity or financial advisor, says because of their high influence or position. This can get you in trouble since their advice might be completely wrong or not based on the reality of your situation. Yang gave the example of quantum computing stock prices. In December 2024, a Google Willow announcement led many investors to buy these stocks, which boosted their prices. But in January 2025, Nvidia's CEO said the tech had many years to go, and stock prices fell a lot. 'The truth is that many people probably didn't do any due diligence when it came to these stocks, and they probably bought them on a speculative future after the Willow announcement, and then they sold them on a whim after a negative comment,' Yang said. To protect yourself from this bias, don't rely solely on what a single person says to do with your money. Yang said you should also forget whatever is special about that person to improve your objectivity, see what other people say differently about the topic, and trust your instincts.

If you often look only for information that aligns with your beliefs about money and brush off anything that says differently, you've fallen for confirmation bias. Besides leading to bad money moves, this bias can make you an easier person to scam, according to the Ohio Attorney General. Yang explained, 'It's especially dangerous for those that are super logical because if you're a super methodical thinker, you can actually build a logical sounding argument to defend your pre-existing opinion.' He gave an example of how this can play out with tech stocks. If you favor those stocks, you might watch for positive news reports, listen to influencers who are fans of tech, and focus on friends who profited big. You might not consider any bad earnings projections or the investors who went broke. According to Yang, asking 'why' several times helps avoid bad decisions due to confirmation bias. This lets you dig into your motivation and reasoning for making the money move. He also suggested writing down the decisions you make so you can later look back on why you did certain things and what you expected.

'This is arguably the most dangerous cognitive bias for smart people, and that's basically when people overestimate their knowledge, abilities and their predictions,' said Yang. Overconfidence bias can cause you to not consider risks since you mistakenly think you have an advantage with money over other people, and that could even be due to expertise in an unrelated area. Yang explained that this mistake played a role in various financial crises over the last few decades. Being overconfident might also lead you to not diversify your money enough and risk major losses.

Yang gave examples of copying Warren Buffett's portfolio with limited investment choices or investing substantially in your own employer's stock due to familiarity. To avoid letting overconfidence damage your finances, consider that some successes might have come from pure luck rather than a wise choice you made. Yang said you should also regularly compare your predictions to reality and stick to simple investing strategies, like using index funds instead of betting on the next big individual stock.

Business Insider · 5 hours ago
AI leaders have a new term for the fact that their models are not always so intelligent
Progress is rarely linear, and AI is no exception. As academics, independent developers, and the biggest tech companies in the world drive us closer to artificial general intelligence, a still-hypothetical form of intelligence that matches human capabilities, they've hit some roadblocks. Many emerging models are prone to hallucination, misinformation, and simple errors.

Google CEO Sundar Pichai referred to this phase of AI as AJI, or "artificial jagged intelligence," on a recent episode of Lex Fridman's podcast. "I don't know who used it first, maybe Karpathy did," Pichai said, referring to deep learning and computer vision specialist Andrej Karpathy, who cofounded OpenAI before leaving last year. AJI is a bit of a metaphor for the trajectory of AI development: jagged, marked at once by sparks of genius and basic mistakes.

"You see what they can do and then you can trivially find they make numerical errors or counting R's in strawberry or something, which seems to trip up most models," Pichai said. "I feel like we are in the AJI phase where dramatic progress, some things don't work well, but overall, you're seeing lots of progress."

In 2010, when DeepMind launched, its team would talk about a 20-year timeline for AGI, Pichai said. Google subsequently acquired DeepMind in 2014. Pichai thinks it'll take a little longer than that, but by 2030, "I would stress it doesn't matter what that definition is because you will have mind-blowing progress on many dimensions." By then, the world will also need a clear system for labeling AI-generated content to "distinguish reality," he said.

"Progress" is a vague term, but Pichai has spoken at length about the benefits we'll see from AI development. At the UN's Summit of the Future in September 2024, he outlined four specific ways that AI would advance humanity: improving access to knowledge in native languages, accelerating scientific discovery, mitigating climate disaster, and contributing to economic progress.


Gizmodo · 7 hours ago
OpenAI Wants to Get College Kids Hooked on AI
AI chatbots like OpenAI's ChatGPT have been shown repeatedly to provide false information, hallucinate completely made-up sources and facts, and lead people astray with their confidently wrong answers to questions. For that reason, AI tools are viewed with skepticism by many educators. So, of course, OpenAI and its competitors are targeting colleges and pushing their services on students, concerns be damned.

According to the New York Times, OpenAI is in the midst of a major push to make ChatGPT a fixture on college campuses, replacing many aspects of the college experience with AI alternatives. According to the report, the company wants college students to have a 'personalized AI account' as soon as they step on campus, the same way they receive a school email address. It envisions ChatGPT serving as everything from a personal tutor to a teacher's aide to a career assistant that helps students find work after graduation.

Some schools are already buying in, despite the educational world initially greeting AI with distrust and outright bans. Per the Times, schools like the University of Maryland, Duke University, and California State University have all signed up for OpenAI's premium service, ChatGPT Edu, and have started to integrate the chatbot into different parts of the educational experience. OpenAI isn't alone in setting its sights on higher education, either. Elon Musk's xAI offered free access to its chatbot Grok to students during exam season, and Google is currently offering its Gemini AI suite to students for free through the end of the 2025-26 academic year. But that is outside of the actual infrastructure of higher education, which is where OpenAI is attempting to operate.

Universities opting to embrace AI, after initially taking hardline positions against it over fears of cheating, is unfortunate. There is already a fair amount of evidence piling up that AI is not all that beneficial if your goal is to learn and retain accurate information. A study published earlier this year found that reliance on AI can erode critical thinking skills. Others have similarly found that people will 'offload' the more difficult cognitive work and rely on AI as a shortcut. If the idea of university is to help students learn how to think, AI undermines it.

And that's before you get into the misinformation of it all. In an attempt to see how AI could serve in a focused education setting, researchers tried training different models on a patent law casebook to see how they performed when asked questions about the material. They all produced false information, hallucinated cases that did not exist, and made errors. The researchers reported that OpenAI's GPT model offered answers that were 'unacceptable' and 'harmful for learning' about a quarter of the time. That's not ideal.

Considering that OpenAI and other companies want to get their chatbots ingrained not just in the classroom but in every aspect of student life, there are other harms to consider, too. Reliance on AI chatbots can have a negative impact on social skills. And the simple fact that universities are investing in AI means they aren't investing in areas that would create more human interactions. A student going to see a tutor, for example, creates a social interaction that requires using emotional intelligence and establishing trust and connection, ultimately adding to a sense of community and belonging. A chatbot just spits out an answer, which may or may not be correct.