
Latest news with #GeminiPro

Google reminds Pixel users of this key benefit they have which they may not be fully using

Phone Arena

6 days ago

  • Phone Arena


Google has posted a new video reminding Pixel users of a benefit that may have been overlooked during this month's feature drop. While most of the attention on July 9 went to the Gemini smartwatch update, another perk quietly launched at the same time: Pixel 9 Pro owners now get a free one-year subscription to Google AI Pro. This unlocks access to premium Gemini features, including the new Veo 3 video tool.

With Veo 3, users can describe an idea to Gemini and receive a high-quality video in return, complete with natural sound and smooth motion. The video shows how users can start by uploading an image from their photo gallery and entering a prompt like "a child running through a sunflower field." Gemini uses that input to generate a short video that brings the scene to life. The results are meant to showcase what's possible with creative AI on mobile.

Google AI Pro is Google's top-tier AI subscription. It includes tools powered by Gemini 2.5 Pro, which is more capable than the free version. Veo 3 is one of the newest tools in this bundle. It stands out because it can generate realistic motion and audio, rather than just still images.

The move continues Google's pattern of adding exclusive AI features to its high-end Pixel devices. In 2023, the Pixel 8 Pro was the first to get Gemini Nano. Now, Pixel 9 Pro owners are the first to try Veo 3 through the Gemini app, thanks to this free subscription.

The latest update also shows how AI is playing a bigger role in Google's feature drop strategy. For Pixel users, Veo 3 offers a new kind of creative tool that goes beyond editing photos or writing text. The tool has received attention for its ability to produce video content that appears surprisingly realistic, with natural motion, coherent scenes, and believable soundscapes.
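For readers curious about the capability behind the app flow, Veo models can also be driven programmatically. The sketch below is a minimal illustration using Google's google-genai Python SDK, not the consumer Gemini app described above; the model id ("veo-3.0-generate-preview") and account access are assumptions, and the download pattern follows the SDK's documented long-running-operation flow as best understood.

```python
# Minimal sketch of text-to-video generation with a Veo model via the
# google-genai Python SDK (pip install google-genai). Assumptions:
# GEMINI_API_KEY is set and the model id below is available to you.
import os
import time

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Kick off an asynchronous video-generation job from a text prompt
# (the prompt is the example quoted in the article).
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model id
    prompt="a child running through a sunflower field",
)

# Video generation is long-running, so poll until the job finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the first generated clip (per the SDK's documented
# pattern; adjust if your SDK version differs).
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("sunflower_field.mp4")
```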

Gemini Pro Review: My Go-To AI Chatbot Costs $20 Per Month

CNET

19-07-2025

  • Business
  • CNET


CNET's expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise.

CNET score: 9.0/10

Pros:

  • Higher-quality responses
  • Accurate, with greatly reduced hallucinations
  • Connection to internet and to other Google services
  • Fast image generation
  • 2TB cloud storage

Cons:

  • Can still make logical errors
  • Coding assist can make repeated mistakes
  • Failed certain obscure requests

If the Gemini twins in Greek mythology are meant to guide sailors, then the name Google gave to its AI chatbot finally matches its demigodly ambitions. Compared to when I tested it last year, Gemini has seen tremendous improvements in accuracy and usefulness. While the free version of Gemini is highly capable and is good for most use cases, the paid version I'm reviewing here brings a more powerful AI model that can handle requests of higher complexity with greater "reasoning" abilities. I've found responses to be more informative, with greater nuance. Where 2.5 Flash aims to be a light model that can output answers quickly at little cost, 2.5 Pro takes the extra processing time to give better outputs.

At $20 per month, Gemini Pro is worth the upgrade for people looking to accomplish more-complex research and coding tasks, or who want a deeper AI model to communicate with. Considering that a Gemini Pro subscription also comes with 2TB of cloud storage, along with some video generation tools, it could easily become a justifiable expense for some people.

At the same time, Gemini isn't all-knowing. It still can make some logical mistakes, because AI chatbots don't truly understand the world around them. They're highly complex autocomplete tools and, as a result, can get information wrong because they don't experience the world like we do. Compared with ChatGPT Plus, Gemini Pro can still lag behind in some scenarios, despite its numerous improvements and further integrations with other Google services, such as Search and Maps.

What Google ultimately delivers is a highly capable AI chatbot that can handle a wide host of challenges. From coding to research, Gemini can handle pretty much anything thrown at it. Both ChatGPT Plus, specifically the "reasoning" o3 model, and Gemini Pro offer tremendous functionality, and differentiating between the two comes down to very specific use cases.

How CNET reviews AI models

Last year, I treated my reviews of AI chatbots as if I were reviewing any other tech product at CNET, running a series of comparison tests to see which came out on top. Though that's a handy way to test camera quality between the iPhone 16 Pro and the Samsung Galaxy S25 Ultra, it's a little less useful when reviewing AI chatbots. Because AI chatbots are machines that can do practically anything, performing side-by-side A-B testing isn't indicative of how most people interact with AI. Imagine if I were reviewing Google Search and Bing. In this scenario, it would make sense to do comparative searches and record my results. But AI chatbots don't work like traditional online search, which indexes sites to pull up the most relevant results. AIs give novel answers to every question asked, even if it's the same question. It makes standard comparative testing less of a reflection of real-world use. This year, I'm opting for a more experiential approach. It's less scientific, sure, but I feel it gives a better sense of the vibe each AI chatbot brings.
And considering that Google, OpenAI, Anthropic and others want to give their AI chatbots a bit of a personality, unpacking that vibe is core to evaluating the AI experience.

Research and accuracy

Compared with 2024, Gemini Pro is leagues more accurate this year. It seldom makes up facts or links to nonexistent YouTube videos, as the previous version did during my tests last year. Google has also done a much better job of integrating information gathered via Search to pull up the most relevant sourcing. To test how Gemini could help me research current events, I asked the chatbot to analyze talking points from the recent New York mayoral primary. It did an excellent job of pulling together facts with proper sourcing, including Radio Free Europe, PBS, official government sources and, in some instances, YouTube videos from major news channels.

Because Google owns YouTube, Gemini can link directly to any of the site's vast trove of videos. That gives Google an edge over the companies behind other AI engines. YouTube, by default, blocks creators from allowing their videos to be used for AI training. Other AI companies have crawled YouTube to train their models, a practice that violated YouTube's terms of service.

Something I've found particularly handy is using Gemini as a sounding board for some of my crazier ideas. I own a wide selection of video game consoles, all plugged into my television with multiple HDMI switches and power strips. I asked Gemini if it would be possible to make a superlong HDMI switch that could fit more than 20 devices. Gemini explained that creating a circuit board that could handle 4K, HDR and high refresh rates across multiple inputs would be extremely challenging and beyond the scope of a DIY project. When I asked Gemini to create a schematic of what this project might look like, it attempted to do so with ASCII characters but ultimately failed.

(Image: Google Gemini Pro fails at making a schematic for a 24-port HDMI switch. Imad Khan/CNET)

At least Gemini is real with me. Despite Gemini's accuracy, it doesn't understand the world around it. I've recently been trying to cut more sugar out of my diet (apologies to my local bubble tea shop). I've been wanting to make a basic milk tea using monkfruit sweetener and nondairy creamer. I asked Gemini to create me a healthy milk tea recipe, but it didn't work out so well. Gemini suggested I make a tea base with one cup of water and two bags of black tea. For sweetness, Gemini said to add only 1 tablespoon of monkfruit sweetener. That, plus three-quarters of a cup of milk and 1 to 2 tablespoons of creamer, would create that ideal low-calorie milk tea.

The result was a chunky mess. After I used these exact measurements, I found that the drink wasn't anywhere near sweet enough and that the creamer-to-liquid ratio was all off, leading to clumps in the final product. Gemini has never sipped bubble tea before, so it makes sense that it doesn't understand what 2 tablespoons of nondairy creamer would do in only 14 ounces of liquid.

Gemini's 'vibe coding' blew me away

I'm not a coder. Back in college, I took an introductory Python course and after much struggle managed to get a C. But with AI chatbots, you don't need to be a coding wiz, or even know how to display "Hello World" at all. With Gemini Pro, I was able to make a fully functioning Chrome extension with virtually no experience (I didn't test coding with Gemini Free). Vibe coding is a term that essentially means to code by talking it out.
You express to an AI what you're hoping to accomplish, and through your back-and-forth conversation, the AI will generate the code and help you get a program up and running. I vibe coded with Gemini, and not only was the experience fascinating and a lot of fun, I also found it to be just as impactful as when I used ChatGPT for the first time in late 2023. (Gemini isn't the only AI chatbot with coding assistance. All the other major AI chatbots tout their coding capabilities. But Gemini Pro is the first I've meaningfully tested for that purpose.)

I asked Gemini to build me a tool that could scan my articles and add appropriate links to other CNET articles. In my conversation with Gemini to build and test the tool, I explained in plain language any issues, and Gemini would come up with a solution and fix the code. It wasn't a perfect experience, though. There were instances when Gemini would generate an updated piece of code only to leave out a feature that was in the prior version. I'd then ask Gemini where that feature went, and it would apologize and fix the code. Interestingly, when we'd hit roadblocks and I'd suggest that maybe the feature I was envisioning was simply too difficult to implement, Gemini would push back. It would say the feature was still totally doable and would generate revised code. This back-and-forth would go on until Gemini got it right.

The larger computer science job market is currently going through an upheaval as Big Tech executives continue laying off thousands of workers while boasting about how much coding AI is doing. After using Gemini to code, I understand why students are worried. Regardless, coding with Gemini has changed my understanding of the power of AI chatbots, and I plan to vibe code more as part of the testing I do for reviews.

Gemini Pro is (surprisingly) worse than ChatGPT for shopping

Searching for any product on Google Search leads to an obnoxious mix of product carousels and sponsored listings, drowning out all other reviews and articles (including CNET's). Considering how much Google invests in monetizing online shopping, it's surprising how quaint the product research is on Gemini by comparison. ChatGPT has a far more robust shopping experience.

Gemini is an excellent tool for basic product research. When I asked it to compare various models of Panasonic micro four-thirds cameras, for example, Gemini could pull up models that met my specifications and could tabulate their features in a handy list when asked. It could add more products to that list as I continued to fall down the DSLR rabbit hole. At the same time, unlike ChatGPT, it doesn't provide links to stores and it doesn't incorporate images. Product research on Gemini required me to have a separate Google Search window open just so I could see current pricing and what various camera models looked like, side by side. Gemini can also sometimes get product details wrong. But ChatGPT would also link to incorrect products.

Shopping is one of those instances where Gemini needs to act less like an AI chatbot and more like Google Search. I looked for a piece of furniture to hold my record player and store my vinyl, and Gemini gave me a guide to what to look for when shopping but didn't actually recommend any products. Jumping over to ChatGPT, it was an entirely different experience. There, it was like I was working with a sales associate at a furniture store, going through the various options to find something that fit my needs.
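Returning to the vibe-coding workflow described above: the iterative "describe the problem, get revised code" loop maps naturally onto a multi-turn chat session. The sketch below is a minimal illustration of that loop using Google's google-genai Python SDK; the model id and the prompts are assumptions for illustration, not the reviewer's actual conversation or tool.

```python
# Minimal sketch of a "vibe coding" loop with a Gemini model via the
# google-genai Python SDK (pip install google-genai). Assumptions:
# GEMINI_API_KEY is set and the model id below is available to you.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# A chat session keeps the conversation history, so follow-up bug reports
# are interpreted in the context of the code the model already produced.
chat = client.chats.create(model="gemini-2.5-pro")  # assumed model id

first = chat.send_message(
    "Write a Chrome extension content script that scans an article for "
    "key phrases and wraps them in links to related articles on my site."
)
print(first.text)  # initial code draft

# Describe problems in plain language; the model revises its own code.
fix = chat.send_message(
    "The links work, but the script also rewrites text inside <code> "
    "blocks. Please keep all earlier features and skip code blocks."
)
print(fix.text)  # revised draft, ideally with previous features preserved
```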
Image generation: At least it's better than Gemini Free

I didn't extensively test image generation with Gemini Pro, but I found it more than adequate for basic tasks. (CNET will have a separate hands-on of Google's various image and video generation tools.) At the very least, when compared with Gemini Free, Gemini Pro did a better job of following my intent when creating images. Like with my Gemini Free review, I wanted to create a nostalgic image that evoked the feeling of playing a Game Boy on a late-night drive in the back seat of a car. Gemini Pro got it on the first go.

(Image: Google Gemini Pro generates an image of a boy playing his Game Boy in the back seat of a car at night. Imad Khan/CNET)

My prompt that generated the image above: "I want to create an image. One that evokes a feeling of nostalgia. The image is that of a boy playing his Game Boy in the back of his parents' car on a long road trip at night. Because the screen isn't backlit, he's sitting near the window, trying to catch whatever passing light he can to see where to go. This image should use cool colors accented by the warmth of the light entering in. Feature anime-style artwork with a slight Western design. Should feel hand-drawn with intricately detailed linework. Analog VHS distortion. '90s cartoon aesthetic."

My experience with Gemini Free image generation was much more frustrating. The model simply didn't understand world logic and would often place the boy in the front seat facing backward, or with surrounding vehicles driving in the wrong direction. Eventually, I gave up.

Redemption

Google's done it. After Bard's dismal launch and a bumpy rebrand as Gemini, the latest build of the company's AI chatbot can compete with the best from OpenAI. This time around, Gemini brings with it greater accuracy, collaborative capability, coding power and image generation to make an overall compelling product.

The Gemini 2.5 Pro model is simply better than the free Gemini Flash model. Answers have more nuance and density, and features like the image generator work considerably better. It's a tad annoying that Gemini defaults to the Flash model even for Pro subscribers. I suspect it does so when traffic is high and Google is trying to lessen the load. It's easy enough to switch back, but you have to notice the switch to do so. Compared with ChatGPT's o3 model, in particular, Gemini 2.5 Pro is faster while maintaining comparable answer quality. Google also says it has a 1 million token context window, which would dwarf what's been reported regarding ChatGPT. Being able to pull in data from other Google services gives Gemini another edge.

Gemini isn't perfect, however. It can still stumble with some types of queries, and using it for shopping is lackluster. Despite my qualms, I found myself increasingly reliant on Gemini, moving myself further away from Google Search. AI is slowly moving Google away from being a search company to an answer company. Last year, Gemini's answers were too often wrong to be worth recommending. This time, I have much more confidence in Gemini's answers. Of course, if I ever publish anything incorrect, the responsibility will fall on me.

Ultimately, Gemini Pro acts as a professional and handy know-it-all assistant. It doesn't have the attitude of Claude or the controversy of Grok. Instead, it's there to help, even when you find yourself ready to give up. It's that assertiveness that makes Gemini Pro a standout AI product.
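For readers who want to experiment with this kind of prompt programmatically rather than in the Gemini app, the sketch below uses Google's google-genai Python SDK with an Imagen model. The model id, the reuse of the review's prompt, and the byte-access pattern are assumptions for illustration; this is not part of CNET's testing setup.

```python
# Minimal sketch of text-to-image generation via the google-genai Python
# SDK (pip install google-genai). Assumptions: GEMINI_API_KEY is set and
# the Imagen model id below is available on your account.
import os

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# A condensed version of the nostalgic prompt quoted in the review.
prompt = (
    "A boy playing his Game Boy in the back of his parents' car on a long "
    "road trip at night, sitting near the window to catch passing light. "
    "Cool colors accented by warm incoming light, anime-style artwork with "
    "a slight Western design, hand-drawn detailed linework, analog VHS "
    "distortion, '90s cartoon aesthetic."
)

result = client.models.generate_images(
    model="imagen-3.0-generate-002",  # assumed model id
    prompt=prompt,
    config=types.GenerateImagesConfig(number_of_images=1),
)

# Write the raw image bytes to disk.
image = result.generated_images[0].image
with open("game_boy_night_drive.png", "wb") as fh:
    fh.write(image.image_bytes)
```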

Airtel, Perplexity AI team up to offer 1-yr free plan to 390 mn users

Business Standard

17-07-2025

  • Business
  • Business Standard


India's second-largest telecom operator, Bharti Airtel, on Thursday announced a partnership with Perplexity, an AI-powered search platform, to provide a free 12-month subscription to Perplexity Pro for all its customers, CNBC-TV18 reported.

Perplexity Pro to be free for Airtel's 390 million users

Perplexity is an artificial intelligence-based search and answer engine that offers real-time, conversational responses and moves beyond traditional search results by delivering directly consumable answers. According to the report, all Airtel users across mobile, Direct-to-Home (DTH), and broadband services will get complimentary access to Perplexity Pro, a service that typically costs ₹17,000 annually. The offer will be made available via the Airtel Thanks app. This marks Perplexity's first collaboration with an Indian telecom operator as part of its strategy to expand adoption in India, the world's second-largest internet market. As per data from the Telecom Regulatory Authority of India (Trai), Airtel currently serves more than 390 million users, second only to Reliance Jio's 475 million.

Airtel calls it a 'first-of-its-kind' Gen-AI partnership

Gopal Vittal, Vice Chairman & Managing Director of Bharti Airtel, said, 'We're thrilled to announce a game-changing partnership with Perplexity, bringing their cutting-edge AI capabilities exclusively for Airtel customers. This collaboration will bring the powerful and real-time knowledge tool for millions of users at their fingertips, at no extra cost. This first-of-its-kind Gen-AI partnership in India is focused on helping our customers navigate the emerging trends in the digital world with confidence and ease.'

Perplexity expands global reach through telecom deals

Perplexity has previously partnered with global players like SoftBank and T-Mobile. Speaking about the latest deal, Perplexity's Co-founder Aravind Srinivas said, 'This partnership is an exciting way to make accurate, trustworthy, and professional-grade AI accessible to more people in India—whether a student, working professional, or managing a household. With Perplexity Pro, users get a smarter, easier way to find information, learn, and get more done.' According to Moneycontrol, the partnership comes amid growing competition in the AI space in India. Google has also been offering one-year free subscriptions to Gemini Pro and Veo 3 to college students in India.

Perplexity's Motorola, T-Mobile deals widen access

Earlier this year, Perplexity also announced a global partnership with Motorola. The rollout includes embedding its platform on Motorola's Edge 60 series and Razr smartphones, giving users access to advanced AI features such as GPT-4o, Claude 3.5 Sonnet, and Sonar, along with unlimited file uploads and smart shopping tools. With its Airtel partnership, Perplexity joins a growing trend of AI firms aligning with telecom providers to scale rapidly in emerging markets.
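The consumer Perplexity Pro subscription described above is delivered through Perplexity's apps, but the same style of cited, real-time answering is also exposed to developers through Perplexity's API, which is OpenAI-compatible. The sketch below is a minimal illustration under that assumption; the endpoint and model id ("sonar-pro") reflect Perplexity's public developer documentation as best understood, and API usage is billed separately from the consumer plan offered to Airtel users.

```python
# Minimal sketch of querying Perplexity's answer engine via its
# OpenAI-compatible API. Assumptions: endpoint and model id below are
# correct for your account; API access is separate from the consumer
# Perplexity Pro subscription described in the article.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="sonar-pro",  # assumed model id
    messages=[
        {"role": "system", "content": "Answer concisely and cite sources."},
        {
            "role": "user",
            "content": "Summarise the key climate policy models used in "
                       "India, with citations.",
        },
    ],
)

# Perplexity returns a conversational answer grounded in web sources.
print(response.choices[0].message.content)
```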

Perplexity Pro worth ₹17,000 free with Airtel: Here's how students and professionals can benefit

Time of India

17-07-2025

  • Business
  • Time of India


After Google gave college students free access to Gemini Pro yesterday, Airtel has now stepped into the spotlight. This time, it's not just students but the entire base of 360 million Airtel users, including working professionals, researchers and curious learners, who get a free pass to a powerful AI tool. Airtel has partnered with Perplexity, a San Francisco-based AI company, to offer Perplexity Pro, an advanced subscription worth ₹17,000 a year, free for 12 months to all Airtel customers. Whether you use Airtel for mobile, broadband or DTH, the Pro version is now available at zero cost via the Airtel Thanks app.

What is Perplexity Pro?

So what does Perplexity Pro actually do? Think of it as a search engine that skipped the outdated listing pages and jumped straight to the answer key. Powered by top AI models like GPT-4.1 and Claude, Perplexity Pro doesn't just fetch information. It understands what you're looking for, digs into credible sources, and builds structured, conversational answers in real time.

How can students benefit?

For students juggling assignments and portfolio deadlines, Perplexity Pro is like a research assistant on call 24/7. It can break down complex topics into simpler explanations, generate citation-backed answers from verified sources, and help streamline project work. Need to compare climate policy models for a geography assignment? Or want to summarise a dense 30-page academic article on behavioural economics before tomorrow's internal? You can upload documents directly, ask follow-up questions, and organise findings in an easy-to-read format, all without switching between dozens of tabs. Students preparing for competitive exams can also use it to revisit core concepts, build quick revision guides, or even draft sample answers in a clear, conversational tone. Whether you're in high school, college or grad school, the tool can significantly reduce time spent researching and formatting, leaving you more room to focus on actual learning.

How professionals can use it to work smarter

For professionals, Perplexity Pro can become an everyday tool that handles the grunt work while you focus on the big picture. Whether you're drafting client briefs, building reports or navigating daily admin, the AI assistant can help you save time, cut noise and find relevant answers quickly. Imagine you're creating a pitch deck and need data on regional consumer trends. Instead of scouring multiple sites, Perplexity Pro can collate credible, up-to-date insights in seconds, complete with citations. Working with PDFs? The tool can summarise key sections or extract specific information, which is ideal for professionals in consulting, policy, media, law, or design. From boosting productivity to supporting smarter decision-making, the tool works like a quiet but efficient colleague who never takes a day off. If you're in a creative field, Perplexity Labs, part of the Pro suite, allows users to prototype content ideas or visualise data with ease. Image generation, code assistance, and writing suggestions are all baked into the workspace.

How to claim Perplexity Pro for free

If you're an Airtel user, whether mobile, broadband or DTH, getting access is simple:

  • Open the Airtel Thanks app on your phone.
  • Look for the Perplexity Pro offer banner or navigate to the "Rewards" section.
  • Tap to activate your 12-month free subscription.

'Whether a student, working professional or someone managing a household, this brings professional-grade AI to everyday decision-making,' Perplexity's co-founder and CEO Aravind Srinivas said.

While the offer is valid for a year, the productivity leap might last far longer. At a time when students are drafting SOPs with Gemini and professionals are rewriting pitch decks with Claude, this latest AI freebie from Airtel may be the easiest upgrade to your daily workflow.

Elon Musk's New Grok 4 Takes on ‘Humanity's Last Exam' as the AI Race Heats Up

Yahoo

12-07-2025

  • Science
  • Yahoo


Elon Musk released the newest artificial intelligence model from his company xAI on Wednesday night. In an hour-long public reveal session, he called the model, Grok 4, 'the smartest AI in the world' and claimed it was capable of getting perfect SAT scores and near-perfect GRE results in every subject, from the humanities to the sciences.

During the online launch, Musk and members of his team described testing Grok 4 on a metric called Humanity's Last Exam (HLE)—a 2,500-question benchmark designed to evaluate an AI's academic knowledge and reasoning skill. Created by nearly 1,000 human experts across more than 100 disciplines and released in January 2025, the test spans topics from the classics to quantum chemistry and mixes text with images.

Grok 4 reportedly scored 25.4 percent on its own. But given access to tools (such as external aids for code execution or Web searches), it hit 38.6 percent. That jumped to 44.4 percent with a version called Grok 4 Heavy, which uses multiple AI agents to solve problems. The two next best-performing AI models are Google's Gemini-Pro (which achieved 26.9 percent with the tools) and OpenAI's o3 model (which got 24.9 percent, also with the tools).

The results from xAI's internal testing have yet to appear on the leaderboard for HLE, however, and it remains unclear whether this is because xAI has yet to submit the results or because those results are pending review. Manifold, a social prediction market platform where users bet play money (called 'Mana') on future events in politics, technology and other subjects, predicted a 1 percent chance, as of Friday morning, that Grok 4 would debut on HLE's leaderboard with a 45 percent score or greater on the exam within a month of its release. (Meanwhile, xAI has claimed a score of only 44.4 percent.)

During the launch, the xAI team also ran live demonstrations showing Grok 4 crunching baseball odds, determining which xAI employee has the 'weirdest' profile picture on X and generating a simulated visualization of a black hole. Musk suggested that the system may discover entirely new technologies by later this year—and possibly 'new physics' by the end of next year. Games and movies are on the horizon, too, with Musk predicting that Grok 4 will be able to make playable titles and watchable films by 2026. Grok 4 also has new audio capabilities, including a voice that sang during the launch, and Musk said new image generation and coding tools are soon to be released. The regular version of Grok 4 costs $30 a month; SuperGrok Heavy—the deluxe package with multiple agents and research tools—runs at $300.

Artificial Analysis, an independent benchmarking platform that ranks AI models, now lists Grok 4 as highest on its Artificial Analysis Intelligence Index, slightly ahead of Gemini 2.5 Pro and OpenAI's o4-mini-high. And Grok 4 appears as the top-performing publicly available model on the leaderboards for the Abstraction and Reasoning Corpus, or ARC-AGI-1, and its second edition, ARC-AGI-2—benchmarks that measure progress toward 'humanlike' general intelligence. Greg Kamradt, president of ARC Prize Foundation, a nonprofit organization that maintains the two leaderboards, says that when the xAI team contacted the foundation with Grok 4's results, the organization then independently tested Grok 4 on a dataset to which the xAI team did not have access and confirmed the results. 'Before we report performance for any lab, it's not verified unless we verify it,' Kamradt says.
'We approved the [testing results] slide that [the xAI team] showed in the launch.'

According to xAI, Grok 4 also outstrips other AI systems on a number of additional benchmarks that suggest its strength in STEM subjects (read a full breakdown of the benchmarks here). Alex Olteanu, a senior data science editor at AI education platform DataCamp, has tested it. 'Grok has been strong on math and programming in my tests, and I've been impressed by the quality of its chain-of-thought reasoning, which shows an ingenious and logically sound approach to problem-solving,' Olteanu says. 'Its context window, however, isn't very competitive, and it may struggle with large code bases like those you encounter in production. It also fell short when I asked it to analyze a 170-page PDF, likely due to its limited context window and weak multimodal abilities.' (Multimodal abilities refer to a model's capacity to analyze more than one kind of data at the same time, such as a combination of text, images, audio and video.)

On a more nuanced front, issues with Grok 4 have surfaced since its release. Several posters on X—owned by Musk himself—as well as tech-industry news outlets have reported that when Grok 4 was asked questions about the Israeli-Palestinian conflict, abortion and U.S. immigration law, it often searched for Musk's stance on these issues by referencing his X posts and articles written about him. And the release of Grok 4 comes after several controversies with Grok 3, the previous model, which issued outputs that included antisemitic comments, praise for Hitler and claims of 'white genocide'—incidents that xAI publicly acknowledged, attributing them to unauthorized manipulations and stating that the company was implementing corrective measures.

At one point during the launch, Musk commented on how making an AI smarter than humans is frightening, though he said he believes the ultimate result will be good—probably. 'I somewhat reconciled myself to the fact that, even if it wasn't going to be good, I'd at least like to be alive to see it happen,' he said.
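To put the reported HLE percentages in perspective against the benchmark's 2,500 questions, here is a small back-of-the-envelope calculation. The scores are the ones quoted above; rounding to whole questions is an approximation, since the exam is graded as a percentage.

```python
# Convert the HLE percentage scores reported in the article into the
# approximate number of questions answered correctly out of 2,500.
TOTAL_QUESTIONS = 2_500

reported_scores = {                      # percent correct (as reported)
    "Grok 4 (no tools)": 25.4,
    "Grok 4 (with tools)": 38.6,
    "Grok 4 Heavy (multi-agent)": 44.4,
    "Gemini-Pro (with tools)": 26.9,
    "OpenAI o3 (with tools)": 24.9,
}

for model, pct in reported_scores.items():
    correct = round(TOTAL_QUESTIONS * pct / 100)
    print(f"{model:30s} ~{correct:4d} of {TOTAL_QUESTIONS} questions ({pct}%)")
```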
