Google just launched a new AI tool for developers — here's why it matters to everyone else

Tom's Guide · 6 hours ago

Google just launched a new AI tool called Gemini CLI, and while it's designed for developers, it could lead to smarter, more flexible AI tools for everyone else. In simple terms, Gemini CLI lets people run Google's powerful Gemini AI model right from their computer's command line.
For those who don't know, the "command line" (or terminal) is a tool that lets you type instructions directly to your computer instead of clicking buttons or using apps. It looks like a plain black-and-white window where you type commands to make things happen. You've probably seen it before without knowing its name.
Developers and power users often use the command line because it's fast, flexible, and lets them automate tasks or control their system more precisely than with regular apps.
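For instance, a task that would take many clicks in a regular app can be a single typed line. The command below is a standard Unix tool (not part of Gemini CLI), shown purely as an illustration:

```
# Search every file in the current folder (and its subfolders)
# for the word "invoice" in one typed line, instead of many clicks.
grep -r "invoice" .
```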
And while all of this might sound a bit too technical for the casual user, the bigger picture is this: by making Gemini more open and customizable, Google is giving developers new ways to build creative AI tools, and everyday users will likely benefit down the line.
Gemini CLI lets users bring Google's latest Gemini AI model — Gemini 2.5 Pro — into their terminal, with full support for writing and debugging code, automating tasks, generating content and integrating AI into custom workflows.
It's free, open-source, and comes with generous usage limits: up to 1,000 requests per day, no API key required.
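For readers who want to try it, setup is fairly quick. The sketch below follows the install and launch steps Google published at launch; the package name and the prompt flag come from the project's documentation and may change as the tool evolves:

```
# Install Gemini CLI using Node.js's package manager (assumes Node.js 18 or newer)
npm install -g @google/gemini-cli

# Start an interactive session; on first run you're prompted to sign in
# with a personal Google account, which unlocks the free usage tier
gemini

# Or ask a one-off question without entering interactive mode
gemini -p "Summarize what the files in this folder do"
```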
Open-source means the software's code is made public, so anyone can view, use or modify it, and it's typically free to do so. (ChatGPT, by contrast, is not open-source.)
For example, if a tool is open-source, developers around the world can improve it, fix bugs or build their own versions of it. Being open-source also means you can see exactly how the software works. In other words, it's not a 'black box' controlled only by the company that made it.
This latest development shows Google's AI strategy is shifting toward open access and customization.
By releasing Gemini CLI as an open-source tool (under Apache 2.0), Google is inviting developers everywhere to build new ways to use Gemini — not just through official apps, but through personalized tools and scripts.
In short: expect a wave of new Gemini-powered tools to emerge in the coming months, many created by the community, not by Google alone.
Whether you use AI for productivity, creativity or problem-solving, this kind of open access helps the ecosystem grow faster, and potentially leads to more useful options for all users.
Even if you never touch the terminal, Gemini CLI is a clear sign that Google is pushing to make its AI tools more open, flexible and customizable.
That means more developers (and hobbyists) will be able to build creative new ways to use Gemini, going beyond official Google apps.
In the coming months, we'll likely see more community-built tools, scripts, and AI-powered shortcuts start to surface, making it easier for everyone to take advantage of AI in new and unexpected ways.


Related Articles

US senators reintroduce bill to open Apple and Google's app stores

Engadget · 23 minutes ago

Senators Marsha Blackburn (R-Tenn.), Richard Blumenthal (D-Conn.), Amy Klobuchar (D-Minn.), Dick Durbin (D-Ill.) and Mike Lee (R-Utah) have reintroduced a bill that would force app store owners like Apple and Google to allow third-party payment systems and app sideloading, among a collection of other developer-friendly changes. The bill, called the Open App Markets Act, was originally introduced in 2021, but it never came up for a vote after passing through the Senate Judiciary Committee in 2022.

The Open App Markets Act applies to app stores with 50 million US users or more, most obviously the Apple App Store and the Google Play Store. Like the original bill, the reintroduced Open App Markets Act would require covered companies to allow things like sideloading, third-party app stores and alternative payment systems, while protecting developers' ability to "tell consumers about lower prices and offer competitive pricing." It would also prevent app store operators from privileging their own apps and services in app store search results.

While the aims of the new bill are largely the same as the original one's, the legal environment is meaningfully different. Apple has been forced to allow third-party app stores and alternative payment systems in the European Union following the introduction of the Digital Markets Act in 2022. And after failing to make good on the small concession Epic won via its lawsuit, Apple has also been forced to allow developers to direct customers to pay for things outside of the App Store and its in-app payments system. The Open App Markets Act would make these kinds of changes the law in the US.

It seems possible the bill could pass, too. Regulatory pressure on tech companies has only increased since 2021. For example, Utah recently passed an age-verification law that requires app stores to verify users' ages and obtain parental consent before minors can create accounts.

Scale AI locked down Big Tech client documents after BI revealed security holes

Business Insider · an hour ago

Scale AI has now locked down project materials for clients like Meta and xAI following a Business Insider report that thousands of sensitive files were stored as Google Docs that were publicly accessible to anyone with a link. The company — which uses human gig workers to improve Big Tech's latest AI models and is receiving a $14 billion investment from Meta — scrambled to secure its files this week, according to four Scale contractors who asked to remain anonymous because of the sensitivity of the matter. That left teams of workers temporarily unable to open training documents. Thousands of Scale AI files previously reviewed by BI that had been public are now private.

"What is happening is a knee-jerk reaction to being in the headlines," Stephanie Kurtz, a regional director with cybersecurity firm Trace3, told BI. She said that locking down the documents and inviting the correct users "should have been done in the first place." By Wednesday, teams had resolved many of the document access issues, one worker said. Another said contributors were now being granted individual access to documents.

BI first flagged the public Google Docs to Scale AI for a June 13 article about how they showed Google using ChatGPT to improve its AI chatbot. BI also encountered public Scale AI documents during prior reports about how xAI and Meta were training their latest AI models. At least 85 individual Google Docs containing thousands of pages remained up and fully accessible until BI published an article on Tuesday focused on the security issue this practice created. BI reported that Scale AI had left open thousands of project documents tied to its work with clients, including Google and Meta, allowing anyone with a link to access them. Several documents also contained contact information for numerous Scale AI workers, some of whom were surprised to discover their details were accessible when BI contacted them.

Scale AI has routinely used public Google Docs to track work for high-profile customers, as it's an efficient way to share information with its more than 240,000 contractors. BI found that those documents often contain sensitive information about how workers train AI models for Big Tech clients. Multiple AI training documents reviewed by BI were labeled "confidential" yet accessible to anyone with the link.

After Scale AI's lockdown, one contractor described a "site-wide" problem accessing project materials on Tuesday. Another said that many teams' work had ground to a halt due to the new restrictions, with one team even losing access in the middle of a critical presentation. "We are basically chilling out here," the contractor said.

Scale AI told BI on Monday that it was conducting a thorough investigation and had disabled any user's ability to publicly share documents from Scale's systems. It reiterated that statement for this article and did not comment further on specific changes it has made to its document security. "We take data security seriously," a Scale AI spokesperson said. "We remain committed to robust technical and policy safeguards to protect confidential information and are always working to strengthen our practices." There's no indication that Scale AI suffered a data breach, but cybersecurity experts told BI that the practice could leave the company vulnerable to hacking.

The document lockout was another bit of whiplash for Scale AI contractors, who were already affected by Meta's mega-investment and its decision to hire CEO Alexandr Wang for its new AI superintelligence group. After the deal announcement, Google halted several of its projects with Scale AI. OpenAI and Elon Musk's xAI have also paused projects with Scale, BI previously reported, and one smaller investor said they were selling their remaining stake in the startup. Many contractors discovered that some of their projects had been paused, mostly without prior warning; while Scale AI sent its contractors a memo announcing the Meta investment, many workers said they were left in the dark about clients pausing projects. Meta declined to comment.

Sam Altman comes out swinging at The New York Times

Yahoo · an hour ago

From the moment OpenAI CEO Sam Altman stepped onstage, it was clear this was not going to be a normal interview. Altman and his chief operating officer, Brad Lightcap, stood awkwardly toward the back of the stage at a jam-packed San Francisco venue that typically hosts jazz concerts. Hundreds of people filled steep theatre-style seating on Wednesday night to watch Kevin Roose, a columnist with The New York Times, and Platformer's Casey Newton record a live episode of their popular technology podcast, Hard Fork.

Altman and Lightcap were the main event, but they'd walked out too early. Roose explained that he and Newton were planning to — ideally, before OpenAI's executives were supposed to come out — list off several headlines that had been written about OpenAI in the weeks leading up to the event. 'This is more fun that we're out here for this,' said Altman. Seconds later, the OpenAI CEO asked, 'Are you going to talk about where you sue us because you don't like user privacy?'

Within minutes of the program starting, Altman hijacked the conversation to talk about The New York Times' lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman's company improperly used its articles to train large language models. Altman was particularly peeved about a recent development in the lawsuit, in which lawyers representing The New York Times asked OpenAI to retain consumer ChatGPT and API customer data. 'The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users' logs even if they're chatting in private mode, even if they've asked us to delete them,' said Altman. 'Still love The New York Times, but that one we feel strongly about.'

For a few minutes, OpenAI's CEO pressed the podcasters to share their personal opinions about the lawsuit — they demurred, noting that as journalists whose work appears in The New York Times, they are not involved in it. Altman and Lightcap's brash entrance lasted only a few minutes, and the rest of the interview proceeded, seemingly, as planned. However, the flare-up felt indicative of the inflection point Silicon Valley seems to be approaching in its relationship with the media industry.

In the last several years, multiple publishers have brought lawsuits against OpenAI, Anthropic, Google, and Meta for training their AI models on copyrighted works. At a high level, these lawsuits argue that AI models have the potential to devalue, and even replace, the copyrighted works produced by media institutions. But the tides may be turning in favor of the tech companies. Earlier this week, OpenAI competitor Anthropic received a major win in its legal battle against publishers: a federal judge ruled that Anthropic's use of books to train its AI models was legal in some circumstances, which could have broad implications for other publishers' lawsuits against OpenAI, Google, and Meta. Perhaps Altman and Lightcap felt emboldened by the industry win heading into their live interview with The New York Times journalists.

But these days, OpenAI is fending off threats from every direction, and that became clear throughout the night. Mark Zuckerberg has recently been trying to recruit OpenAI's top talent by offering $100 million compensation packages to join Meta's AI superintelligence lab, Altman revealed weeks ago on his brother's podcast. When asked whether the Meta CEO really believes in superintelligent AI systems, or if it's just a recruiting strategy, Lightcap quipped: 'I think [Zuckerberg] believes he is superintelligent.'

Later, Roose asked Altman about OpenAI's relationship with Microsoft, which has reportedly been pushed to a boiling point in recent months as the partners negotiate a new contract. While Microsoft was once a major accelerant to OpenAI, the two are now competing in enterprise software and other domains. 'In any deep partnership, there are points of tension and we certainly have those,' said Altman. 'We're both ambitious companies, so we do find some flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time to come.'

OpenAI's leadership today seems to spend a lot of time swatting down competitors and lawsuits. That may get in the way of OpenAI's ability to solve broader issues around AI, such as how to safely deploy highly intelligent AI systems at scale. At one point, Newton asked OpenAI's leaders how they were thinking about recent stories of mentally unstable people using ChatGPT to traverse dangerous rabbit holes, including discussing conspiracy theories or suicide with the chatbot. Altman said OpenAI takes many steps to prevent these conversations, such as cutting them off early or directing users to professional services where they can get help. 'We don't want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough,' said Altman. To a follow-up question, the OpenAI CEO added, 'However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven't yet figured out how a warning gets through.'
