Latest news with #ChatGPTEnterprise

First Post
2 days ago
- Business
- First Post
What is the new ChatGPT Agent that can 'control' your computer?
ChatGPT Agent is OpenAI's latest artificial intelligence tool. The tech firm helmed by Sam Altman claims that ChatGPT Agent goes far beyond being a mere chatbot and uses its own virtual computer to 'think' and 'act'. ChatGPT Agent became available on Thursday (July 17) for subscribers of OpenAI's Pro, Plus, and Team plans. Here's what we know about it.

OpenAI has launched a new piece of software for ChatGPT. Known as ChatGPT Agent, it can undertake a wide variety of computer-based tasks for users. Users simply have to choose 'agent mode' in ChatGPT's dropdown menu to activate it. But what is it? How does it work? Why is it significant?

What is it? OpenAI claims that ChatGPT Agent goes far beyond being a mere chatbot. The company says that ChatGPT Agent uses its own virtual computer to 'think' and 'act'. It essentially functions like a personal assistant to which you can delegate tasks: executing code, going to websites, managing your calendar, making meal plans, creating presentations and spreadsheets, and summarising meetings.

The company says users can interact with ChatGPT Agent in natural language. In its blog, it said users can issue commands such as 'look at my calendar and brief me on upcoming client meetings based on recent news' or 'plan and buy ingredients to make Japanese breakfast for four'.

ChatGPT Pro subscribers will be allowed 400 queries per month, while ChatGPT Plus and Team users will receive 40 queries per month. ChatGPT Agent will be available to ChatGPT Enterprise and Education users later this year.

Interestingly, the brains behind ChatGPT Agent is an Indian, Yash Kumar, who is currently in charge of the team behind it. Kumar and Isa Fulford, the research lead on ChatGPT Agent, unveiled the software in a demonstration with The Verge. In the demonstration, they asked ChatGPT Agent to plan a date night for a couple. They also requested that it write a research report on Labubus and compare their ascendancy with Beanie Babies.

ChatGPT Agent combines OpenAI's earlier tools: Operator, a web browser tool, and Deep Research, an analysis tool which can take information from different websites and write a research report. OpenAI had described Operator as 'an agent that can go to the web to perform tasks for you'.

The model behind ChatGPT Agent, which does not have a name, was trained through reinforcement learning, the standard technique for all of OpenAI's reasoning models. OpenAI says the model delivers state-of-the-art performance on several benchmarks. This includes Humanity's Last Exam, on which it scored 41.6 per cent, roughly double what OpenAI's o3 and o4-mini got on the test. On FrontierMath, one of the toughest maths tests, OpenAI said ChatGPT Agent hit 27.4 per cent using a terminal that allows it to execute code. The o4-mini, meanwhile, previously considered the top scorer on FrontierMath, netted a mere 6.3 per cent.

The company combined teams from both Operator and Deep Research to work on ChatGPT Agent. The team comprised between 20 and 35 people.
Why is it significant? Because until now, AI chatbots have simply sought to answer questions from users. OpenAI is taking things a step further than its rivals by making ChatGPT Agent more of a personal assistant. Countries and companies across the world are currently in an AI arms race. DeepSeek, a previously little-known Chinese firm, shook up Wall Street and Silicon Valley earlier this year. The Trump administration has thrown its weight behind 'Stargate', a $500 billion AI infrastructure project.

OpenAI says users can even instruct ChatGPT Agent while the task is unfolding. 'Likewise, ChatGPT itself may proactively seek additional details from you when needed to ensure the task remains aligned with your goals. If a task takes longer than anticipated or feels stuck, you can pause it, ask it for a progress summary, or stop it entirely and receive partial results. If you have the ChatGPT app on your phone, it will send you a notification when it's done with your task,' says OpenAI in its blog post.

However, the team behind the software warns that ChatGPT Agent is still a bit slow, relatively speaking of course. 'Even if it takes 15 minutes, half an hour, it's quite a big speed-up compared to how long it would take you to do it,' Fulford told The Verge. 'It's one of those things where you can kick something off in the background and then come back to it.' Fulford told Wired she asked ChatGPT Agent to order cupcakes for her. 'I was very specific about what I wanted, and it was a lot of cupcakes,' she says. 'That one took almost an hour—but it was easier than me doing it myself, because I didn't want to do it.'

For those who worry, ChatGPT Agent asks for user permission before doing important things such as sending an email or making a booking. The firm has said that it has built many protections into ChatGPT Agent, including refusing to work on 'high-risk' tasks such as bank transfers. 'We have built a lot of safeguards and warnings into it, and broader mitigations than we've ever developed before from robust training to system safeguards to user controls, but we can't anticipate everything. In the spirit of iterative deployment, we are going to warn users heavily and give users freedom to take actions carefully if they want to,' Altman said. He recommended that users refrain from giving ChatGPT Agent too much personal information. 'We recommend giving agents the minimum access required to complete a task to reduce privacy and security risks,' he added.

Still, OpenAI CEO Sam Altman took to social media to tout ChatGPT Agent's potential. 'Agent represents a new level of capability for AI systems and can accomplish some remarkable, complex tasks for you using its own computer. It combines the spirit of Deep Research and Operator, but is more powerful than that may sound—it can think for a long time, use some tools, think some more, take some actions, think some more, etc,' Altman wrote on X. 'For example, we showed a demo in our launch of preparing for a friend's wedding: buying an outfit, booking travel, choosing a gift, etc. We also showed an example of analysing data and creating a presentation for work,' he added.

With inputs from agencies

Business Insider
2 days ago
- Business
- Business Insider
Inside the AI boom that's changing how Big Law attorneys work
DLA Piper rolls out Microsoft Copilot firmwide

Assess: DLA Piper has defended Microsoft in a defamation suit over AI-generated content and helped OpenAI put forward its views to Congress on how AI should be regulated. It's leaning into the tech internally, too. Danny Tobey, chair of DLA Piper's AI and data analytics practice, said the firm has an internal group of lawyers and technologists who test tools and develop metrics for quality and accuracy. The team runs A/B tests on real cases, comparing results from traditional legal teams against AI-assisted ones to evaluate performance across speed, accuracy, and cost.

Apply: Microsoft has highlighted DLA Piper as the first major law firm to adopt Copilot firmwide, after starting with several hundred licenses in late 2023. Lawyers use Copilot within their existing Microsoft 365 apps, Tobey said. Think drafting documents, poring over spreadsheets, and creating PowerPoint slides. For more advanced legal research and analysis, he said, attorneys turn to legal-specific tools like Harvey, CoCounsel, and LexisNexis Protégé. DLA Piper has also developed custom language models to help clients spot compliance risks early, including under laws like the Foreign Corrupt Practices Act and the Anti-Kickback Statute. "We've found a number of issues before they metastasized into outright violations," Tobey said, "and that allowed the company to step in and do some education and compliance refreshing before there was a problem."

Align: Tobey said the firm provides detailed training for lawyers on how to use its tools. "We train on a per-tool basis because they all have strengths and weaknesses," Tobey said. "If you were a doctor, you would not adopt a new tool without being trained in its limitations."

Gibson Dunn pilots ChatGPT Enterprise with its lawyers and staff

Assess: Before adopting any tool, Gibson Dunn runs a three-step review process, said Meredith Williams-Range, the firm's chief legal operations officer. Tools must first pass an internal audit covering security, privacy, and risk. Next, they undergo proof-of-concept testing with a small group. Finally, tools must demonstrate real value to lawyers through hands-on use, a process that can take days or, as with a Harvey pilot, stretch over several months.

Apply: ChatGPT Enterprise is one tool making its way through Gibson Dunn's internal processes. In June, the firm launched a pilot with more than 500 participants — a mix of lawyers and staff — to put the product through its paces. Williams-Range said she emailed practice group leaders and managing partners around the world, asking them to submit lawyers willing to test the tool. Three days later, 450 people had signed up — more than twice what she expected. Gibson Dunn says it's also evaluating rival AI models Google Gemini and Claude Enterprise. The firm works with a range of vendors, including Harvey, Thomson Reuters, and Microsoft. Some tools, like Harvey and CoCounsel, are used to support legal work, while Copilot helps with administrative tasks. For more specific use cases, the firm collaborates with developers to build custom workflows tailored to its practices and data, Williams-Range said.

Align: The firm's AI policy is reviewed quarterly to stay current with changing regulations, she said. It also includes a procurement playbook with specific terms around security and how it shares learnings about the tools. Gibson Dunn also has a strategic advisory board made up of over 30 partners across offices globally.
This brain trust meets monthly to guide policy decisions, debate use cases, and determine whether tools like ChatGPT Enterprise should be limited, expanded, or customized. "Just because we can doesn't mean we should," Williams-Range said, referring to the principle that guides the board's work.

Sidley Austin hones prompt engineering skills during associate orientation

Assess: Over her 29 years with the firm, corporate lawyer Sharon Flanagan has watched Sidley embrace new tech, but with guardrails in place. The firm formed an AI council with members from its management committee, executive committee, and strategy team to set policies and identify use cases. Sidley typically starts with small-scale rollouts to pilot new tools before expanding.

Apply: Sidley has explored a range of AI tools, says Jane Rheem, Sidley's chief data and AI officer — from legal-specific platforms, to broader foundation models, to point solutions that help with timekeeping or narrative writing. The firm declined to identify the AI tools it's testing, saying it doesn't want to endorse products that may not be part of its long-term strategy. Flanagan says uptake has been organic among litigators and corporate and regulatory attorneys.

Align: Implementation is only the beginning, Rheem says. The firm tracks usage after deployment, gathering data and feedback from "superusers" — early adopters who experiment broadly and flag where tools are working (or not). Sidley is also focused on making sure its youngest lawyers are fluent in the tools. This year, nearly 300 incoming associates participated in a generative AI hackathon as part of their orientation.

Ropes & Gray uses AI tools like Harvey and Hebbia to squeeze more hours out of the day

Assess: When Ropes & Gray finds an AI service it likes, Ed Black and the IT and practice technology teams put on their investment banker hats. "We phone them up every few weeks and say, 'Tell us about your updates,'" said Black, the firm's technology strategy leader. Before a tool can move to testing, it must pass a security and risk audit; only "qualified vendors" make it to the next phase. From there, testing is twofold. First, a technical evaluation by the firm's technology team aims to ensure the product works as promised. Then a second round with lawyers examines usability and actual value in practice.

Apply: Ropes & Gray rolled out Harvey firmwide in June, after a year of use with a smaller test group, Black said. The firm has also collaborated with Harvey on a "workflow builder" that lets users design and deploy custom agents — software that can carry out tasks on its own. Hebbia, an AI agent company focused on professional services, has proven particularly useful to lawyers like Melissa Bender, a partner in the asset management group and cohead of the private funds practice. When institutional investors need fund documents reviewed, Bender uses Hebbia to extract key terms and speed up summaries. She estimates the process now takes two to three hours, down from what would typically be a 10-hour matter.

Align: Black stresses responsible use of the tools, starting with the principle that the results of using these tools are first drafts, not the final product. The private funds practice requires tool-specific training for junior and mid-level associates, Bender says, while more senior lawyers are "strongly encouraged" to take the training.
The goal is to ensure lawyers know how to use the tools appropriately and to empower them to speak with clients about the firm's technology capabilities. "We are in the business of selling legal services," Bender said. "I want our associates to understand the differentiated nature of our offering."

Morgan Lewis requires staff to get credentialed before they can use the tools

Assess: At Morgan Lewis, the first step in adopting AI isn't picking the tool. It's diagnosing the problem, said attorney Timothy Levin, who leads the firm's investment management practice. Understanding how legal work can be improved with AI is important to ensure tools are applied where they can have a real impact, rather than just throwing tech at a problem, Levin said. Once a tool passes security and risk checks, it's piloted by an attorney and C-suite advisory group spanning 15 practice areas and firm operations — a cross-section designed to vet the tool's value across the firm's legal work.

Apply: Morgan Lewis has been inundated with startup pitches, says Colleen Nihill, its chief AI and knowledge officer, as the legal tech gold rush draws a wave of new founders. To cut through the noise, Morgan Lewis favors larger enterprise partners that align with its technical standards. Thomson Reuters, for example, is a strategic partner: the firm's advisory group meets regularly with Thomson Reuters to review existing tools, preview the product road map, and beta test unreleased features. The two also collaborate to co-develop tools tailored to Morgan Lewis's needs. One use case at Morgan Lewis involves reviewing fund documents for institutional investors, where CoCounsel Core helps attorneys summarize key terms and flag client-specific dealbreakers.

Align: Nihill said the firm requires its staff to get credentialed on tools before they can use them. Partners and firm leadership were the first to get CoCounsel Core-certified, a process that included Coursera-based coursework, hands-on exercises, and a final assessment. Once certified, users receive a digital badge displayed on their internal profiles. Nihill says this signals to associates that these tools aren't just approved; they're a professional priority for the firm.

Business Insider
07-07-2025
- Business
- Business Insider
Ex-OpenAI VP says the most successful company teams are like the Avengers
When investor Peter Deng worked at OpenAI, he treated building a team like a puzzle: all the right pieces had to be in the right places. "As a leader, you have to set up your team the right way," Deng, who previously was OpenAI's VP of consumer product, said on an episode of Lenny's Podcast. "You have to really think about your team as a product and what are the various pieces you need to really stretch the gamut of what you're thinking about."

Deng, now a general partner at Felicis Ventures, has contributed to a series of well-known products, including ChatGPT Enterprise, Facebook's Messenger app, and Uber Reserve. The VC said the best teams he's worked with throughout his career were those composed of people with diverse skill sets. "The teams that I've helped build are — the most successful ones are a team of Avengers that are just very different, have very different superpowers," he said. "But together, you as the leader are the one who's helping adjudicate any differences or any disagreements, but you know you're getting the best outcome when everyone's pulling and obsessing over a different thing."

Deng looks to staff his teams with a series of problem-solvers, he said. He thinks about needs that aren't being met, and then works to hire specialists who can close the gap. "It's almost like you're playing an RPG where everyone has different sliders and you have to create this super team where everyone actually spikes in different ways," he said. When Deng would search for new additions, he said he largely looked for two traits in applicants: the potential for autonomy and an appetite for continued improvement. Deng did not respond to a request for comment from Business Insider.

"I think the growth mindset thing is so important to me — that we build an org where people are self-reflective, and want to get better, and take that feedback, and give that feedback," he said. "And it just is this meta unlock that I found to be true."
Yahoo
03-07-2025
- Business
- Yahoo
How BBVA is using AI to save employees 3 hours a week
BBVA is expanding its AI offerings to employees after the bank determined that employees who used artificial intelligence saved hours of time on their work. On Wednesday, the Spanish bank announced a partnership with Google Cloud to offer Google's AI assistant Gemini to all employees. The partnership grants BBVA employees access to the standalone generative-AI-powered Gemini app, Google Workspace with Gemini, and Google's AI-powered research assistant NotebookLM.

This partnership is the second time BBVA has introduced generative AI tools to its workforce. A year ago, BBVA signed up for 3,300 ChatGPT Enterprise licenses through a partnership with OpenAI. As of May 2025, 83% of the bank's licensed users used ChatGPT at least once a day and had created approximately 3,000 specialized assistants for specific tasks, according to a company statement. The tasks automated by ChatGPT at BBVA include translations, document summaries, coding assistance, and legal analysis of financial information. According to the company, internal data revealed that using ChatGPT to automate tasks saved the bank's employees an average of 2.8 hours a week, time that can then be spent on more strategic or customer-focused work instead of rote tasks. BBVA increased the number of ChatGPT Enterprise licenses to 11,000 in May 2025.

The newly announced deal with Google Cloud opens up the capacity for BBVA's 100,000 global employees to incorporate AI into their daily work. "We expect that the widespread adoption of generative AI across these tools will improve productivity and the work experience of all employees, regardless of their role," said BBVA's global head of workplace Juan Ortigosa.

The bank also announced the launch of a mandatory training program, called "AI Express," on the use of AI for its employees. The program follows the European Union's AI Act and BBVA's internal data protection and confidentiality policies. Access to Google Workspace with Gemini, the Gemini app, and NotebookLM will be granted only to employees who have completed the internal training.


Time Business News
02-07-2025
- Time Business News
Is ChatGPT Safe to Use? What You Need to Know
ChatGPT has quickly become one of the most talked-about technologies in recent years. From writing assistance to coding help, its capabilities are extensive. But with such rapid adoption comes an equally pressing question: Is ChatGPT safe to use? Let's explore the facts, concerns, and safety measures surrounding ChatGPT in detail.

ChatGPT is an AI language model developed by OpenAI. It generates human-like responses based on user input and is widely used for content writing, customer support, education, and even SEO services. Its popularity has surged across industries, making it essential to understand what makes this tool tick—and how secure it is for everyday use.

One of the primary safety concerns users have is about data privacy. According to OpenAI's official documentation, ChatGPT does not actively collect personal data. However, the data shared with ChatGPT during a session may be used to improve its performance unless you are using a version that explicitly disables data logging. It's important to know:

- Free users' chats may be reviewed to fine-tune the model.
- ChatGPT Enterprise and Pro versions offer enhanced data protection features.
- No data is sold to third parties.

To stay secure, avoid sharing sensitive personal information such as passwords, account numbers, or confidential business strategies.

Can ChatGPT be misused? Yes, like any tool, ChatGPT can be misused, especially in the hands of malicious users. There have been cases where people attempted to generate harmful content or use AI-generated messages to mimic real people or companies. OpenAI has implemented safeguards, including:

- Content filters that block harmful or inappropriate outputs.
- Ethical training datasets that reduce bias and misinformation.
- Usage monitoring for abusive patterns.

Still, no system is perfect. Responsible use is key. For businesses using AI tools in SEO services or customer interactions, it's wise to double-check the content before publishing.

Businesses integrating ChatGPT—whether through APIs or embedded chat interfaces—often wonder how secure this AI assistant is. OpenAI provides:

- Secure API connections via HTTPS
- Data encryption in transit and at rest
- User-level access control for team usage

Moreover, businesses using ChatGPT Enterprise get access to features like zero data retention, ensuring that prompts are not stored or used for training. For SEO service providers, these enhanced features allow safer collaboration with clients when generating content, keyword plans, or technical audits. A minimal integration sketch follows the legal notes below.

The legal and ethical aspects of using AI tools like ChatGPT are evolving. Here's what you should be aware of:

- Content ownership: OpenAI grants users the rights to the outputs they generate, but generated content may still need to be checked for originality to avoid plagiarism.
- Copyright violations: If ChatGPT unintentionally generates content that mimics protected works, the user is responsible for vetting and editing the material.
- Fair use policies: Be sure to comply with copyright and content-sharing rules when using AI for blog posts, advertisements, or SEO projects.
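To make those integration points concrete, here is a minimal sketch of a server-side ChatGPT API call. It assumes the official openai Python SDK (v1+) and an OPENAI_API_KEY environment variable; the model name and the summarize() helper are illustrative choices, not a definitive implementation. The SDK talks to the API over HTTPS by default, and reading the key from the environment keeps credentials out of source code.

```python
# Minimal integration sketch. Assumptions: the official `openai` Python
# SDK (v1+) is installed and OPENAI_API_KEY is set in the environment.
# The model name and the summarize() helper are illustrative only.
import os
from openai import OpenAI

# The SDK sends requests over HTTPS; the key never appears in source code.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def summarize(text: str) -> str:
    """Ask the model for a short summary of the given text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("ChatGPT Enterprise adds admin controls and enhanced data protection."))
```

Issuing a separate key per team member, rather than sharing one, is the simplest way to get the user-level access control mentioned above.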
Being transparent with users or clients about the use of AI tools enhances trust and credibility.

ChatGPT, in its core design, isn't a cybersecurity risk. However, there are indirect ways it could be misused:

- Phishing simulation: Hackers may use AI to craft convincing phishing emails.
- Social engineering: The model could potentially mimic the tone and structure used by trusted sources.
- Malware code suggestions: Though OpenAI has safety nets, determined users may attempt to generate code snippets that serve malicious purposes.

To minimize risks, always supervise the AI's output—especially when used for development, scripting, or automating business operations.

Will ChatGPT replace professionals? This is a common misconception. While ChatGPT automates repetitive tasks and content generation, it doesn't replace skilled professionals—it augments them. Writers, marketers, and SEO service providers can use it to enhance productivity, scale projects, and brainstorm ideas faster. It's especially effective in:

- Generating keyword-optimized blog drafts
- Writing SEO meta titles and descriptions
- Creating structured outlines for landing pages
- Drafting ad copy for Meta or Google Ads

Human review and expertise are still essential for context, accuracy, and brand tone.

Here are a few smart ways to use ChatGPT safely (the short sketch at the end of this piece illustrates the first tip):

- Avoid inputting personal or confidential data.
- Use trusted platforms or the official OpenAI portal.
- Do not treat AI-generated responses as facts without validation.
- Enable privacy settings, especially when using ChatGPT in apps or browser plugins.
- Review all generated content for accuracy, tone, and legal compliance before publishing.

For agencies offering SEO services, this means combining the efficiency of AI with strategic human judgment.

OpenAI and other tech firms are working toward stricter safety protocols, more transparent AI behavior, and ethical use guidelines. As users and developers, it's crucial to contribute to this ecosystem by using AI responsibly and reporting any misuse or vulnerabilities found during use. Whether you're an individual blogger or a digital agency specializing in SEO services, ChatGPT can be a powerful assistant—if used wisely and securely.

So, is ChatGPT safe to use? Yes, ChatGPT is generally safe, especially when basic safety practices are followed. Businesses and professionals using it in digital marketing, customer support, or content creation can benefit greatly from its speed and versatility. However, it's not a set-it-and-forget-it tool. It works best when paired with human expertise, critical thinking, and ethical consideration. Agencies providing SEO services can find ChatGPT particularly useful in scaling efforts, crafting SEO-optimized content, and brainstorming strategies faster than ever before—while maintaining full control and editorial oversight.
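As a companion to the "avoid inputting personal or confidential data" tip above, here is a deliberately simple sketch of a pre-submission filter that masks obvious sensitive patterns before a prompt leaves your machine. The regular expressions are naive, hypothetical examples rather than a complete PII detector; a production setup would use a dedicated redaction library.

```python
# Naive prompt-redaction sketch; the patterns below are illustrative
# assumptions, not a complete or reliable PII detector.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),   # card-like digit runs
    (re.compile(r"(?i)password\s*[:=]\s*[^\s,]+"), "password: [REDACTED]"),
]

def redact(prompt: str) -> str:
    """Mask obvious sensitive patterns before the prompt is sent anywhere."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Reset my account, password: hunter2, card 4111 1111 1111 1111"))
# -> Reset my account, password: [REDACTED], card [CARD_NUMBER]
```

Even a crude filter like this catches the most common accidental leaks; human review remains the backstop for anything it misses.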