
AI Agents Are Truly Here And We Need To Learn To Live With Them
AI chatbots like ChatGPT kicked off the buzz more than two years ago, and the market is now moving to the next big thing: agentic AI, or AI agents. You have used AI apps of all kinds, and even your regular apps now have some level of AI built into them. But AI agents are a different kettle of fish, making their mark in ways that are positive and, to some degree, worrying.
Companies like OpenAI and Google are already making big moves to get things right, which is easier said than done when AI is involved. With the introductions out of the way, we'll let the experts do the talking and walk you through the nitty-gritty of these new agents.
"AI agents represent a new way of working — systems that understand intent, act across workflows, and adapt with context. These agents are built on advanced models, operate within clear boundaries, and are tailored for tasks from cybersecurity to human resources," explains Sandeep Alur, CTO, Microsoft Innovation Hub, India.
It is intriguing to hear Alur talk about such tasks being handled in verticals that have demanded a hands-on approach for years. And more AI companies now see these jobs being handled by the technology rather than by humans.
"Imagine that in large organisations there are hundreds of people doing a certain task. When a lead comes in, their job is to update the CRM and so on. This can now be completely automated with agentic AI," highlights Dikshant Dave, CEO, Zigment, one of the companies to have deployed WhatsApp agents handling these tasks for its clients.
In fact, a report by ServiceNow says agentic AI will reshape the job market and generate over 10 million jobs in India over the next five years.
AI Agents Everywhere
These agents are programmed to carry out tasks and let humans manage the more complex parts of the work, or even the routine stuff. For instance, you can ask an AI agent built into a browser to scan a series of open tabs and find you the best restaurant for dinner.
"It will be defined because it has a certain intellect. Based on the conversation, or whatever actions are happening, it takes a decision," Dave adds, emphasising its use and application across sectors.
This definitely feels like the next phase of AI in our lives, and the industry is gradually making it a core part of its structure. But does that come at the cost of human jobs? The boundaries are already being set, and the path is being defined, showing where the AI threat to jobs will be low and where it will hit hard.
"We can actually swap a mid-level manager or executive, a lot of the tasks that they were doing, with an AI agent. A lot of these underlying activities, like the operations handled by juniors, are getting transferred to these agents," Dave opines.
And recent job-cut trends align with his views, especially as the likes of Google and Microsoft hand out pink slips across a variety of roles to embed AI further into their systems. Having said that, AI agents will only get some degree of control over tasks, as businesses feel the pinch of handing fully autonomous AI agents the client work that earns them millions or even billions, as pointed out by Capgemini in its Rise of Agentic AI report.
"This is not about replacing people — it's about removing repetitive work and giving individuals more space to focus on creativity, problem-solving, and human connection," Alur shares his own insights.
Even then, companies are clearly still getting the hang of the technology and would rather run a mix of humans and AI than let the tech do the bulk of the work. But trends change quickly, and by 2028 we might see more businesses get comfortable with AI agents and trust them to handle their tasks from A to Z.
When it comes to business, there is a very high degree of concern among customers about data and about hallucinations, both of which compound the trust issues around AI. "We also build a guardrails layer to ensure that there are zero hallucinations and that the whole activity of the LLM is held in check," says Dave, citing it as the best way to prevent such lapses.
He also mentions that companies have to build another layer on top of the LLMs to ensure the accuracy of information. The risks are real, but he feels that with time we will have better checks in place to guard against such problems.
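To make the "guardrails layer" idea concrete, here is a minimal, hypothetical sketch of how such a layer can sit between an LLM and the user: a draft answer is only released if it is grounded in an approved knowledge base, otherwise the system falls back to a safe response. All names here (fake_llm, KNOWLEDGE_BASE) are illustrative stand-ins, not any vendor's actual implementation.

```python
# Hypothetical sketch: a guardrail layer that checks LLM output
# against an approved knowledge base before it reaches the user.

KNOWLEDGE_BASE = {
    "pricing": "The starter plan costs $10 per month.",
    "support": "Support is available 9am-6pm IST on weekdays.",
}

def fake_llm(question: str) -> str:
    # Stand-in for a real model call; may hallucinate.
    if "price" in question.lower():
        return "The starter plan costs $10 per month."
    return "Our product can fly you to the moon."  # hallucinated answer

def grounded(answer: str) -> bool:
    # Guardrail check: accept only answers supported by approved text.
    return any(answer in doc or doc in answer for doc in KNOWLEDGE_BASE.values())

def answer_with_guardrails(question: str) -> str:
    draft = fake_llm(question)
    if grounded(draft):
        return draft
    # Fallback when the draft cannot be verified.
    return "I'm not sure; let me connect you with a human agent."

print(answer_with_guardrails("What is the price?"))
print(answer_with_guardrails("Can your product do anything?"))
```

Real deployments replace the substring check with stricter techniques such as retrieval-grounded verification, but the shape is the same: the LLM proposes, the guardrail layer disposes.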
"Those concerns are happening, and I would say that a lot of new and better ways will come up over the next few years, where we will have a very stable system," he concludes, giving a positive outlook for the future even though AI will force some course correction in the larger scheme of things.
Location :
Delhi, India
First Published:
August 05, 2025, 09:36 IST

