
Siri-ously? AI Got Clever, Not Conscious
By 2025, artificial intelligence had become an influential social force, not just a technological trend. AI systems composed code, drafted legislation, diagnosed medical conditions, and even wrote music. Yet it became clear that while machines had grown faster, cleverer, and oddly creative, they still lacked essential elements of intelligence: common sense, empathy, and humanity.

Technically, 2025 saw many breakthroughs. OpenAI's GPT-4.5 and Anthropic's Claude 3.5 became popular choices for solving complex problems in business. Google DeepMind's Gemini impressed researchers with its strong reasoning skills. Meta's open-source Llama 3 models put cutting-edge tools in more people's hands. AI agents such as Devin and the Rabbit R1 were introduced to handle tasks ranging from personal chores to business processes.

Yet beyond these advances, a sobering reality set in: AI still does not really understand us. Generative models flirted with creativity but faltered on ethics. Deepfakes, once easy to detect, became nearly impossible to distinguish from real videos and sowed confusion during political campaigns in several nations. Governments scrambled to regulate the provenance of content, while firms such as Adobe and OpenAI embedded cryptographic watermarks that were circumvented or ignored shortly after.

AI struggled most with social and emotional understanding. Even with advances in multimodal learning and feedback, AI agents could not convey genuine empathy. This was especially evident in healthcare and education, where interactions centre on the human. Patients were reluctant to trust diagnoses from emotionless avatars, and students grew more anxious when working with inflexible robotic tutors.

The year was not all alarm bells. Open-sourcing low-barrier models sparked a surge of bottom-up innovation, particularly in the Global South, where AI powered solutions in agriculture, education, and infrastructure. India's Bhashini project, built on local-language AI, became a template for inclusive tech development.

One thing is certain about 2025: AI is impressive but fragile. It cannot handle deeper meaning, but it can convincingly simulate intelligence. Machines are not yet intelligent enough to guide us, but they are intelligent enough to astonish us. Humans still hold the advantage, yet the gap is closing faster than we imagined.

In the end, the year was less about machines outsmarting humans than about redefining what intelligence is. AI showed limits in judgment, compassion, and moral awareness even as it displayed speed, scope, and intricacy. These are not flaws; they are reminders that context is as vital to intelligence as computation. The real innovation lies not in choosing between machines and humans but in building a partnership in which each complements the other's strengths. Real progress starts there.

Related Articles

Mint
Zhipu challenges OpenAI with upcoming GLM-4.5 open-source model, launch likely next week
Zhipu, a recently rebranded Chinese artificial intelligence firm, is reportedly preparing to launch the latest version of its open-source language model, GLM-4.5, early next week. The move places it among a rising number of Chinese tech companies accelerating their efforts to make advanced AI models freely accessible.

According to a Bloomberg report, the release is expected as soon as Monday. GLM-4.5 will serve as an upgrade to the firm's existing flagship model, as Zhipu attempts to position itself as a serious contender in the global AI landscape. The company has not responded to media enquiries regarding the release timeline or technical details of the new model.

Zhipu's move reflects a broader trend in China's AI sector, where multiple firms are pivoting towards open-source strategies. Moonshot recently unveiled its Kimi K2 model to the public, while StepFun released a non-proprietary version of its own reasoning system. The open-source approach is increasingly seen as a way to drive adoption and influence global AI standards, especially as Western competitors like OpenAI and Anthropic continue to expand.

Separately, the company is reportedly re-evaluating its listing strategy. Bloomberg News reported earlier this month that Zhipu is considering moving its planned initial public offering from mainland China to Hong Kong. The potential shift, if executed, could raise up to $300 million and is said to be under discussion with financial advisers. Zhipu counts Chinese tech giants Alibaba and Tencent among its investors.

The announcement of GLM-4.5 follows a period of intensified activity among Chinese AI startups, many of which are seeking both market visibility and credibility in a sector dominated by American firms. The release of more powerful and accessible models is likely to add momentum to the competition, both within China and internationally. (With inputs from Bloomberg)


Time of India
Chatbot culture wars erupt as bias claims surge
For much of the last decade, America's partisan culture warriors have fought over the contested territory of social media — arguing about whether the rules on Facebook and Twitter were too strict or too lenient, whether YouTube and TikTok censored too much or too little and whether Silicon Valley tech companies were systematically silencing right-wing voices.

Those battles aren't over. But a new one has already started. This fight is over artificial intelligence, and whether the outputs of leading AI chatbots like ChatGPT, Claude and Gemini are politically biased.

Conservatives have been taking aim at AI companies for months. In March, House Republicans subpoenaed a group of leading AI developers, probing them for information about whether they colluded with the Biden administration to suppress right-wing speech. And this month, Missouri's Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft and OpenAI are leading a 'new wave of censorship' by training their AI systems to give biased responses to questions about President Trump. On Wednesday, Trump himself joined the fray, issuing an executive order on what he called 'woke AI'. 'Once and for all, we are getting rid of woke,' he said in a speech.

Republicans have been complaining about AI bias since at least early last year, when a version of Google's Gemini AI system generated historically inaccurate images of the American founding fathers, depicting them as racially diverse. That incident drew the fury of online conservatives, and led to accusations that leading AI companies were training their models to parrot liberal ideology. Since then, top Republicans have mounted pressure campaigns to try to force AI companies to disclose more information about how their systems are built, and tweak their chatbots' outputs to reflect a broader set of political views.
Now, with the White House's executive order, Trump and his allies are using the threat of taking away lucrative federal contracts — OpenAI, Anthropic, Google and xAI were recently awarded Defense Department contracts worth as much as $200 million — to try to force AI companies to address their concerns. The order directs federal agencies to limit their use of AI systems to those that put a priority on 'truth-seeking' and 'ideological neutrality' over disfavoured concepts like diversity, equity and inclusion.


Time of India
Master the Prompt, Master the Future: Why This Skill Is a Game Changer
In an artificial intelligence-driven world, prompt engineering is the secret sauce that powers next-generation productivity, creativity, and innovation. What was once a niche skill hiding in the backrooms of AI research labs and among front-line innovators is now becoming a fundamental competence across businesses; marketing, journalism, software development, and customer experience are just some examples. The age of AI has arrived. It's not coming. And prompt engineering is the language we need to learn to speak it.

So what is prompt engineering? Simply put, it's the deliberate design of inputs (or "prompts") to direct AI models, particularly generative models such as ChatGPT, Claude, Gemini, or Midjourney, to generate highly relevant, effective, or innovative responses. But don't get it wrong: this is not merely asking smarter questions. It's a matter of mastering the subtleties of context, tone, constraints, and ordering, essentially learning how to communicate with machine logic in human terms.

Why should we care? Because AI is only as intelligent as the prompts it's given. Garbage in, garbage out. Whether you're asking an AI to summarise legal documents, create marketing materials, generate an image, or provide customer support, the difference between a mediocre result and a breakthrough solution often rests on how well you design the prompt. In today's hybrid human-AI workflow, the prompt is your steering wheel.

Organisations are waking up to it. Innovative companies are already upskilling teams with prompt engineering playbooks. Job titles are changing: "Prompt Designer" and "AI Interaction Specialist" are appearing on job boards everywhere, with salaries to prove it. LinkedIn has even introduced prompt engineering as a skill you can highlight. It's no longer exclusive to techies; it's becoming a universal skill. Creators, analysts, and even CEOs are adopting it as a meta-skill, a force multiplier that boosts decision-making, productivity, and ideation.
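The ingredients of a well-designed prompt (context, tone, constraints, ordering) can be made concrete with a small sketch. The function and field names below are illustrative assumptions, not part of any real AI product's API; the point is simply that a structured prompt carries far more steering information than a bare question.

```python
# A minimal sketch of prompt engineering: composing context, tone,
# constraints, and ordering into one structured prompt.
# build_prompt and its fields are hypothetical, for illustration only.

def build_prompt(task, context, tone, constraints):
    """Assemble a structured prompt, grounding the model before the task."""
    lines = [
        f"Context: {context}",    # ground the model first
        f"Task: {task}",          # then state the job
        f"Tone: {tone}",          # steer style explicitly
        "Constraints:",           # finally, bound the output
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague prompt versus an engineered one for the same request:
vague = "Summarize this contract."
engineered = build_prompt(
    task="Summarize the attached contract for a non-lawyer.",
    context="You are assisting a small-business owner.",
    tone="Plain English, neutral",
    constraints=["Under 150 words", "Flag any termination clauses"],
)
print(engineered)
```

Both prompts ask for a summary, but the engineered version tells the model who the reader is, what register to use, and where to stop; the ordering (context before task, constraints last) is one common convention, not a fixed rule.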
Need a career competitive advantage? Learn to prompt. Want to stand out in your next pitch deck or client meeting? Speak AI fluently with prompt engineering.

But here's the catch: the space is still wide open. There is no set rulebook yet, and that's the opportunity. Prompt engineering is more art than science for now. It favours curiosity, experimentation, and iteration. That means people who begin now can help define its standards, best practices, and even the ethics of AI-human collaboration.

Just as Excel was the 2000s' must-have skill and coding ruled the 2010s, prompt engineering is emerging as the 2020s' power skill. It's not about automating humans; it's about augmenting them. The people who can prompt with precision, creativity, and purpose will be driving the next revolution of digital change.