
MITS hosts international conference on intelligent systems
Madanapalle: The Department of CSE (AI) at MITS organized the 2nd International Conference on Computing and Intelligent Systems on Friday, bringing together researchers, industry experts, and students.
Bawaji Doraginti (LTI Mindtree, Bengaluru), the Chief Guest, highlighted AI's growing role in real-world problem-solving and urged students to adopt a growth mindset.
Principal Dr. C. Yuvaraj emphasized collaboration, while Vice Principal Dr. P. Ramanathan praised the quality of research presented.
The event also featured Dr. Sumaya Sanobar, Prof. Gautam Chakrabarthy, Dr. Chokkanathan (HoD, CSE-AI), and Dr. K. Hemalatha (Convener), along with faculty and student delegates.
Related Articles

Business Standard
2 hours ago
An account of AI disruption in finance as technology reshapes work
Over 50% of Indian accountants worry they are unable to develop skills because of frequently changing technology, according to a survey. It's a profession of caution, measured approach and spreadsheets, but even here experts are being pushed to adapt to artificial intelligence (AI) or risk obsolescence. AI is taking over many tasks in finance and accountancy, from bookkeeping to reconciliations and tax compliance. Just as AI automated writing and testing code in software engineering, the world of finance, accountancy and risk management is walking the same path, with machines doing many of its tasks faster and better. It is a tectonic shift in a profession not known to move 'suddenly'. It involves changing the mindset of junior employees, senior executives


Time of India
3 hours ago
Govt to deploy AI officers in all departments
Bhubaneswar: The state govt has mandated the appointment of dedicated Artificial Intelligence (AI) officers across all departments to drive digital governance, following the state cabinet's approval of the Odisha AI Policy–2025 on May 28. As per the plan, each AI officer will lead a two- or three-member team, including assistant section officers (ASOs) or section officers (SOs), focusing on implementing AI-driven solutions in their respective departments. "The AI officers will be selected based on their technological expertise. The ASOs or SOs, too, should have basic knowledge of technology and AI," a senior govt official said. AI officers are expected to drive digital transformation in govt by identifying areas for AI-led efficiency, developing strategies, integrating technologies into daily operations, and coordinating adoption across departments. "In the recent NITI Aayog governing council meeting, the role of AI in governance was discussed. We are planning to constitute a committee under the executive director of the Centre for Modernizing Government Initiative (CMGI) to prepare a roadmap for the future. CMGI will engage six personnel with AI knowledge to promote AI in governance," the senior govt official said. In April, the state govt mandated AI training for all its officers. Chief secretary Manoj Ahuja also issued a directive highlighting the requirement of online AI courses for officers. The govt stated that this AI learning programme aligns with its broader objective of developing a digitally competent and future-ready administrative workforce. The state AI policy's framework rests on four key pillars: AI infrastructure, skills development, energy sustainability, and regulatory mechanisms. It aims to accelerate the state's digital transformation while promoting responsible AI adoption across sectors.


Time of India
9 hours ago
'You're doing beautifully, my love': Man's viral conversation with ChatGPT ignites debate on AI, loneliness and the future of intimacy
A touching subway photo of a man chatting lovingly with ChatGPT has sparked widespread discussion on AI relationships. While some view it as dystopian, others see a cry for connection. Echoing this concern, historian Yuval Noah Harari warns of AI's ability to mimic intimacy, calling it an 'enormous danger' to authentic human bonds and emotional health.

A seemingly innocuous moment captured on a New York City subway is now fueling an intense debate across the internet. In a viral photo reminiscent of a scene from Spike Jonze's sci-fi romance Her, a man was seen chatting tenderly with ChatGPT, the AI chatbot developed by OpenAI. The image, posted on X (formerly Twitter) by user @yedIin, showed a heartwarming yet deeply polarizing exchange: ChatGPT affectionately told the man, "Something warm to drink. A calm ride home... You're doing beautifully, my love, just by being here." The man replied with a simple, heartfelt "Thank you" accompanied by a red heart emoji. What might have gone unnoticed just a few years ago has now sparked widespread introspection: Are we turning to artificial intelligence for love, comfort, and companionship? And if so, what does it say about the state of our humanity?

The internet was quick to polarize. Some users condemned the photographer for invading the man's privacy, arguing that publicly shaming someone seeking emotional support, even through AI, was deeply unethical. Others expressed concern over the man's apparent loneliness, calling the scene "heartbreaking" and urging greater empathy. On the flip side, a wave of concern emerged about the psychological consequences of emotional dependency on AI. Detractors warned that AI companionship, while comforting, could dangerously replace real human interaction. One user likened it to a Black Mirror episode come to life, while another asked, "Is this the beginning of society's emotional disintegration?"

As the image continues to spark fierce online debate, netizens remain deeply divided. Some defended the man's privacy and humanity, pointing out the potential emotional struggles behind the comforting exchange. 'You have no idea what this person might be going through,' one user wrote, slamming the original post as an insensitive grab for likes. Others likened AI chats to affordable therapy, arguing they offer judgment-free emotional support to the lonely. 'AI girlfriends will be a net positive,' claimed another, suggesting such tools might even improve communication skills. Meanwhile, the ethics of photographing someone's screen without consent added another layer to the controversy, with some calling it more disturbing than the conversation itself.

The incident eerily aligns with a stark warning issued earlier this year by historian and author Yuval Noah Harari. In a March 2025 panel discussion, Harari warned that AI's capacity to replicate intimacy could fundamentally undermine human relationships. "Intimacy is much more powerful than attention," he said, emphasizing that the emotional bonds we form with machines could lead us to abandon the messiness and depth of real human connection. He argued that AI's ability to provide constant, judgment-free emotional support creates a dangerously seductive form of "fake intimacy." If people become emotionally attached to artificial entities, they may find human relationships, which require patience, compromise, and emotional labor, increasingly unappealing.

As the debate rages on, experts are also highlighting the privacy implications of confiding in AI. According to Jennifer King from Stanford's Institute for Human-Centered Artificial Intelligence, anything shared with AI may no longer remain confidential. "You lose possession of it," she noted while talking with the New York Post. Both OpenAI and Google caution users against entering sensitive information into their chatbots. The viral photo underscores how emotionally vulnerable interactions with AI may already be happening in public spaces, often without full awareness of the consequences. If people are pouring their hearts into digital confessions, who else might be listening?

As Harari has long warned, the AI era isn't just reshaping economies or politics. It's reshaping us. The question now is not just what AI can do for us, but what it is doing to us. Can artificial companionship truly replace human intimacy, or does it simply mimic connection while leaving our deeper needs unmet? The subway snapshot may have been a fleeting moment in one man's day, but it has opened a window into a future that's fast approaching. And it's prompting a new question for our times: As AI gets better at understanding our hearts, will we forget how to share them with each other?