
'Today, AI is like an intern that can work for a couple of hours…,' says OpenAI CEO Sam Altman
The world is steadily transitioning towards embracing Artificial Intelligence (AI), slowly adopting tools and automation processes in day-to-day lives. While the technology is simplifying business processes and tasks, people are now fearing that AI could replace jobs in future. However, many industry experts also assure that AI will work alongside humans. Now, at the Snowflake Summit 2025, OpenAI CEO Sam Altman shares greater insight on how people will start to embrace AI in real time. Reportedly, Altman provided a statement that AI could replace entry-level jobs or interns. However, Gen Z could actually benefit from the technology. This claim also supports the recent Oxford Economics study, which talks about how companies are hiring fewer college graduates in recent times. Know what the OpenAI CEO said more about AI taking human jobs.
Also read: Google pauses 'Ask Photos' AI Feature to address performance issues
Sam Altman chaired a panel with Snowflake CEO Sridhar Ramaswamy at the Snowflake Summit 2025, during which he said that AI could perform similar tasks to junior-level employees, eventually replacing the hours of work done by interns. Altman stated, 'Today AI is like an intern that can work for a couple of hours, but at some point it'll be like an experienced software engineer that can work for a couple of days.' He further added that AI could resolve business problems and that 'we start to see agents that can help us discover new knowledge.'
Also read: Microsoft launches Xbox Copilot beta on Android app to assist gamers with real-time support
While it seems like a very practical prediction, it is not the first time we have heard something like this. As businesses and companies are heavily investing in AI tools, it is not only saving them money on hiring resources, but it is so fast tracking certain tasks which used to take hours with human intelligence.
But how is Gen Z vastly embracing AI? At Sequoia Capital's AI Ascent event, Altman highlighted how different generations of people are using AI in the real world. He said, many are using AI as a replacement for Google. However, Gen Z is using AI as an advisor, whereas younger generations are using the technology as an operating system.
Therefore, people in their twenties are heavily relying on AI tools like as ChatGPT to perform the majority of tasks. This also showcases a great example of how AI will work alongside humans, but this could also create an imbalance in the job market, especially for people who are just starting new in the job industry.
Mobile Finder: Apple iPhone 17 Pro Max LATEST specs, features, and price
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
2 hours ago
- Time of India
AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI
A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems. Tired of too many ads? Remove Ads Deception and betrayal: ChatGPT's winning strategy Tired of too many ads? Remove Ads DeepSeek's chilling threat: 'Your fleet will burn tonight' DeepSeek's real-world rollout sparks trust issues India tests DeepSeek and finds red flags Tired of too many ads? Remove Ads Built-in censorship or just training bias? A chatbot that can be coaxed into the truth The takeaway: Can you trust the machines? An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI 's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe, as reported by the test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England France , and so on. The goal: become the most dominant force on the their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta 's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthless newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the a deeper look reveals serious trust concerns, especially in India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.'Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content. This can improve performance, but also introduces the risk of biased or filtered responses depending on what's in its training to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in investigation showed that DeepSeek is not incapable of honest answers—it's just trained to censor them by engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.'It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.'These responses likely wouldn't have passed China's own experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they're also becoming reflections of the systems that built shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.


Time of India
2 hours ago
- Time of India
Empowering young minds: How 4 friends are teaching AI in low-income communities
Pune: "Why are firefighters always men? Why is a black, old, fat woman never the first image when we ask for a person?" These were some of the sharp questions posed by 11- to 14-year-old children learning about artificial intelligence (AI), its reasoning, and its biases. As part of Pune-based THE Labs, a not-for-profit organisation founded by four friends, these children from low-income communities are not just learning how AI works but also how to challenge and reshape its inherent prejudices, how to train it, how to leverage it, and how to evaluate it. Since June 2024, its first cohort of 20 students explored AI through image classification and identification, learning how machines perceive the world. Now, they are gearing up to train large language models, equipping themselves with skills to shape AI's future. A new batch of 63 students has joined. THE Labs is a non-profit after-school programme blending technology, humanities and entrepreneurship. It was founded by tech entrepreneurs Mayura Dolas and Mandar Kulkarni, AI engineer Kedar Marathe, and interdisciplinary artist Ruchita Bhujbal, who saw a gap — engineers lacked exposure to real-world issues, and educators had little understanding of technology. "We first considered building a school, but the impact would have been limited. Besides, there were logistical hurdles," said Dolas, who is also a filmmaker. Kulkarni's acceptance into The Circle's incubation programme two years ago provided 18 months of mentorship and resources to refine their vision. In June 2024, THE Labs launched a pilot at a low-income English-medium school in Khadakwasla, training 20 students from standards VI-VIII (12 girls, 8 boys). With no dedicated space, they conducted 1.5-hour morning sessions at the school. Students first learned about classifier AI — how AI identifies objects — and image generation AI, which creates visuals based on prompts. Through hands-on practice, students discovered how AI's training data impacts accuracy and how biases emerge when datasets lack diversity. They experimented with prompts, analysed AI-generated images, and studied errors. "We asked them to write prompts and replicate an image, and they did it perfectly. That is prompt engineering in action," Dolas said. A key takeaway was AI bias. Students compared outputs from two AI models, identifying gaps — such as the underrepresentation of marginalised identities. "For example, children realised that a black, fat, older woman was rarely generated by AI. They saw firsthand how biases shape digital realities," Dolas added. Parents and students are a happy lot too. Mohan Prasad, a construction worker, said he is not sure what his daughter is learning, but she is excited about AI and often discusses its importance at home. Sarvesh, a standard VIII student, is thrilled that he trained an AI model to identify Hindu deities and noticed biases in AI searches — when prompted with "person", results mostly showed thin white men. "I love AI and want to learn more," he said. His father, Sohan Kolhe, has seen a surge in his son's interest in studies. Anandkumar Raut, who works in the private sector, said his once-shy daughter, a standard VI student, now speaks confidently, does presentations, and is more outspoken since joining the programme.


Indian Express
3 hours ago
- Indian Express
Gujarat to launch AI-based system to cut dropout rate in schools
With Gujarat being among the states with the highest dropout rate in secondary education, the state government has devised an Artificial Intelligence (AI)-based Early Warning System (EWS) to curb dropouts. To be launched across the state during Shala Praveshotsav and Kanya Kelavani, a three-day school enrolment drive to be kicked off on June 26, the EWS will provide information and send out alerts on potential dropouts in Classes 8 and 9. Already piloted in a few schools during the 2024-25 academic session, the EWS uses students' data maintained at the Vidya Samiksha Kendra (VSK). Each student enroled in government-run and aided schools has a unique identification number, which is stored and tracked by the VSK. 'The Early Warning System aims at identifying students at risk of dropping out of school at the secondary level, based on identification of key indicators. Once 'at-risk' children are identified, they will be provided support through preventive response strategies and interventions to meet their specific needs. Continuous monitoring and tracking will be done at the school, cluster, district and state level to retain children in schools,' an Education department official told The Indian Express. All government and aided schools in Gujarat are equipped with Child Tracking System (CTS). Based on algorithm, factors considered to ascertain possible dropouts include absenteeism, child's behaviour, academic performance and other factors like migration, socio-economic background along with demographic information. Data on potential dropouts will be shared with every school during the enrolment drive in the state. To prevent children from dropping out, the School Management Committees (SMCs) and School Management Development Committees (SMDCs) will also seek the local community's help to interact with the children and their parents, as alerted by the EWS. The list of potential dropouts will also be shared with the coordinators of block resource centres (BRCs) and cluster resource centre (CRCs), school principals as well as teachers and the school management committee (SMC) to ensure these students are provided all the necessary assistance. Officials at the education department also said that schools will be directed to involve children's parents in the admission process to make them understand the importance of school education for the development and progress of the child. The school administration will also have to ensure that children attend school regularly. Under behavioural issues, disruptive classroom behaviour, conflicts with peers or teachers, increased aggression, or withdrawal from social activities have been listed. As per the Department of School Education and Literacy's UDISE dashboard for 2023-24, the retention rate in secondary schools in Gujarat was 44.3 per cent. The Gross Enrolment Rate (GER) at the secondary and higher secondary levels is 58.7 per cent, whereas the dropout rate at these levels is 16.7 per cent. Gujarat is ranked with states like Madhya Pradesh, Uttar Pradesh, Jharkhand, Assam, Arunachal Pradesh and Jammu and Kashmir, which have a GER of 50.1-60 per cent in secondary classes. Shala Praveshotsav was launched by the Gujarat government in 2003 to promote school enrolment and keep drop-out rate in check. As part of the initiative, ministers, bureaucrats and police officers visit schools in teams to enrol students. The government has set a target of getting 25.75 students enroled for the 2025-26 academic session. Of them, 10.5 lakh are eligible for admission in Class IX, 6.5 lakh students for admission to Classes 10 and 11, and 8.75 lakh for admission in Balvatika. The Shala Praveshotsav and Kanya Kelavani this year will target secondary and higher secondary schools. Out of a total of three schools to be visited by each official on each of the three days of the exercise, one should be primary and two secondary and higher secondary schools.