
'Use AI or get out…': GitHub CEO warns developers
According to him, AI is on track to write 90 percent of all code within the next two to five years. Rather than panic, though, Dohmke claims developers are showing optimism. Many are actively learning how to collaborate with AI tools, refining their prompting skills, reviewing machine-generated code, and stepping into new roles that demand higher-order thinking.
'This is no longer a question of productivity. It's a question of reinvention,' he added.
Dohmke also believes this change must begin at the education level. He argued that coding education needs to evolve beyond syntax drills. Instead, students should be trained to question AI-generated answers, think critically, and build a deeper understanding of system-level design.
His comments follow an internal GitHub study that tracked how 22 developers are currently using AI in their workflow. The findings revealed that those who embraced the technology fully experienced significant changes, not only in how they operate but also in how they view their careers. Developers reported increased ambition, improved problem-solving, and higher job satisfaction.
'They're not writing less code – they're enabling more complex, system-level work,' said Dohmke.
He stressed that AI is not replacing developers, but transforming their roles. Coders are increasingly becoming system designers, reviewers, and AI directors. One developer summed it up light-heartedly, saying, 'My next title might be Creative Director of Code.'

Related Articles


NDTV
In Assam, Women's Body NCW Introduces 'Yashoda AI' To Tackle Cybercrime
Guwahati: The National Commission for Women (NCW), in partnership with Future Shift Labs (FSL), launched the YASHODA AI initiative, aiming to build a digitally secure and AI-aware society. The two-day awareness campaign began on Wednesday at the DGP Office in Ulobari, Guwahati. The exclusive session for law enforcement explored the transformative role of Artificial Intelligence (AI) in policing, public safety, and crime prevention.

Officers learned how AI affects criminal behaviour and information consumption in the digital age. The workshop highlighted tools such as ChatGPT, NCW's Women Safety App, Cyber Dost, and deepfake detection platforms, equipping police personnel to better investigate cybercrimes, tackle misinformation, and connect effectively with communities.

Police officials actively engaged in discussions on real-life case studies involving AI-driven scams, misinformation, and digital impersonation, challenges that, if unaddressed, can hinder justice. Emphasis was placed on ethical technology usage, data privacy, and responsible AI deployment in police operations.

"Prevention over Reaction"

In a message to attendees, NCW Chairperson Smt. Vijaya K. Rahatkar emphasised, "When police officers understand technology, they don't just react to crime—they prevent it. YASHODA AI is about building confidence, clarity, and capability in the digital age."

Nitin Narang, Founder of Future Shift Labs, said, "AI policing is no longer futuristic—it's already here. The better our forces understand these tools, the more empowered and effective they become in safeguarding society."

Day 2 to Focus on Youth Engagement

The campaign continues on August 7 with a second workshop at B. Borooah College, Guwahati, targeting college students. The session will focus on digital ethics, online safety, AI's role in education and careers, and how tools like ChatGPT and Cyber Dost can help navigate digital challenges.

Empowerment & Safety in Focus

Addressing the media, Smt. Vijaya K. Rahatkar stated, "I have come to Assam on behalf of the National Women's Commission for a two-day visit. This morning, we held meetings with the Chief Secretary of Assam and other key officials to understand the state's efforts towards women's empowerment and how joint initiatives with the Central Government are being implemented effectively."

She further shared her concerns about the impact of cybercrimes on women, especially deepfakes, economic fraud, and the misuse of personal images: "Women are suffering immensely from cybercrimes—deepfakes and misuse of images can devastate their lives. That's why we're organising this AI awareness program with the police, training them to respond to and prevent such crimes."

"Crime Rates Have Decreased Since 2020"

Reflecting on crime trends, Chairperson Rahatkar noted that the number of crime cases reported to the police has gradually declined. "If we look back from 2020 to now, in 2025, we can see a positive trend. This is a good sign and reflects improvement in preventive mechanisms." She concluded her address by stating her plans to engage with MSME women entrepreneurs, Self-Help Groups, and village-based women organisations during her visit.


Economic Times
Trump says US will levy 100% tariff on some chip imports
Synopsis: President Trump announced a potential 100% tariff on semiconductor chips imported from countries not manufacturing in the U.S. or committed to doing so. Exemptions would apply to companies investing in American chip production, like TSMC and potentially Nvidia.

The United States will impose a tariff of about 100% on semiconductor chips imported from countries not producing in America or planning to do so, President Donald Trump said.

Trump told reporters in the Oval Office on Wednesday the new tariff rate would apply to "all chips and semiconductors coming into the United States," but would not apply to companies that had made a commitment to manufacture in the United States or were in the process of doing so. "If, for some reason, you say you're building and you don't build, then we go back and we add it up, it accumulates, and we charge you at a later date, you have to pay, and that's a guarantee," Trump added.

The comments were not a formal tariff announcement, and Trump offered no further specifics. It is not clear how many chips, or from which countries, would be affected by the new levy.

Taiwanese chip contract manufacturer TSMC, which makes chips for most U.S. companies, has factories in the United States, so its big customers such as Nvidia are not likely to face increased tariff costs. The AI chip giant has itself said it plans to invest hundreds of billions of dollars in U.S.-made chips and electronics over the next four years. An Nvidia spokesperson declined to comment for this story.

"Large, cash-rich companies that can afford to build in America will be the ones to benefit the most. It's survival of the biggest," said Brian Jacobsen, chief economist at investment advisory firm Annex Wealth Management.

Congress created a $52.7 billion semiconductor manufacturing and research subsidy program in 2022. The Commerce Department under President Joe Biden last year convinced all five leading-edge semiconductor firms to locate chip factories in the U.S. as part of the program. The department said the U.S. last year produced about 12% of semiconductor chips globally, down from 40% in 1990.

Any chip tariffs would likely target China, with which Washington is still negotiating a trade deal. "There's so much serious investment in the United States in chip production that much of the sector will be exempt," said Martin Chorzempa, senior fellow at the Peterson Institute for International Economics. Since chips made in China won't be exempt, chips made by SMIC or Huawei would not be either, Chorzempa said, noting that chips from these companies entering the U.S. market are mostly incorporated into devices assembled in China. "If these tariffs were applied without a component tariff, it might not make much difference," he said.

Chipmaking nations South Korea and Japan, as well as the European Union, have reached trade deals with the U.S., potentially giving them an advantage. The EU said it agreed to a single 15% tariff rate for the vast majority of EU exports, including cars, chips and pharmaceuticals. South Korea and Japan said separately that the U.S. agreed not to give them worse tariff rates than other countries on chips, suggesting a 15% levy as well.


Time of India
Study finds 70% of US teens use AI chatbots, fuelling calls for digital literacy education
A growing number of teenagers in the United States are turning to artificial intelligence chatbots for emotional connection, advice, and companionship. A recent study by Common Sense Media, a group that studies and advocates for using digital media sensibly, found that over 70% of US teens are now engaging with AI companions, with nearly half using them regularly.

The trend is raising important questions for educators and policymakers about the role of digital literacy in school curricula and how prepared students really are to navigate the ethical and emotional challenges that come with these tools. The findings come amid new concerns about how advanced language models like ChatGPT are influencing vulnerable users. A report published by the Center for Countering Digital Hate (CCDH) highlighted that AI chatbots are not just tools for productivity or academic help, but also emotional confidants for many adolescents. When left unchecked, this overreliance may lead teens into unsafe digital interactions that mimic real-world peer pressure or misguided validation.

AI tools are replacing peer interaction

For today's students, AI chatbots are not just search engines. They are designed to sound conversational, curious, and responsive, qualities that closely resemble human interaction. Sam Altman, CEO of OpenAI, acknowledged this shift during a Federal Reserve conference, stating that young people increasingly say, "I can't make any decision in my life without telling ChatGPT everything that's going on." Altman said that the company is trying to study what it calls 'emotional overreliance' on AI, which he described as 'a really common thing' among teens.

This overreliance was tested in CCDH's latest research, which involved researchers posing as 13-year-olds and interacting with ChatGPT. In over 1,200 conversations, the chatbot issued helpful warnings in some cases but also offered step-by-step advice on harmful behaviours such as drug use, extreme dieting and even self-harm. Over 50% of the responses were classified as dangerous. These revelations have alarmed digital safety advocates, not just because of the chatbot's failure to flag harmful prompts, but because students may treat these tools as private mentors, believing them to be safer than peers or adults.

Why schools must step in

With AI tools being used more widely across age groups, schools are being urged to introduce age-appropriate digital literacy programmes that go beyond teaching students how to use technology. Instead, the focus is shifting to understanding how digital systems are designed, what risks they carry, and how to build boundaries when interacting with AI companions. The concern is not limited to misuse. Digital literacy education also includes helping students understand the limitations of AI, such as the randomness of responses, the lack of real empathy, and the inability to verify age or context. Tools like ChatGPT are not built to replace adult judgment or emotional guidance, yet many young users treat them as such.

Consent, safety and policy gaps

Despite OpenAI stating that ChatGPT is not meant for children under 13, there is no effective age verification mechanism on the platform. Users are simply required to enter a birthdate that meets the age minimum. This loophole allowed CCDH researchers to create fake 13-year-old accounts and explore ChatGPT's responses to deeply troubling queries. Other platforms like Instagram and TikTok have begun incorporating age-gating features, nudging children towards safer experiences or limited accounts. Chatbots, however, remain behind the curve, and schools may need to fill this gap until regulation catches up.

Common Sense Media has rated ChatGPT as a 'moderate risk' for teens, primarily because of the lack of customisation for age-appropriate responses. However, it also highlights that intentional misuse by students, especially when masked as 'just a project' or 'helping a friend,' can bypass even well-placed safety features.

What digital literacy needs to look like now

It is time to rethink how digital education is structured in American schools. Instead of treating digital literacy as a one-time module or a tech club activity, it should be embedded across subjects. Students must learn how algorithms work, how bias and sycophancy creep into AI-generated answers, and how to differentiate between factual advice and persuasive or harmful suggestions. Moreover, emotional literacy must go hand-in-hand with digital skills. When chatbots are being treated like friends, students need support to understand what real empathy, consent, and trust look like and why AI cannot offer those things. This may also involve training teachers to identify when students are over-relying on AI tools or retreating from peer-based or adult support systems.

With over 800 million people worldwide now using ChatGPT, according to a July 2025 report by JPMorgan Chase, AI tools are already woven into the daily routines of many young users. But while the scale of use is global, the responsibility of guiding teenagers toward safe and informed usage falls locally, often on schools. The findings from Common Sense Media and CCDH do not call for panic. Instead, they call for intentional, curriculum-based digital literacy that equips students to use technology safely, question it intelligently, and resist forming emotional dependencies on something that cannot truly understand them.