
UCLA grad brazenly shows off ChatGPT that did his assignments for him — and critics aren't happy: 'We're so cooked'
Was it CheatGPT?
It's no secret that artificial intelligence use is becoming increasingly ubiquitous in academia.
But while most prefer to keep their AI schoolwork aids confidential, one student at the University of California, Los Angeles, brazenly boasted about employing the tech during his commencement ceremony.
The shocking moment was captured during UCLA's commencement livestream at Pauley Pavilion earlier this month, and videos have since been reshared to Instagram and X, where they've amassed millions of views.
The grad shows off the ChatGPT text he used for his final exams in front of fellow students.
Instagram/andremaimusic
In the brief clip, which was displayed on the facility's Jumbotron, Andre Mai, a computational and systems biology major, is seen holding up his laptop to show off walls of AI-generated text that he ostensibly used for his final exams.
The footage shows the undergrad proudly scrolling through the evidence of his so-called high-tech homework hacking as the rest of the graduating class of 2025 whoops and cheers in the background.
'Let's gooooo!!!!!!' he mouths while hyping up the crowd.
'If ChatGPT is why you graduated, ChatGPT has already taken your job,' said one critic.
Instagram/andremaimusic
The video didn't sit nearly as well with online viewers, many of whom saw it as indicative of societal decline.
'We're so cooked,' lamented one disillusioned commenter under a repost on X, while another wrote, 'Pandora's Box has been opened.'
'We're still supposed to take college degrees seriously btw,' scoffed a third.
'Our future doctors really gon have one AirPod in asking ChatGPT how to do open heart surgery,' quipped one X wit.
'If ChatGPT is why you graduated, ChatGPT has already taken your job,' theorized one poster, reiterating techsperts' concerns that AI could effectively render human employees obsolete.
Academics are growing increasingly concerned over the prevalence of ChatGPT and other AI chatbots in the classroom.
AlexPhotoStock – stock.adobe.com
These fears were also echoed on Reddit. 'This is going to be the biggest problem,' fretted one poster. 'People just aren't going to learn anything anymore, instead of a tool to help you learn people are just going to think it's a magic answer box.'
However, some defenders applauded Mai for seemingly gaming the system, with one X fan writing, 'Hot take ChatGPT and AI are tools that are going to be with us for good or bad for the foreseeable future.'
'So proving that they can effectively use the tools he had to achieve what was required of him is not cheating,' they added. 'It proves he will be able to provide similar results in the real world.'
Mai, who is also a DJ, addressed the viral moment in a video on Instagram, explaining, 'You guys might know me from this viral clip from graduation today. I wanna let you guys know what was actually on my computer screen.'
In the post, which was reshared by ChatGPT's official page, the tech whiz clarified that he'd used the chatbot to help with two complicated finals, one of which was due at 5pm and the other at midnight.
'I was wrapping up all the documentation that I've ever [done] for my machine learning lab,' he declared. 'I also had to use AI to summarize the key equations that I'd be using for what would essentially be the last test I'd ever take in my undergraduate career.'
Mai suggested that this wasn't cheating as he had his teachers' blessing. 'My professors have really encouraged the use of AI,' he said. 'So much so that when the jumbotron people came around, I just flipped my screen around and I had to show them what I was doing. I never could have imagined all this exciting attention.'
Mai added that he's used AI in 'so many different ways as a college student,' ranging from understanding 'operating systems or computer networking' to selecting the best DJ equipment.
Nonetheless, techsperts remain concerned over the omnipresence of AI in the classroom. According to a winter survey by the Pew Research Center, approximately 26% of teens used ChatGPT to help them with assignments in 2024, up from just 13% in 2023.
Unfortunately, leaning on ChatGPT to fudge assignments could make people dumber in the long run.
An alarming study by researchers at the Massachusetts Institute of Technology (MIT) found that students who used ChatGPT to complete essays showed poorer cognitive skills than those who relied on their brains alone.