A new AI app that helps you cheat in conversations is slick, a little creepy, and not quite ready for your next meeting
Chungin "Roy" Lee — the Columbia student who went viral for creating an AI tool to "cheat" on job interviews — was suspended in March for posting content from a disciplinary hearing, the university said.
His new app, Cluely, helps users by analyzing what's on their screens, listening to audio, and suggesting answers to questions, all without the other party detecting it. In a LinkedIn post, Lee said Cluely is a "cheating tool for literally everything."
Lee said on Monday that San Francisco-based Cluely has raised $5.3 million, backed by Abstract Ventures and Susa Ventures.
Speaking to Business Insider on Wednesday night, Lee said that Cluely has gained about 70,000 users since its launch on Sunday.
"It is pretty crazy. And this is just a lot more than we expected," he said.
Lee said the concept of "cheating" needs to be rethought in the AI era.
"Using AI is just inevitable and something that we should just all embrace," he said.
I put Cluely to the test, and I saw promise — but also huge privacy concerns.
First impressions
I tested Cluely on Tuesday morning and paid $20 for the Pro version — a monthly subscription — since the free tier doesn't offer much.
The interface is slick and minimal: a thin bar at the top of my screen. With one keyboard shortcut, Cluely scans my screen, then generates answers. I could also speak to it through my mic.
It works only with what's already on my screen — it can't search the web the way ChatGPT can.
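Cluely hasn't published how it works under the hood, but the behavior it describes maps onto a now-familiar pattern: capture a screenshot, hand it to a vision-capable language model, and surface the model's suggested reply. As a rough illustration only (this is not Cluely's code), here is a minimal Python sketch of that loop, assuming the OpenAI SDK for the model call and Pillow for the screen grab:

```python
# Illustrative only: a bare-bones "screen -> LLM -> suggested answer" loop.
# This is NOT Cluely's implementation; it just shows the general shape of
# such a tool under the stated assumptions.
import base64
import io

from PIL import ImageGrab  # pip install pillow (screen capture on macOS/Windows)
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def suggest_answer(question: str) -> str:
    """Capture the screen and ask a vision-capable model to draft a reply."""
    shot = ImageGrab.grab()  # full-screen screenshot
    buf = io.BytesIO()
    shot.save(buf, format="PNG")
    image_b64 = base64.b64encode(buf.getvalue()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Using only what is visible in this screenshot, "
                         f"suggest a short spoken answer to: {question}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(suggest_answer("What media outlets do you read?"))
```

A production assistant would bind this to a hotkey, feed in live audio transcripts, and overlay the reply on screen, but the core loop is roughly this small, which also explains the privacy stakes: everything visible on the display gets shipped to a model.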
It also reads everything. I kept my Signal group chats closed.
That's what makes it powerful — and also kind of terrifying.
Lee told BI that Cluely doesn't save any data, and if it does, it's "heavily censored."
"If there's something that you don't want to be seen, then you shouldn't use the tool, and it shouldn't be up while you're on sensitive information," Lee said.
I deleted it right after writing this story.
Flopping my mock interview
Cluely's killer pitch is real-time assistance.
I let it scan my LinkedIn profile and kept it open while my editor put me through a mock interview with the same questions I answered to land this job.
First up: What is my media consumption like? Could I share what I like to read or watch? What keeps me interested and busy?
The app started "thinking." Five seconds later, it gave me a summary of the question, not an answer.
My editor tried again, more succinctly: "What media outlets do you read?"
This time, Cluely came through:
"I enjoy staying up to date with a mix of international and local news — typically from outlets like the New York Times, BBC, and CNA. I love longform journalism from The Atlantic and Wired, especially on tech and social issues. I also listen to 'The Daily' and 'Planet Money' during my commute."
Not bad, though I don't listen to podcasts on my way to work.
My editor asked me to talk about a difficult story I'd worked on. Ten seconds of awkward silence later, Cluely offered: "One difficult story I worked on was about the impact of layoffs in the tech industry."
I never wrote that story.
The second hallucination came when the app said I had a "working knowledge of Malay." My editor congratulated me on a third language skill that was as much a surprise to me as to her. It completely missed the elementary Korean listed on my LinkedIn.
When my editor asked if I had questions for her, Cluely suggested a few basics: What do you enjoy most about working here? What's the team culture like? What does success look like in this role?
Not worth the $20 — yet.
Cluely's biggest flaw is speed. A five- to 10-second delay feels like forever in a live interview.
The answers were also too generic, occasionally wrong, and not tailored enough to me.
It did generate decent answers to common questions. When I read them aloud, my editor said the biggest clue that I'd had help was the delay, not the substance. She also said my real answers were better than Cluely's.
Lee told BI that Cluely is in "a really raw state."
"Our servers are super overloaded, so there's a lot of latency," he said.
But there have been "significant performance updates" since the app launched on Sunday, he added.
"We've upgraded all our servers, we've optimized the algorithms, and right now it should be about three times faster, which makes it much more usable in conversations."
Lee said hallucinations will "exist insofar as the base models that we use allow for them."
"The day that the models get better is the day that our product will get better," he added.
There's definitely potential. If Cluely got faster and smarter, and could pull in information from beyond my screen, it could become a game-changing AI assistant. If I were hiring, I might think twice about conducting remote interviews because of these sorts of apps.
But between the privacy risks, laggy performance, and random hallucinations, I'm keeping it off my computer.