Judge Denies Injunction in Yale Student's AI Suspension


Yahoo | 07-05-2025
Yale School of Management
A federal judge has denied a Yale Executive MBA student's request to return to campus and graduate with his cohort, siding with Yale School of Management after it suspended the student for allegedly using generative AI to complete a final exam.
Thierry Rignol, a French national and entrepreneur based in Texas, also lost the right to sue anonymously and has since been named publicly.
Rignol filed a federal lawsuit in February accusing Yale of discrimination, due process violations, and other complaints after the alleged cheating incident in spring 2024. He sought an injunction that would allow him to graduate with his EMBA classmates in 2025 despite a one-year suspension and a failing grade in the course.
A FAILURE TO MAKE HIS CASE
District Judge Sarah Russell denied his motion on Monday (May 5), finding that Rignol failed to make his case.
'Rignol has not carried his burden of establishing that a break in studies until the start of the next academic year (in fall 2025) and continuing to have an F on his transcript will cause him irreparable harm,' the judge wrote.
Rignol, who founded a hospitality and real estate firm in Mexico, enrolled in SOM's 22-month executive MBA program in July 2023 and was expecting to graduate this May.
However, in May 2024, he submitted a 30-page final exam for the 'Sourcing and Managing Funds' course, one of the longest submissions in the class; most were closer to 20 pages. A teaching assistant flagged it as potentially AI-generated. According to the course syllabus, the exam was open book but closed internet, meaning no use of AI tools.
YALE: STUDENT IGNORED REQUESTS TO PROVIDE ORIGINAL DOCS
Faculty initially used GPTZero, an AI-detection tool, to assess the work and found similarities between Rignol's answers and responses generated by ChatGPT. They referred the case to the school's Honor Committee.
Things spiraled from there.
Rignol claims that he was pressured by Yale administrators to falsely confess to AI use, with one dean allegedly suggesting he could face deportation. He also alleges that the Honor Committee proceeded with hearings without giving him access to key evidence, among other charges. His lawsuit claims that Yale's own policies prohibit using tools like GPTZero due to their high false-positive rates, particularly for non-native English speakers.
Yale contends that Rignol ignored requests to submit the original Word or Pages document used to produce the exam PDF. He later said he had used Apple Pages but allegedly did not submit the original file until the day of his Honor Committee hearing in November, months after repeated requests. He also declined a follow-up meeting, scheduled for later that day, to which the committee had asked him to bring his laptop. The committee did not initially rule on whether Rignol used AI. Instead, it sanctioned him for not being forthcoming, suspending him for one year.
A COMMITTEE RULING AGAINST RIGNOL
Rignol appealed his suspension to Deputy Dean Anjani Jain, who subsequently asked the committee to reconvene and consider the AI violation itself. The committee concluded that Rignol did, in fact, violate exam rules by using AI, citing strong similarities between his answers and ChatGPT responses. It issued a mandatory F in the course.
In his filings, Rignol argued that facing a full-year suspension incentivizes false confessions from accused students, and that he was denied the ability to fully defend himself. He also argued that the school's reliance on AI detection tools was flawed and that the punishment – a standalone suspension for not being forthcoming – was unprecedented at SOM.
The court disagreed.
'Continuing to serve the suspension does not prevent Rignol from listing merely his year of graduation rather than the number of years he took to earn a degree,' Judge Russell wrote, adding that any lost earnings 'can be quantified and remedied with money damages' if his lawsuit is ultimately successful.
She noted that Rignol, unlike undergraduate plaintiffs in similar cases, was a 'financially successful professional' who failed to identify a single concrete opportunity he would forfeit due to the suspension.
POTENTIAL FOR EMBARRASSMENT NOT ENOUGH TO JUSTIFY ANONYMITY, JUDGE SAYS
In a separate ruling on March 31, Russell also denied Rignol's request to proceed under the pseudonym 'John Doe,' finding that he failed to meet the legal standard to justify anonymity.
She emphasized that the potential for embarrassment or reputational harm – such as being publicly associated with an academic misconduct case – does not, by itself, override the public's interest in open court proceedings.
'I take seriously Doe's contention that he would be humiliated if he were to disclose that he was accused of academic misconduct,' she wrote. 'But the potential for embarrassment or public humiliation does not, without more, justify a request for anonymity.'
Rignol's lawsuit against Yale is ongoing and includes claims of breach of contract, discrimination, and emotional distress.
You can read Judge Russell's ruling on anonymity here and her injunction denial here.
The post Judge Denies Injunction in Yale Student's AI Suspension appeared first on Poets&Quants.

Related Articles

Microsoft Reveals The Most And Least ‘AI-Safe' Jobs—Where Do You Rank?
Forbes

Will AI take your job? New data from Microsoft offers a clear answer. As generative AI tools like ChatGPT, Claude and Microsoft Copilot become increasingly sophisticated, millions of workers are wondering whether their roles qualify as 'AI-safe jobs' or are at risk of disruption. Unlike previous studies that relied on expert predictions or theoretical models, Microsoft's research analyzed over 200,000 real user interactions with Bing Copilot to show exactly which roles are being impacted by generative AI today. Here's what the data reveals about AI-safe jobs and those at risk as artificial intelligence transforms the workplace.

What Are 'AI-Safe' Jobs?
AI-safe jobs are least likely to be replaced or fundamentally transformed by generative AI. These roles typically require physical presence, manual dexterity, human interaction or specialized skills that current AI tools can't replicate effectively.

How Did Microsoft Measure AI Impact?
Microsoft introduced an 'AI applicability score' for each occupation by matching user conversations with Copilot to measures of task success and scope of automation. This score reflects real-world usage rather than theoretical task analysis.

How This Study Differs From Past Research
Microsoft's approach differs from studies like OpenAI's 'GPTs are GPTs'. Instead of expert predictions, this research tracks actual AI usage in workplaces. While OpenAI estimated that 80% of the U.S. workforce could see at least 10% of tasks affected, Microsoft's data highlights clear winners and losers based on how workers really use artificial intelligence.

Top 10 Jobs Facing The Highest AI Disruption
Here are the top 10 occupations most impacted by generative AI today: Read on for all 40 roles facing significant AI disruption below.

Why These Roles Are Vulnerable to AI
Jobs on the high-impact list share these traits:

Top 10 Jobs Safe From Generative AI
At the opposite end of the spectrum, the research from Microsoft identifies jobs that remain largely insulated from AI disruption: Read on for the complete list of 40 AI-safe jobs below.

Why These Roles Are Insulated From AI
AI-safe jobs share these characteristics:

5 AI Trends Reshaping The Workplace
High-education roles such as data scientists, management analysts and web developers rank among the most disrupted. Even highly skilled roles like financial advisors and business analysts are seeing significant impacts from AI. Only about 60% overlap exists between user expectations and AI capabilities, according to the research. The study reveals that AI is augmenting work as much as it's automating it. For example, writers, advisors and service reps use AI for routine tasks while retaining creative or interpersonal work. Roles demanding hands-on skills and human connection stay largely insulated.

How To Prepare For AI's Impact On Your Career
AI's impact on jobs is more nuanced than many expected. Rather than eliminating roles, artificial intelligence is reshaping how tasks are performed across industries. As the workforce debates which positions qualify as AI-safe jobs and which are most at risk, the best strategy is to stay adaptable and invest in continuous learning. Professionals who thrive will combine in-demand technical skills with human qualities such as creativity, empathy and critical thinking—abilities that artificial intelligence still can't fully replicate.

40 Jobs Most at Risk from AI

40 Jobs Least Impacted by AI
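The 'AI applicability score' described above, which matches Copilot conversations to measures of task success and automation scope, can be sketched loosely as a combination of per-occupation signals. Everything below (the function name, the three inputs, and the equal weighting) is an illustrative assumption for readers, not Microsoft's actual methodology:

```python
# Hypothetical sketch of an "AI applicability score" in the spirit of
# Microsoft's description. The choice of signals and the simple average
# are assumptions; the study's real weighting is not given in the article.

def applicability_score(task_coverage: float, success_rate: float,
                        scope_of_impact: float) -> float:
    """Combine three 0-1 signals into a single 0-1 score.

    task_coverage   - share of the occupation's tasks AI is used for
    success_rate    - how often those AI-assisted tasks succeed
    scope_of_impact - how much of each task the AI handles
    """
    for value in (task_coverage, success_rate, scope_of_impact):
        if not 0.0 <= value <= 1.0:
            raise ValueError("all inputs must be in [0, 1]")
    # Equal weighting, purely for illustration.
    return (task_coverage + success_rate + scope_of_impact) / 3

# Example: AI touches 60% of tasks, succeeds 70% of the time,
# and handles 80% of each task it touches.
print(round(applicability_score(0.6, 0.7, 0.8), 2))  # prints 0.7
```

Under this toy model, a plumber (low coverage, low scope) would score near zero while a copywriter could score well above 0.5, matching the article's split between insulated and disrupted roles.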

How Recruiters Can Tell You Used AI On Your Resume—And Why It Matters
Forbes

Tammy Homegardner, career coach and LinkedIn expert (Linked Into Jobs); author and founder of The Job Search School.

Here's a topic that's been coming up in conversations with job seekers lately: using AI to help write your resume or cover letter. With tools like ChatGPT, it's easy to plug in some information and get a pretty polished result. But here's the thing: recruiters are catching on. They're noticing when a resume feels like it came from a robot—and in some cases, it's costing people interviews. If you're using AI in your job search, don't panic. AI can be an invaluable tool when used effectively. However, you must ensure that you're not letting it take over. Let's break down what recruiters are noticing, why it matters and how to use AI smartly without hurting your chances.

Yes, Recruiters Can Tell
Many recruiters have started to recognize the telltale signs of AI-generated applications. A New York Post article quotes Laurie Chamberlin, head of LHH Recruitment Solutions: "A good recruiter can spot an AI-written application from a mile away." She explains that AI-generated resumes often follow the same structure, use overly formal or vague language and lack the personal touches that real human experience brings to the table. The same article quotes Bonnie Dilber, a recruiter at Zapier, saying that nearly 25% of the resumes she sees are clearly AI-written. She says AI-generated resumes sound robotic and lack the details that prove someone is actually qualified for the job.

What Gives It Away
I've observed that AI-generated resumes typically share a few red flags:
• Generic Buzzwords: Terms like "results-driven," "go-getter," and "team player" are overused and vague without context to back them up.
• No Personal Touch: The content feels copy-pasted, with little to no reflection of the applicant's personality, values or goals.
• Weird Formatting Or Placeholders: Some people forget to remove template prompts, such as "[Insert accomplishment here]".
• Mismatch With Job Descriptions: Since bots and AI tools often struggle to understand nuance, people frequently submit the same generic resume to dozens of jobs, even when it's not a good fit.

What Happens Behind The Scenes
Job seekers have been using resume auto-submit bots to blast their resumes to hundreds of openings—sometimes even before the listings go fully public. But this tactic is backfiring. Employers are seeing the same resume repeatedly, and they're becoming frustrated. It's widely known in my industry that some major companies are already utilizing AI tools to scan applications and filter out low-quality or suspicious submissions. If your resume doesn't align with the role or appears to have been generated and sent out indiscriminately, there's a good chance it will be filtered out before a human ever sees it. Several recruiters I have spoken to recently reported that job seekers using ChatGPT and other AI tools to apply for jobs is creating an influx of unqualified applications. And when that happens, hiring teams start tightening their filters and scrutinizing every resume more critically.

Real Talk: What I've Seen With My Clients
One client, "Jane," thought she'd save time by using AI to write her entire resume. She provided the AI with an older version of her resume and asked it to update it. She attempted to use the resume as the AI had given it to her, without adding any detail. Jane's resume was generic and included details that didn't even apply to her, while omitting her best accomplishments (since she hadn't told the AI about them). Another client, "Tom," let a bot handle all his applications for several weeks. However, he noticed a sharp drop in interview requests. Upon closer examination, we discovered that several companies had received the same resume for completely different roles. In one case, we asked for feedback and learned that a recruiter had flagged it because they saw his name repeatedly applying to every opening the company had, from entry-level to executive. It made him appear careless and desperate rather than committed.

So, Should You Use AI At All?
Yes—but use it wisely. AI is a great assistant, but it's not a replacement for you. It can help you brainstorm ideas, structure your thoughts and even suggest more effective wording. But you should never rely on it to write your resume or cover letter from start to finish. The end result must sound like you. It should reflect your experience, your voice and the unique strengths you bring to the table. Here are a few ways to use AI the right way:
• Use it to generate a rough draft, then revise it with your real-life accomplishments and specific results.
• Don't skip the customization step. Tailor your resume for every job—yes, every single one. It's more work, but it makes a difference.
• If you're using AI to summarize an experience or reword something, review the tone. Does it sound like you?
And no, you don't need to retype everything by hand. That's a myth. But you do need to personalize and proofread it carefully.

The Bottom Line
Recruiters don't care if you used AI—they care if you relied on it. They want to know that you understand the job you're applying for and that you're presenting your experience in a thoughtful, relevant way. Using AI without customization makes your application feel lazy and forgettable. When that happens, you might never get the chance to explain yourself. So use AI like you would a GPS: helpful for guidance, but you still need to drive the car. I'll leave you with a quote that feels especially relevant right now. Automattic CEO Matt Mullenweg is often quoted as saying: "Technology is best when it brings people together." Let AI help you get your foot in the door, but make sure your unique story is what walks through it.
Forbes Coaches Council is an invitation-only community for leading business and career coaches. Do I qualify?

Welcome to dystopia: I helped ChatGPT pass a CAPTCHA and doomscroll my Facebook
Android Authority

Calvin Wankhede / Android Authority

Last week, OpenAI released what may be the most ambitious (and potentially controversial) update to ChatGPT since its launch: Agent mode. Unlike the standard conversational interface, Agent mode gives ChatGPT control over a virtual machine running Chrome, allowing it to interact with websites like a human would. It can identify elements on websites, scroll, click buttons, fill out forms, and, if granted credentials, even log into your online accounts. For the first time since the AI's launch, it can perform tasks instead of spitting out some text on how to do it yourself.

On the surface, the potential seems endless. The chatbot could reply to your emails, shop for groceries, book a flight, or perform even more complex tasks spanning multiple websites. The best part is that you can watch ChatGPT 'move' its mouse cursor around the virtual web browser and navigate the internet. Admittedly, it's a lot like a toddler struggling to walk at times, but it's endlessly fascinating nonetheless.

So what can ChatGPT's Agent mode actually do with all of these capabilities? To answer that question, I tested the feature with a couple of real-world tasks, the kind you might actually want to offload to an AI assistant. Here's how it handled them, and what ChatGPT did when it encountered an obstacle.

Putting ChatGPT Agent to work: A grocery run
Amazon's Alexa can add toilet paper to your cart with a voice command, but ChatGPT's Agent mode can be entrusted to do a whole lot more. Specifically, it can shop your entire grocery list on any platform of your choice. Case in point: I gave the agent a simple task: buy everything I would need for a homemade pizza from Walmart. I didn't offer any specific ingredients, items, or even guidance on price, just to see what it would pick. The agent booted up a virtual computer and navigated to Walmart in no time.

But it ran into a roadblock almost immediately: Walmart threw up an anti-bot verification screen requiring a human to press and hold a button. Shockingly, the agent recognized this screen and asked me to briefly take control of the browser and complete the task. I took control and about ten seconds later, we were in. I handed control back, and the agent immediately got to work. It looks like CAPTCHAs will need to evolve yet again if they are to keep bots out in the future. ChatGPT summoned me when it needed a human touch, which it turns out means just solving CAPTCHAs.

Moving on, I watched the agent methodically search for 'pizza dough,' 'pizza sauce,' 'mozzarella cheese,' and 'pepperoni.' But to my surprise, the agent didn't just grab the first result. Instead, it prioritized familiar and well-priced alternatives, just like I personally would. In more than one instance, I watched it pick the third or fourth item in the results or call a competing product overpriced. The agent also correctly moved past inaccurate search results, like a fully premade frozen pepperoni pizza when it was merely shopping for pepperoni, the ingredient.

Within four minutes, my virtual cart was filled with everything I needed to make a pizza. The agent navigated to the checkout page and then handed control back to me to complete another CAPTCHA, log in, and enter my payment details securely. ChatGPT says it cannot see your inputs when you're in control of its virtual machine, presumably meaning it can't store your login or credit card info. Despite that, I elected not to enter my login details and therefore spent the night without any pizza. Needless to say, this was a very impressive showing, even if it was a rather straightforward task with a clear sequence of actions. But can the agent handle something more ambiguous? I decided to give it a harder challenge: find a list of used car candidates on Facebook Marketplace.

Can ChatGPT find the perfect used car?
I love Facebook Marketplace because there's always a great deal around the corner, but I dread scrolling through the hundreds of listings to find the right one. So I put ChatGPT's Agent mode to the task, fully expecting it to stumble at some point. Specifically, I asked it to log into my Facebook account, navigate to the Marketplace tab, and compile a list of used Honda Fit specimens. As if that wasn't difficult enough, I asked it to only look for 2015 to 2020 models within a 50-kilometer radius of Toronto.

Much to my surprise, the agent started off strong and logged into my Facebook account without a hiccup. I did have to step in to provide a two-factor authentication code, but other than that, it was an entirely painless process. ChatGPT navigated to the Marketplace section, changed the location to the city I provided, and even adjusted the maximum distance filter. It then started going through individual listings and recorded important details about each car it came across.

To be fair, ChatGPT's Agent mode is not faster than a human; I could probably go through individual listings faster than the agent, at least in its current state. But like any computer, ChatGPT is stubbornly persistent. After watching it look through listings for a few minutes, I walked away from my computer to grab a coffee. When I returned about ten minutes later, the agent was still meticulously clicking on individual listings. ChatGPT is stubbornly persistent, and scrolled through over a hundred listings on Facebook Marketplace.

The final result was a comprehensive report of two to three dozen cars that met my criteria. The agent had compiled a detailed brief for each vehicle it found. It organized the key details into columns: year and model, price and mileage, transmission, and location. It even created a notes section where it summarized the seller's description, pulling out crucial information like 'dealer listing,' 'salvage title,' or 'open to negotiation.' Finally, each line item had a screenshot of the Marketplace listing for me to peruse if I was interested in that specific vehicle.

If that seems impressive, it absolutely is. Putting together a list like this manually would otherwise have taken me at least a couple of hours. And the best part is that I could probably go one step further and ask the ChatGPT agent to contact the shortlisted sellers on my behalf. It's not the responsible thing to do, but it's certainly a possibility.

Not perfect, but impressive…and scary
ChatGPT's Agent mode is not perfect by any means; I watched it go in an endless loop countless times until I was forced to intervene. It can also follow instructions too closely, almost to a fault. For example, it refused to accept a search radius of 51km instead of the 50km I prescribed on a different used auto website. The result? The agent used its virtual mouse to 'drag' a slider back and forth for several seconds until it landed precisely on the 50km it needed. Common sense doesn't come naturally to AI, even in agent mode.

I also noticed a pretty big delay between ChatGPT issuing an action, like clicking on a link or pressing the browser's back button, and the action registering. This causes the AI to retry the action, which is a big problem because it inevitably ends up on the wrong page. At times, the agent would click the back button twice and end up on the homepage. It would then restart the whole search all over again, wasting several minutes in the process. ChatGPT Agent can be a bumbling fool sometimes, but it's only going to get better.

But for all its clumsy moments, the true significance of Agent mode isn't in its current speed or ability. The fact that I could walk away to make a coffee while an AI autonomously compiled a detailed report on used cars is the main value proposition here. The potential for increased productivity is massive. But at the same time, I'm not sure how I feel about an AI making decisions on my behalf.

So the question now is: how much more autonomy are we willing to hand over to AI? If you'd like to use Agent mode, keep in mind that you will need a ChatGPT Plus subscription. That will set you back $20 monthly, and the feature is currently capped at just 40 messages each month. But if you're patient, I'm sure it will eventually trickle down to the free tier just like other new ChatGPT features in the past.
