Latest news with #AcuityInsights


Forbes
25-05-2025
- Forbes
AI For College Admissions Essays: A Proposed Ethical Framework
Across the country, students are turning to AI for help drafting one of the most personal pieces of their college applications: the personal statement and college supplemental essays. According to Acuity Insights' 2024 survey of over 1,000 applicants, 35% of students said they used AI tools like ChatGPT or Grammarly to support their applications, and 76% of those users relied on these tools for the majority of their work. Yet 63% said they didn't know how much AI use was permissible, and only 42% received clear guidance from schools.

Rather than banning these tools or ignoring them, we need a shared framework that helps students use AI ethically and responsibly while preserving the integrity of the application process. Here's a simple framework I propose, adapted from my research on AI literacy and admissions strategy. It's called SAGE: Source, Analyze, Generate, Edit. Each step guides students through a thoughtful, transparent process for using AI in the essay-writing phase of college admissions.

Rule #1: Source your story, not someone else's.

Before using any tool, reflect. What is the story only you can tell? AI can help you identify themes in your narrative, but it shouldn't replace your voice. Use journaling, voice memos, or trusted conversations to identify experiences that define who you are. In my book Get Real and Get In, I encourage students to engage in the 'When I Was Little' exercise. This activity prompts you to recall your childhood dreams and interests, like wanting to be a roller coaster test-rider or a superhero.
These early passions can reveal underlying values and motivations that are still relevant today. By tapping into these authentic experiences, students can craft essays that truly reflect their unique identities.

Avoid asking AI to 'write my college essay about X.' Instead, use AI to brainstorm questions or themes based on your own experiences. Use AI to help uncover what to write about, not how. My custom College Admissions X-Factor GPT is designed specifically for this purpose. The GPT guides you through a series of reflective questions to help identify your unique experiences, values, and intellectual passions. AI becomes a powerful tool when it reflects you back to yourself. That's how it adds value to the writing process: by acting as a mirror, not a mouthpiece.

Rule #2: Analyze the prompt and your intention.

Each essay prompt asks something different and reflects the unique values of each college or university. What is the college truly looking for? Use AI as a thinking partner to understand what the prompt is really asking and what part of yourself you want to highlight.

Rule #3: Generate with caution.

AI can be a helpful creative partner, but like any collaborator it should follow your lead. Used wisely, AI can help you get unstuck. It can suggest structure, compare tones, rephrase awkward transitions, or offer a few ways to start a paragraph. This is especially useful if writing isn't your strongest skill, or if you're staring at a blinking cursor and don't know where to begin.

But there's a difference between using AI to clarify your message and asking it to invent your story. Letting AI generate full paragraphs or entire drafts can lead to several problems. Start with your own ideas instead: free-write, bullet-point, record a voice memo; whatever helps you capture your thoughts honestly. Then invite AI into the process as a second set of eyes, not a ghostwriter.
Once AI gives you suggestions, rewrite them in your own voice. Keep what works, revise what doesn't, and delete what feels off. Never submit anything you haven't reviewed, rewritten, and fully made your own.

Rule #4: Edit for voice, accuracy, and authenticity.

Generative AI can improve grammar, streamline wordiness, and suggest more polished phrasing. But only you can ensure the essay reflects your actual experience, values, and tone. If you let AI overwrite your voice, you risk sounding generic or inauthentic.

So what is 'voice,' exactly? It's the unique way you communicate your own perspective. It shows up in the details you choose, the metaphors that feel natural to you, the rhythm of your sentences, and the level of vulnerability you're comfortable with. Admissions officers are attuned to when something doesn't feel real. If your essay reads like it was written by a 35-year-old data analyst, but you're a 17-year-old aspiring biology major, that mismatch can work against you.

Think of this step as closing the loop: AI may have helped you get started or stay organized, but now it's your job to make sure the final product is unmistakably yours.

If you're a teacher, counselor, or admissions officer, now is the time to create clear, proactive guidance. College essays remain one of the most personal components of an application. That hasn't changed. What has changed is the tools students have available to arrive at that voice. By offering students a framework like SAGE, we can give them additional support in the application process and help them amplify, not muffle, their unique voices.

CBC
14-04-2025
- Health
- CBC
Shut out of medical school, he blames controversial admissions test which experts say lacks evidence
Erik Soby thought he had a shot at getting into medical school last year. The Torontonian scored high on the standard Medical College Admission Test (MCAT) and had an impressive grade point average. But most medical schools in Canada now require another admissions test — called the Casper — and Soby believes that hurdle was his downfall. "That was the one aspect where I was below the average," he said. "So I ended up getting screened out."

Medical schools are under a lot of pressure to sort through thousands of applications each year — people vying for a coveted spot and the chance to become a physician. To help narrow down candidates, many medical schools use the Casper, which stands for Computer-Based Assessment for Sampling Personal Characteristics.

The company behind the test, Acuity Insights, claims the Casper helps schools predict which students will have career success by assessing "soft skills" — from empathy and ethics to judgment and communication. The test poses video and typed scenario-based questions that ask the applicant to weigh in on a moral dilemma. The questions change every year, but Soby gives an example of what one might look like. "They'd say, 'This company that we're looking to invest in has a reputation of [not believing in] climate change,'" he said. "You're supposed to weigh both sides of the scenario."

But Soby says the test is shrouded in mystery — test takers are never given their actual score, never learn where they might need improvement, and have no idea who is rating the test that can have such an impact on their future. On top of that, critics say Acuity Insights' research backing up its claims is poor and unconvincing.

"There is no evidence that Casper predicts future performance," said Jennifer Cleland, an internationally renowned researcher in the area of selection to medical school and professor of medical education research at Singapore's Lee Kong Chian School of Medicine.
"They are selling this tool — and presumably making money from it — and people are using it thinking that it's doing what it says it does."

Twelve of Canada's 17 medical schools rely on the Casper test as part of the initial admissions process, many putting a lot of weight on an applicant's score — up to 30 per cent in some cases.

Acuity Insights declined a request to be interviewed. A spokesperson wrote that a "wide range of evidence points to the effectiveness of Casper in assessing applicants' non-academic skills" and that medical schools that use Casper in their admissions processes "can identify applicants who will excel not just academically, but also as compassionate and effective physicians."

Use of Casper spreading

The test was developed by McMaster University's Faculty of Health Sciences and became part of its medical school admissions process in 2010. A few years later it was licensed to a private company — now Toronto-based Acuity Insights — and has received nearly $2.5 million in government funding since 2018 through grants from the National Research Council Canada.

Although Casper was originally designed to screen medical school applicants, the company has successfully marketed it to other programs across Canada — from nursing, dentistry and physical therapy to undergraduate programs such as the University of Alberta's bachelor of education program and the University of Western Ontario's engineering school.

One of the most common criticisms is the test's lack of transparency. People who write the Casper are never told a score — the company only sends that information to the schools. Instead, test takers are told which of four tiers they fall into, from highest to lowest, relative to other people taking the test at the same time. Acuity says this makes "feedback more accessible."

"I think we deserve to know [the exact percentile], considering how much weight it carries going into admissions," said Soby.
Go Public has heard from over two dozen medical school applicants who also have concerns. "The process is unnecessarily opaque," wrote one, who said he'd taken the test five times before finally getting accepted to a medical school. "I have endless concerns about the test," wrote another, who said he'd written Casper three times. A student who said he'd taken the test four times wrote that it should "be abolished."

The company charges applicants $50 to write the test, and another $18 to submit to each medical school.

Applicants also question the training of people who rate the Casper exams. A recent online job posting by Acuity promised raters could earn $30 to $50 an hour. The ad did not list any academic or professional requirements, noting that "applicants from all walks of life" were welcome and that raters would be paid 65 cents for every written answer they assess and $1 for every video response.

Acuity told Go Public it monitors how quickly raters are scoring responses "to ensure they are spending an appropriate amount of time reviewing the context of each response." The company also said its raters "have differing levels of qualification," which ensures "they represent the patient population students will serve when they become physicians."

'No evidence'

But perhaps the biggest concern is criticism from respected academics who say there is no compelling evidence the test does what it claims. After Go Public asked about those claims, Acuity sent a lengthy document, which included a list of eight studies, in support of the Casper. Go Public shared those studies with Cleland — and three other established researchers with experience in medical school admissions, who declined to be identified as they fear professional repercussions. All called the research weak and insufficient to back the company's claims.

"I was actually very surprised at how poor the research was," said Cleland. "They were not terribly high-quality studies. They weren't very good."
Cleland and the others pointed out that one of the studies doesn't examine the actual Casper test, but a test that's similar. They said another is an overview of existing research and does not provide any new data. Two were conference papers — so did not undergo the rigour of a journal peer-review process — and several did not address long-term outcomes.

"It disappoints me that something is so lacking in scholarship, lacking in rigour, robustness and credibility," said Cleland.

The researchers also said that several studies are potentially "conflicted" because they were authored by co-founders of the company that is now Acuity or by researchers who work for them. Acuity says "industry-funded research" is common practice and that all research involving the company undergoes "full disclosure regarding funding and affiliations."

The researchers we spoke to were also concerned that most of the studies were small enough to be scientifically questionable and hadn't been replicated. Cleland pointed to one study that looked at 31 medical residents and concluded that Casper could predict which ones would have fewer professional issues. "How can you say that with such tiny, tiny numbers?" asked Cleland. "The claim is groundless."

Some of the research Acuity Insights sent showed that medical school applicants with higher Casper scores were more likely to be invited for an interview and to do well in that interview, and other studies found that Casper can predict who will do well on some assessments in medical school. But Cleland says no clear pattern was established. "So it predicted performance in one clinical exam, but not the equivalent exam the next year," she said. "You would expect that if something was predicting what it was meant to predict… it would be kind of consistent."

There are few studies that track student performance over time, but Go Public examined two longitudinal studies that Acuity did not reference.
One examined whether Casper scores and other admissions criteria could predict which medical students might run into professional problems. "Our research did not reveal a significant relationship between Casper performance and the need for professionalism remediation in medical school," said Lawrence Grierson, senior author of that study and an associate professor with the department of family medicine at McMaster University.

The other study Grierson worked on also found the Casper test could not predict who would do well on the exam for obtaining a medical licence. "We did not find an association," said Grierson. "It is hard to know why an association appears in some studies and not others. But, taken together, what this means is any declarations of the test's universal effectiveness (at least with respect to predicting future professional behaviour) are overstated."

Acuity Insights also sent Go Public its "technical manual," a 148-page document which it says "provides a robust and comprehensive guide on the validity and reliability of the Casper test." The researchers we spoke with point out that the technical manual is not a peer-reviewed study, but a document used — in part — for commercial purposes. The company later sent more research, but some studies were duplicates it had already sent, several were research papers — not peer-reviewed studies — and one was a student's doctoral thesis.

Acuity also points to research suggesting the Casper test can increase student diversity, because test results show less racial bias than academic assessments like the MCAT and GPA scores.

Soby wrote the Casper test again last August and is waiting to hear whether he'll get into medical school for the fall. Meantime, he's posted a TikTok about his Casper concerns, calling out the fact that applicants don't get their exact score and that raters might rush through responses to increase their hourly wage.
He says he wants medical schools to know how the Casper test is affecting people who hope to become doctors. "It's important that those schools see the criticisms," said Soby. "And it's also important that the public sees what's going on."