
Failing on every front, is higher education still sustainable today?
This latest headline from trouble-plagued Harvard puts higher education's problems in a nutshell.
Not only do many Americans believe higher education is elitist, but increasingly they're concluding it's also not very good at its job, or even harmful.
And with reason.
Francesca Gino, a Harvard Business School behavioral scientist who studied (of all things) honesty, was stripped of tenure and fired for academic dishonesty, the first Harvard professor so treated since the 1940s.
Investigators found that problems with several of her best-known studies were the result of research misconduct.
Nor is she Harvard's only problem child: Claudine Gay had to step down as the school's president amid her own plagiarism scandal.
And these problems are rife throughout the academy. A Smith College commencement speaker this year even had to surrender her honorary degree when it turned out her speech had been plagiarized.
It's not just about copying. There's also a widely acknowledged 'replication crisis': Scientists publish papers reporting results, but other researchers increasingly find they cannot reproduce those results, leading to what some have called an existential crisis for research.
We're told cuts to federal spending on higher education will imperil research, but such claims would be more troubling if the 'research' were of more reliably high quality.
It's an open secret that the pressure to produce a constant flood of papers that are publishable and, better yet, interesting enough to spark headlines leads to corner-cutting, 'data torture' and overclaiming — or, sometimes, outright fraud.
The result is an expensive self-licking ice cream cone of grant applications and publications, but the actual contribution to human knowledge is often lacking.
Of course, research isn't the only justification for higher education; we had colleges and universities long before professors saw academic publication as the major goal of their jobs.
Higher education was long justified as a way to promote our society's values and instill knowledge.
College grads were supposed to understand philosophy, government, literature and human nature in ways that people without such an education couldn't.
They were supposed to gain a deeper appreciation of our society's roots and purposes, and an ability to think critically, and to re-examine their views in the face of new evidence.
This is one reason military officers are required to have college degrees, a requirement that probably should be rethought: Does anyone seriously believe this is what colleges and universities teach now?
An overriding theme at elite colleges — and by no means limited to them — is that Western culture is uniquely evil, white people are uniquely awful, and pretty much any crime is justifiable so long as the hands committing it are suitably brown and 'oppressed.'
Meanwhile, numerous universities face federal civil-rights investigations for allowing and in some cases promoting antisemitism and violence against Jewish students.
We've seen riots, violence, Jewish students surrounded and attacked on campus or forced to hide out in attics as mobs rampage through buildings.
The notion that our colleges and universities are encouraging students to follow their better instincts seems unsustainable.
And how are schools doing at inculcating actual, you know, knowledge?
Not so well. In a widely cited study, Richard Arum and Josipa Roksa found there's not a lot of learning going on: 45% of students 'did not demonstrate any significant improvement in learning' over the first two years of college, and 36% failed to show any improvement over four years.
The reason: Courses aren't very rigorous, and not much is required of students.
Then we see things like UCLA Medical School's notorious dumbing down of admissions in the name of 'diversity.'
Though racial preferences are outlawed in California, UCLA has made its minimum requirements much less demanding in order to promote minority admissions.
The result: Up to half of UCLA medical students fail basic tests of competence.
The public has noticed, which is why higher education, whose position seemed unassailable not long ago, is facing successful assaults from both the Trump administration and the market.
As one wag put it on X: 'Harvard is quickly realizing that nobody outside of Harvard cares about Harvard.'
Or, if they do care, they want to see it turned upside down and shaken hard.
As evolutionary biologist Thomas Ray observed, 'Every successful system accumulates parasites.' American higher ed has been extraordinarily successful, and it has been parasitized by grifters, political hacks and outright terrorist sympathizers.
Now that it's lost public sympathy, it can expect a stiff dose of salts. Good.
Glenn Harlan Reynolds is a professor of law at the University of Tennessee and founder of the InstaPundit.com blog.
