Roadmap to Recovery: Model for high school coming to Hampton Roads is getting results for addicted teens
Chesterfield Recovery Academy, a hybrid high school that enables addicted teens to earn a diploma and receive support services for sobriety at the same time and in the same place, is a model for Harbor Hope Center, a similar school coming to Hampton Roads to serve Virginia Beach, Norfolk, Chesapeake, Portsmouth and Suffolk. Similar recovery schools are in the works for Northern Virginia and Waynesboro.
10 On Your Side visited the Chesterfield Recovery Academy last week. The school serves Richmond and 14 surrounding school divisions; enrollment is small, about 30 students, and the school is about to complete its third year.
'A lot of times [the students] come in a little apprehensive when they first come in,' said Program Coordinator Justin Savoy, because many arrive trying to kick booze, weed or prescription pills. 'After a few weeks of being here, they kind of realize that this is the most support they'll ever get compared to a traditional school.'
Three CRA students told 10 On Your Side how their substance abuse began.
'My grandma just passed away from pancreatic cancer,' said junior Derrick Buikema. 'My father had passed away.'
'I had legal trouble,' said sophomore Ethan Jones. 'Just like getting arrested and doing a bunch of drugs.'
'My mental health was really bad,' said sophomore Lexie Mackay.
And then the three students talked about their descent into addiction.
'I started experimenting with [Xanax], [Percocet],' Buikema said. 'I started abusing cough syrup, alcohol, of course. I would do shrooms.'
Said Mackay: 'I was smoking [weed] all the time and getting in trouble.'
'I was taking Xanax, OxyContin and other stuff, too,' Jones said. 'When I do things like that, it's just not thinking. My subconscious just goes for it.' He got kicked out of two different schools for possession before enrolling at CRA.
They've found a road to recovery here, and Savoy said CRA is not a form of school punishment or a court-ordered placement. Enrollment is voluntary.
'A child can refer themselves — parent, guardian, outside person, it can be a clinician or even a school principal,' he said.
The school has proven helpful for these students, at least.
'Now, I do my school, I do my homework,' Buikema said.
Said Savoy: 'Every day they have group and a different subject for psychoeducation, substance abuse, relationship building, art therapy, music therapy, and they begin each day eating breakfast together.'
Jones said he appreciates the tough candor of the instructors and counselors.
'Instead of spreading a bunch of lies, [they're] actually telling me it's not going to be easy,' Jones said. 'It's going to be hard for you, and it's not going to go away completely.'
The hybrid program of combining diploma studies with recovery resources has 'really made a difference for me reducing my use,' Buikema said.
Said Mackay: 'Counselors have really helped me and really changed how I perceive things.'
'It's been way easier to stay in line and out of trouble, and think before I do things,' Jones said.
'It made it to the point where I don't feel like I have to use,' Buikema said. 'I learned a lot of coping skills here.'
'I genuinely became happier and actually wanting to stop doing drugs,' Mackay added.
Savoy said the students are drug-tested randomly at least twice a month. For some students CRA works well, but not for all.
'Unfortunately, everyone is not always ready for recovery, so sometimes students have to be transitioned out,' Savoy said. 'They are welcomed back to our program.'
None of the three students was shy about where they would have headed if not for recovery school.
'Probably a group home or detention,' Buikema said. 'This place kind of saved my life. I couldn't go anywhere else.'
'I would just stay in trouble with everything going downhill,' Mackay said. 'My mental health would be getting really bad, being in a spiral I can't get out of.'
Instead, they now have a future to anticipate.
'I'm hoping to do music,' Jones said, 'as a producer.'
For Buikema, it's automotive technology and mechanics.
'I love cars, everything about them,' Buikema said. 'So working on that would be awesome.'
Mackay, who spent time fashioning her hair for our interview, said she wants a career in 'cosmetology, maybe hair or nails. I love to do hair.'
The students get a glimpse of the future through a different prism. Virginia Commonwealth University students make regular visits. They're 'college students who are in recovery themselves that work with our kids, teach them how to tell their stories, but also what life is like … being in sobriety beyond high school,' Savoy said.
Chesterfield will hold its completion ceremony May 16, but a week later, seniors can also walk with their home district school, and they get the exact same diploma.
Hampton Roads' Harbor Hope Center plans to open in the former Great Bridge Middle School and will serve all five Southside cities.
While recovery schools are new to the Commonwealth, they date back decades elsewhere. Online information lists the Phoenix School of Silver Spring, Md., as the first, opening in 1979; it is currently listed as the only recovery school in Maryland.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
