After having a heart attack at 36, a master refinisher shares his tips on 'a lost art'

Aaron Moore was at his workshop in Garden Grove when the first wave of pain hit.
That fall morning in 2022, the furniture refinisher, who was 36 at the time, felt his limbs begin to tingle as he took the clamps off a table. He had been under elevated stress. He was juggling care for a busy toddler and a booming business, along with building a social media presence to promote his work.
So as the tingling escalated to moderate chest pain, Moore chalked it up to a panic attack and kept working.
It was only upon seeing a late friend's funeral program pinned above his desk that Moore caved and headed to the hospital. Two weeks later, his doctor told him that decision likely saved his life.
'In the hospital, it was a joke, because we didn't really know what was going on,' Moore said. After being told that he'd had a trio of NSTEMI heart attacks (characterized by partial blockage in a coronary artery), he was still unsure how serious they had been.
At his two-week follow-up, though, Moore said he learned those 'mini heart attacks' could have been precursors to a deadly finale.
'It was more of a, 'Hey, you just narrowly dodged the widow-maker,'' he said.
At home in Orange, Moore didn't have much time to process the episode. His toddler son and newborn daughter kept him plenty busy. However, back in the shop, amid the mesmeric hum of sanders and drum fans, a thought dawned on him: 'What would have happened to all the knowledge if I had died?'
A high school woodshop prodigy, Moore got his start in furniture refinishing at a piano company in Anaheim. His boss, a friend's dad, hired him just before he graduated from Esperanza High School, and he stayed at that job for about five years.
Moore loved the work, but he hated workplace politics. He wanted to get poached by another refinisher based in Garden Grove. That refinisher's name was Butch Crane, but Moore liked to call him Elmer Fudd after Bugs Bunny's antagonist from Looney Tunes: 'Bald, kind of chunky, wore the red plaid flannels.'
When Crane decided to retire in 2010, he was resolved to keep the building out of the city's hands. So he made Moore a deal.
'Five grand. I'm leaving in one month. If you want it, shop's yours,' Moore recalled Crane telling him.
Moore accepted.
Fifteen years later, the tradesman stood outside Crane's old spray booth, sanding a $25,000 rosewood bench.
'Rosewood has a very floral scent to it,' he said. 'You can smell it in the air.'
Above him hung a 'Moore's Refishing' sign, a friend's comical misspelling of the shop's name.
Tucked into a corner of industrial Garden Grove, Moore's Refinishing boasts no ornate exterior. The shop's only signage — save for the misprint in the back — graces the top half of its glass front door.
Inside, though, is every thrifter's Midcentury Modern dream. Atop a wooden mezzanine, a rattan-backed desk sits among chestnut-colored dining chairs. Deeper into the shop, a Grotrian-Steinweg piano has just been set back on its legs.
When Moore first took over the shop, he worked mostly on antiques, or as he calls it, 'grandma's old furniture.'
Over the years, he's amassed high-profile clients from specialty collectors to fine art dealers. His most expensive project thus far is a rare Antoine Philippon & Jacqueline Lecoq wall unit that Laguna Beach gallery owner Peter Blake valued at $175,000.
At 38, Moore has spent more than half his life in a trade with no more than a handful of old-timers left to preserve it. After his 2022 health scare, he became more intent than ever on passing down all he'd learned, but he wasn't sure how.
That's when Anastasia Petukhova wound up at his doorstep.
Petukhova, a Moscow-born marketer and photographer, was teaching marketing classes part-time at Loyola Marymount University when she began flipping furniture as a social experiment to share with her students.
At some point, Petukhova said, 'my flipping sort of evolved from something very basic to some nicer pieces,' and she realized she needed a master to fill in her knowledge gaps. So she started doing her research.
'Turns out there's this guy, Aaron Moore, an hour away from me,' Petukhova said. She messaged him, offering a free photo session in exchange for a refinishing lesson.
'I thought, I show up for half a day, do the skeleton of the process,' she said. 'How difficult can it be?'
They met a few days after Moore's cardiac episode. Moore knew it wasn't wise to return to work so soon, but he'd already canceled on Petukhova twice. He didn't want to bail again.
The pair ended up working together for more than a year on Petukhova's furniture flips, Moore's online refinishing content and later the coffee table book of Petukhova's dreams.
'Revive and Refine: The Art of Furniture Restoration' (self-published last year and available on Moore's website for $125) is a 240-page starter guide for aspiring refinishers, covering everything from the basics of disassembly to master staining techniques. It's intentionally written in such a way that anyone can pick it up and get started on a project — Moore's wife scanned his manuscript for jargon before the book was published.
In the book, Petukhova's images depicting the refinishing process step-by-step are interspersed with fine art-style photographs. The cover image, chosen by Moore's social media followers, is a striking shot of a 1970s Afra and Tobia Scarpa dining chair, so aesthetically composed that the object itself is defamiliarized, taking on the visual quality of an ancient relic.
When Moore first entered the industry in the mid-2000s, he said his mentors habitually kept trade secrets. 'This was an industry of gatekeepers,' he said, adding that master tradesmen ultimately viewed their apprentices — working in the same 10- to 15-mile radius as them — as competition.
However, in the internet age, there's business enough for everyone, Moore said, and teaching others what he knows doesn't threaten his own livelihood. If anything, it preserves his legacy at a pivotal moment for skilled trades.
Enrollment at public two-year schools focusing on vocational and trade programs has risen by nearly 20% since the spring of 2020, according to a May National Student Clearinghouse report. One explanation for this upward trend is the majority belief among Gen Zers that a college degree isn't necessary to obtain a well-paying, stable job.
Moore's mission is to capitalize on this resurgence of interest in the trades. He began noticing the uptick on his own social media channels after he started sharing refinishing content on Instagram in 2021. He's since expanded his content, posting longer-form videos on YouTube and teaching a paid online course that has 100 members.
Moore films most of his online content during the work day, which he admitted has caused some projects to pile up.
Blake, the gallery owner, has begun calling Moore 'Aaron Kardashian' — a dig at what he called the refinisher's growing 'influencer' behavior.
'I love giving him s— about [it], you know, when I walk in there and see the tripod,' Blake said. 'I tell him that, 'Of course, my stuff isn't done, because you're so busy being an influencer.''
But like the vast majority of Moore's clients, Blake is mostly content to wait, knowing his go-to refinisher has never skimped on quality.
'I hate to say this, but I pushed the s— out of him,' Blake said. 'I would give him threats all the time that, 'It better be good, Aaron. You realize this is a $30,000 chair. Do not f— it up.'
'I gotta say, he rose to the occasion,' the gallerist said.
Ivan Astorga, Moore's only full-time employee, initially had trouble adjusting to his boss' high standards. He began working part-time at Moore's Refinishing in 2016, while still employed at his father-in-law's refinishing business. A couple years in, Moore realized Astorga had downplayed his skills and promoted him to full-time.
'Working here was a completely different ballgame. He demanded only the best for his clients,' Astorga said.
At the shop earlier this month, Astorga was making his fourth pass on a chipped wooden table. Several times, he worked a Walmart iron over a wet cloth, then peeled the fabric back to inspect the surface underneath. On the last round, the chip was hardly visible.
To this day, whenever Astorga visits his father-in-law's shop, he said he has to hold his tongue about stains and scratches Moore would never miss.
In the rare event Moore does make a mistake — like the time he sanded through the veneer on a coffee table — 'we have the capabilities to repair it after,' he said.
Sometimes, he added, 'you have to break things first before you can make them better.'
Moore said that as a kid growing up in Yorba Linda, he always loved the physicality of taking things apart and putting them back together. But it's not the work itself that has kept him in the industry; it's the stories.
Pacing across his workshop, Moore rattled off the names of clients with heirlooms in his queue, smiling as he spoke about one woman who brought in her childhood sewing machine for restoration.
Just outside his office, he's curated a Wall of Treasures, composed of miscellaneous objects he found in furniture purchased at auction. Among the bric-a-brac are a hosiery stock card, old negatives and a birthday letter.
'It's history,' Moore said. 'I'm a catch-all person for this, not junk wood.'
Moore doesn't see himself retiring any time soon. No one currently stands to replace him, but he's hoping that the supplemental income he derives from his online and in-person coaching side hustle will allow him to spend fewer hours in the shop.
'I'm tired of doing it to this degree,' Moore said, adding that he'd rather make the bulk of his salary teaching refinishing than doing it himself.
That plan looks promising as he gears up to send a $20,000 quote to a prospective coaching client. To him, the figure seems inordinate. But to his wife Taylor, it represents the true value of his labor.
'My whole family is attorneys,' said Taylor, a probate paralegal. 'They're looking at it from, 'What is your hourly rate? How long is it going to take you to do this? What would you make in the shop?''
Taylor, 33, added that her husband can sometimes underestimate the value of what he does because 'it's such a lost art.' 'He doesn't realize what he does is so unique, and no one really does it anymore,' she said. 'All the old-timers are either retired or have passed or slowed down.'
Financially, it would behoove Moore to keep the trade specialized and therefore more lucrative for himself. But that's not the future he wrote a book for.