
Beach Boys founder Brian Wilson regretted being blocked from band's hit song: book
"Brian was in a weak mental state," the entertainment lawyer, who once represented the fragile leader of the Beach Boys, told Fox News Digital.
"Brian often said to me, as sad as it sounds … 'I fried my brain. I took too many drugs.' Brian couldn't get up in the morning without somebody getting him up. He couldn't eat healthily without somebody giving him something healthy to eat."
"So, the good and bad of Dr. Eugene Landy in Brian's life was that he motivated Brian to become a participant in his own life," Mason shared. "But the bad part was that, as time went by and years went by, Dr. Landy expected more and more to replace Brian in the Beach Boys… Brian wasn't allowed to do anything without a Landy handler being with him."
Mason, who has worked with Roy Orbison, Reba McEntire, Shakira and Quincy Jones, among others, has written a new memoir, "Crazy Lucky." It explores what it takes to defend the famous during career-defining moments.
The book delves into Wilson's relationship with Landy, a psychotherapist accused of holding a Svengali-like power over him. Mason said it led to one of the singer/songwriter's biggest regrets.
"Mike [Love] and Carl [Wilson] came into my office and said to Brian, 'Hey, we have the opportunity to write a song for this movie, [1988's] 'Cocktail,'" said Mason.
"'It's going to be starring Tom Cruise. It's really great. We'd love you to join us.' And Brian was really excited. He said, 'Oh, I'd love to do that.' But later in the evening, Brian called and said, 'I shouldn't do that. Dr. Landy said I shouldn't do that.' Well, that turned out to be 'Kokomo,' the biggest hit the Beach Boys had had probably forever. And Brian felt really badly about not working on 'Kokomo.'"
"When he heard it, and when I heard it, we went, 'Oh my gosh, was that a missed opportunity?'" Mason recalled.
Mason wrote that Landy refused to let Wilson participate unless he, too, was listed as a writer on the song. Carl and Bruce Johnston, along with Love, refused. They went on to write "Kokomo" without Wilson's input, a decision Wilson deeply regretted over the years.
"Brian is truly a giant teddy bear and genius who regrets bad decisions and lives for better ones," wrote Mason.
According to Mason's book, Wilson's struggles began in 1968, when he quit performing and devoted himself to songwriting instead. While Wilson was determined "to make the greatest music," his mental health began to deteriorate.
Mason wrote that Wilson's experiments with drugs, specifically LSD and cocaine, had "diminished his mental capacity." He rarely left his bed and, according to reports, would go without brushing his teeth or showering for weeks.
"He eventually became so bizarre that he would sit at the piano in his living room surrounded by actual sand that had been dumped in big piles in a sort of playpen," Mason wrote.
"He was forsaking his young family — wife Marilyn Rovell, a singer with the group the Honeys, and young kids Carnie and Wendy — for his strange kind of creative peace. Four years passed, and he never left the house. His weight ballooned to 350 pounds from eating entire birthday cakes as a late-night snack."
In 1975, a "devastated" Marilyn brought in Landy, a psychologist known for his unconventional 24-hour treatment of celebrities. Wilson, who reportedly feared being committed to a psychiatric hospital, completely surrendered. Their first session took place in Wilson's bedroom closet, where the artist felt safe, the Los Angeles Times reported.
Landy was successful. He padlocked Wilson's fridge, put the star on a diet and shooed away drug-enabling pals, The Telegraph reported.
"Dr. Eugene Landy [helped] Brian overcome his fears of everything," Mason told Fox News Digital. "I would call it an agoraphobia… He feared going outside… And he needed outside help."
Landy's strict methods worked. But in 1976, Landy was fired over a dispute involving fees, the Los Angeles Times reported. Six years later, with Wilson regressing into drugs and obesity, Landy was rehired, the outlet shared. The 24-hour therapy resumed from 1983 to 1986. Landy said he was paid $35,000 a month.
And as Wilson began recording and playing live again, Landy was a constant shadow looming over him. Manager Tom Hulett, who knew that Mason was friendly with the Beach Boys, suggested that he could be "a strong, independent balance." In 1984, Mason was hired.
"I was asked … if I would be Brian's lawyer, and I agreed," said Mason. "At that time, Brian was, I think, doing better, but he had a lot of issues… Brian came to my office at least once a week. We started having Beach Boys meetings at my office once a month, and we all insisted that Brian come to those meetings without Landy."
"Brian was my client… [But] Brian did check in with Landy after our meetings, after our phone calls. Too often I would get a call back from Brian saying, 'I know I said that, but I have to change my mind.'"
Mason wrote that Landy was eager to insert himself into every part of Wilson's life. Wilson was controlled by both prescription drugs and the "Landy handlers" who "secretly or openly recorded everything Brian and anyone else said" for the doctor.
No decision was made without Landy's approval, leaving Mason bewildered and frustrated.
"Ultimately, it led me to say to Brian, 'I can't work with you if Dr. Landy is in a position to change your mind or to second-guess me,'" said Mason. "And he said, 'I understand that.' But then, Dr. Landy called me and said, 'You told Brian that he can't work with me … so you are fired.'"
"That's a shame, but that is the kind of control Dr. Landy had over Brian Wilson," said Mason.
After Mason was fired in 1990, Landy continued to tighten his grip. At one point, he was co-credited as a songwriter on several tracks. Wilson was "an obsession" to Landy, Mason wrote.
But in 1991, the Wilson family took legal action to establish an independent conservatorship. The goal was to stop Landy from further influencing Wilson both personally and financially, the Los Angeles Times reported. In 1992, Landy was barred by court order from contacting Wilson.
"The court ordered Landy to disassociate from Brian," said Mason. "Ultimately, Landy's license to practice psychotherapy in the state of California was revoked. And Brian's second wife, Melinda, was able to keep Brian motivated to perform."
"He did a lot of shows," said Mason. "He wrote songs, he did a lot of work. His health seemed to be pretty good. I saw Brian a number of times after I wasn't his lawyer, and he looked good. He felt good. He was in a good mental state."
Landy died in 2006 at age 71. Wilson died in June of this year at 82.
In his lifetime, Wilson admitted he didn't entirely regret his association with Landy. Mason doesn't either.
"I have to say that, in Brian's case, I don't think there was a better outcome," Mason explained. "Had Landy not become involved, Brian would have become an ineffective vegetable. He was taking too many drugs and couldn't find focus."
"I don't think that, at that point, back when Landy came in, either Brian's ex-wife Marilyn or his daughters were able to motivate him to be independent. Drugs and alcohol have led to the demise of too many people. Many people we see end up dead from the process."
"Saving Brian's life probably necessitated a Eugene Landy who could come in and force him to take control of himself," Mason continued. "I think they were the best years of his health, but the worst of his years with Dr. Landy."