Worst song ever? Starship's ‘We Built This City' remains pop's most loathed tune.

Boston Globe · 9 hours ago
If you've never heard the song, just imagine that AI existed in the mid-1980s, and someone typed 'write a soul-sucking pop song that lacks artistic merit but will make a lot of money and appeal to those who love shoulder pads' into ChatGPT.
Still, 40 years later, 'We Built This City' continues to haunt us. When I was a teenager in the 1980s, I developed a trigger finger to avoid the song. Whenever the a cappella intro came on the car radio, I desperately changed the station before singers Mickey Thomas and Grace Slick had a chance to finish the stanza. Similarly, when the video came on MTV, I grabbed the remote and frantically switched the channel.
What was most lamentable about Starship circa 1985 was that it had become a machine-made husk of its former self. The band formed as Jefferson Airplane in 1965. By 1967, they were poster children for the Summer of Love. The band was a messy, dangerous, acid-encrusted cyclone of pot, booze, and other things not appropriate to mention in a family newspaper. In short, they were every parent's worst nightmare.
The evolution of the band can be seen clearly on 'American Bandstand,' where Jefferson Airplane appeared in 1967.
The camera cuts to the band, and the ominous opening strains of 'White Rabbit' begin. The camera slowly pans to Slick, who has an uncanny ability to stare while never blinking. 'One pill makes you larger …' she sings sinisterly. With that, a generation of suburban teens unwittingly entered the Summer of Love.
Contrast that with a later Jefferson Starship appearance on 'Bandstand.'
By now, you're thinking, 'When is he going to cut the history and start the hating?' Well, my impatient friends, the background is important because it demonstrates why 'We Built This City' is still in our lives. Jefferson Airplane was a band with cachet. By the time the band was simply Starship, Slick was still there, and she was a survivor.
Cast your mind back to the mid-1980s. Music fans loved a good survivor story. Tina Turner was riding high, the Moody Blues were getting wistful with 'Your Wildest Dreams,' the Kinks came back with 'Come Dancing,' prog-rockers Yes hit with 'Owner of a Lonely Heart.' Aretha Franklin asked the eternal question, 'Who's Zoomin' Who?' and swinging 1960s chanteuse Dusty Springfield (with an assist from the Pet Shop Boys) asked, 'What Have I Done to Deserve This?'
With Slick still aboard, Starship had its own 1960s songstress and was teed up and ready for the synthetic, engineered pop of the 1980s. By the way, Slick has since said she hates 'We Built This City.' The cheerful, nonsensical ditty is about as far from 'White Rabbit' as pop gets.
Much like the lyrics, the video for the song is cobbled together and makes little sense. You almost need some of Slick's old 'White Rabbit' party favors to piece together the story. A collection of random faces appears on green screen, and then Thomas and his gang of random green screen people are suddenly at the Lincoln Memorial, and then the statue of Lincoln comes to life. Did I mention this is a song about San Francisco, or maybe Los Angeles? That part is a little muddy.
Next is Slick, with a green screen gang of her own. The Las Vegas Strip is projected behind her. Now it's Japan. Wait, is she in a bank lobby now? Watch out! A giant pair of dice just dropped from the sky. Run, green screen people! Run! How did the band end up on top of a building? Did the green screen people die under the giant dice?
The song and video are terrible, but many 1980s songs and videos are equally dumb. I'm still trying to figure out how to Wang Chung. But why are we still talking about 'City' 40 years later and not any of those other clunkers?
After the song was ignored for decades, hating 'We Built This City' became fashionable in the 2000s. It began in earnest in 2011, when the song topped a Rolling Stone readers' poll of the worst songs of the 1980s.
As much as I dislike 'We Built This City,' I don't think it's the worst song of all time. Sure, it makes my ears bleed. But I have a more visceral reaction to 'Who Let the Dogs Out,' 'Blue (Da Ba Dee),' or 'Mambo No. 5 (A Little Bit Of…).' Perhaps the world needs to go a bit easier on Starship; instead of calling 'We Built This City' the worst song ever, just call it the most disappointing song ever recorded.
Christopher Muther can be reached at

Related Articles

Dex is an AI-powered camera device that helps children learn new languages
Yahoo · 5 hours ago

Three parents—Reni Cao, Xiao Zhang, and Susan Rosenthal—were worried about their children's screen time, so they left their tech jobs to create a product that encourages children to engage with the real world while also helping them learn a new language. Their move has paid off, as the company recently raised $4.8 million in funding.

The newly launched gadget is called Dex and resembles a high-tech magnifying glass with a camera lens on one side and a touchscreen on the other. When kids use the device to take pictures of objects, the AI uses image recognition to identify the object and translates the word into the selected language. It also features interactive story lessons and games.

While kid-focused language learning apps like Duolingo Kids exist, Dex argues that it takes a more engaging approach that emphasizes hands-on experiences, allowing children to immerse themselves in the language. 'We're trying to teach authentic language in the real world in a way that's interactive,' Cao told TechCrunch. 'The kids are not only listening or doing what they are told to do, but rather, they are actually thinking, creating, interacting, running around, and just being curious about things, and acquire the necessary language associated with those concepts and objects.'

Dex is designed for kids ages 3 to 8 and currently supports Chinese, French, German, Hindi, Italian, Japanese, Korean, and Spanish. It also offers support for 34 dialects, including Egyptian Arabic, Taiwanese Mandarin, and Mexican Spanish.

In addition to object recognition, Dex features a library of interactive stories that encourage children to actively participate in the narrative. As the story unfolds, kids are prompted to respond, such as greeting characters in the language they are learning. The device comes with a dedicated app for parents to see a detailed overview of their child's progress, including the vocabulary words they've learned, the stories they've engaged with, and the number of consecutive days they've used Dex.

Additionally, Dex is currently developing a feature that allows kids to ask an AI chatbot questions and engage in free-form conversations. This feature is already available to some testers, but the company admits it isn't ready for a wider rollout. Parents might also be cautious about introducing AI chatbots to their children.

During our testing of Dex, we had concerns about the possibility of a child learning inappropriate words. Cao assured us that 'rigid safety prompts' are included whenever the large language model is used across vision, reasoning, and text-to-speech. He said, 'We have an always-on safety agent that evaluates conversations in real-time and filters conversations with a safe stop word list. The agent will suppress conversation if any of the stop words are mentioned, including but not limited to those related to sexuality, religion, politics, etc. Parents will soon be able to further add to personalized stop word lists.' Plus, the company said that the AI is trained using vocabulary standards similar to those found in Britannica Kids and other children's encyclopedias.

In our testing, the AI successfully ignored topics related to nudity. However, it did recognize and accurately translate the term 'gun,' something parents should consider when purchasing the device. In response to our findings, Cao told us, 'Regulation-wise, I'm not worried, but I do think this presents a concern, especially among [some] parents.' He added that these concerns have pushed the company to soon introduce a settings option to filter out specific words, such as guns, cigarettes, vape pens, fireworks, marijuana, and beer bottles.

Dex also has a zero-data-retention policy. While this means there's no risk of sensitive or personal images being stored, one downside could be that parents are left in the dark about the type of content their kids may be capturing. Dex is also actively working toward obtaining COPPA certification, which would make it compliant with the Children's Online Privacy Protection Act.

The company secured funding from ClayVC, EmbeddingVC, Parable, and UpscaleX. Notable angel investors include Pinterest founder Ben Silbermann, Curated co-founder Eduardo Vivas, Lilian Weng, the former head of safety at OpenAI, and Richard Wong (ex-Coursera).

The device is priced at $250, which feels steep for a product designed for children. However, Dex positions itself as a more affordable alternative to hiring a tutor, who can charge up to $80 per hour, or attending a language immersion school, which can cost several hundred to several thousand dollars. Dex says that hundreds of families have already purchased the device.
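The safety mechanism Cao describes, an always-on agent that checks model output against a stop-word list parents can extend, follows a familiar pattern. The sketch below is a minimal illustration of that pattern under assumptions of our own; the class name, default word list, and fallback message are hypothetical, and this is not Dex's actual implementation.

```python
# Minimal, illustrative sketch of a stop-word "safety agent" like the one the
# article describes. All names here (SafetyAgent, DEFAULT_STOP_WORDS) are
# hypothetical; this is not Dex's code.
import re

DEFAULT_STOP_WORDS = {"politics", "religion"}  # placeholder categories only

class SafetyAgent:
    def __init__(self, extra_stop_words=None):
        # Parents can extend the list, mirroring the personalized lists Cao mentions.
        self.stop_words = DEFAULT_STOP_WORDS | set(extra_stop_words or [])

    def check(self, text: str) -> bool:
        """Return True if the text is safe to speak aloud to the child."""
        tokens = re.findall(r"[a-z']+", text.lower())
        return not any(tok in self.stop_words for tok in tokens)

    def filter(self, text: str, fallback: str = "Let's look at something else!") -> str:
        # Suppress the whole response if any stop word appears, as described.
        return text if self.check(text) else fallback

agent = SafetyAgent(extra_stop_words={"gun"})
print(agent.filter("That is a toy gun."))    # suppressed, returns the fallback
print(agent.filter("That is a red apple."))  # passes through unchanged
```

A production version would sit between the language model and the text-to-speech stage and would need multilingual and fuzzy matching, which a plain token check like this does not attempt.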

I'm an AI artist working in Hollywood. Here's my advice for others looking to use the tech to boost their careers.
Business Insider · 5 hours ago

Hollywood is cautiously adopting AI, using it to cut time and costs on tasks ranging from script evaluation to post-production. But concerns that it will steal creative works and wipe out jobs persist. Minta Carlson, 35, who goes by Araminta K professionally, represents an emerging class of labor in Hollywood. She's a senior creative AI architect at Moonvalley, one of the prominent AI companies working in Hollywood. Moonvalley, which just raised $84 million, pitches an ethical AI model it calls Marey and owns the AI film studio Asteria Film Co., which was cofounded by filmmaker and actor Natasha Lyonne and filmmaker Bryn Mooser. In this as-told-to essay, which has been edited for length and clarity, Carlson shares her career path and how other creative people in Hollywood can adapt to AI.

After studying theater and working in graphic design, I taught myself how to train AI models. In the Hollywood context, what we do is called "fine-tuning" models. Let's say you wanted a specific dragon in a production. You could train a model to understand who that dragon is, how it moves, what it looks like — what it looks like really close up and really far away. And when you point the dragon this way and you tell it to fly upward toward the sunset, it'll understand what that means.

I started working at Asteria full time early this year. Sometimes, I'm working on animated shorts in-house, and other times, I'm working with studios on VFX or background pieces. I'll work with creative directors and others to flesh out how a project's characters and styles should look. We recently helped a studio augment a party scene that otherwise would have been cut for budgetary reasons.

I had not seen a lot of people like me before starting this job. Anyone with a technical and art background can do it. It takes less than an hour to explain how to curate a dataset in the context that we need. What's really difficult to teach someone is how to have taste and how to have a really critical eye. With people who don't have an art background, it can be weeks of going back and forth. I could definitely see technical artists, traditional animators, and colorists doing this work, as well as people with less traditionally technical roles, like a creative director or a concept artist — anyone who has to communicate visual content to other people.

The most misunderstood idea is that AI is this amorphous blob that can replace an artist. At the end of the day, AI doesn't have a perspective, vision, or opinion about anything. If you're just prompting a model, whatever results you're getting are just that model's bias unless you know what you're asking for and how to ask it.

I still lean on my traditional skills

I was born in Berkeley and moved to New England when I was young. I went to school for theater directing and playwriting. After college, I worked as an independent graphic designer and honed art skills on my own. I also built websites.

I started working with Stable Diffusion in 2022 when AI was starting to blossom. I wasn't happy with the results, so I started to learn how to train the model. It was very technical, so I just muscled through. It's a very artistic and curatorial process. In the case of Marey, you're usually putting in 20 or 30 images or videos to fine-tune the model, so each one has so much impact, and you need to understand each one's strengths and weaknesses and how each one relates to the others.

Knowing how to look technically at art and think about a whole piece is something I had to do in school for theater, and those skills came back in full force.

For people already working in Hollywood, it's really easy to get overwhelmed by all of the different tools. I tell people, don't test every single tool. Come up with something you're trying to solve, like making someone walk across the screen or drawing multiple angles of an animated character. I like sites like Replicate or Fal that let you test a lot of open-source models.

Knowing how to illustrate and animate is still valuable, but I recommend people prioritize creativity and not just technical skills. Otherwise, you might never think of all of those different movements that you need to show the AI. Rather than drawing a million of the same character to try to perfect it, I would draw a million different characters to create a perspective and unique style.

Artists have a future in Hollywood

When I was consulting, I had conversations with people in marketing who wanted to try to replace artists with AI, and it didn't really work. It was very obvious that the person asking didn't understand how AI and art worked, and they didn't have self-awareness around their own standards. Even when I saw colleagues try to follow through with those requests, it was never good enough. Now, I'm rarely involved with a project if we're not taking direction from an artist or a director. I think the fear of AI impacting jobs in Hollywood is understandable, but I'm seeing that AI is going to unlock a lot of productions that may be struggling to get funding or forward momentum.
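Carlson's point that a 20-to-30-clip fine-tuning set gives every clip outsized weight is, in practice, a curation problem. As a rough, generic sketch, not Moonvalley's or Asteria's tooling, and with invented file names and fields, a curated set is often just a small annotated manifest that gets audited before training:

```python
# Generic sketch of auditing a tiny fine-tuning dataset, in the spirit of the
# 20-30 clip workflow described above. Paths, fields, and thresholds are
# invented for illustration only.
import json
from pathlib import Path

MANIFEST = Path("dragon_dataset/manifest.json")  # hypothetical location

def load_manifest(path: Path) -> list[dict]:
    """Each entry: {"file": ..., "caption": ..., "distance": "close-up" | "medium" | "far"}."""
    return json.loads(path.read_text())

def audit(entries: list[dict]) -> list[str]:
    """Flag gaps a curator would care about: duplicates, missing captions,
    and viewpoints (close up vs. far away, as Carlson describes) not covered."""
    problems = []
    seen = set()
    for e in entries:
        if e["file"] in seen:
            problems.append(f"duplicate clip: {e['file']}")
        seen.add(e["file"])
        if not e.get("caption"):
            problems.append(f"missing caption: {e['file']}")
    distances = {e.get("distance") for e in entries}
    for needed in ("close-up", "medium", "far"):
        if needed not in distances:
            problems.append(f"no {needed} shots in the set")
    if not 20 <= len(entries) <= 30:
        problems.append(f"{len(entries)} clips; aim for roughly 20-30")
    return problems

if __name__ == "__main__":
    issues = audit(load_manifest(MANIFEST))
    print("\n".join(issues) or "dataset looks balanced")
```

The checks mirror the habits she describes: no duplicates, a caption for every clip, and coverage of the close-up and far-away views the model needs to learn.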

Beauty publishing was always a lie. But AI just broke it
Fast Company · 6 hours ago

Beauty magazines have been lying to readers for decades—but at least they used to start with actual humans. NewBeauty magazine's Summer/Fall 2025 issue quietly crossed that line, publishing a multipage article dedicated to the beautification of female skin that featured perfect female models who weren't real. Professional photographer Cassandra Klepac spotted the feature and pointed out that each photo was labeled as AI-generated and included the prompt used to create it.

With the advent of technology capable of synthesizing ultra-high-definition photos of realistic humans, this was bound to happen sooner rather than later. Given the hell that Vogue recently faced for featuring Guess advertisements with AI-generated models—sparking the rage of 2.7 million TikTok viewers and subscription cancellations—it is surprising that NewBeauty's editors decided to do the same with actual editorial content, the supposedly 'real' part of magazines.

So why did the magazine—which calls itself 'the beauty authority' in its tagline—do this? 'NewBeauty features both real people and patients, alongside AI-generated images,' executive editor Liz Ritter told me via email. 'We maintain a strict policy of transparency by clearly labeling all AI content in detail in our captions, including the prompts used to create these images, so readers always know the difference.'

Her nonanswer leaves us only to speculate about the reasons why. Talking about the Guess campaign, Sara Ziff—founder of the Model Alliance—said that it was 'less about innovation and more about desperation and the need to cut costs.' Given the depressed status of the print media industry, I suspect that may have played a role in the case of NewBeauty.

Perfectly legal . . .

However you may feel about the campaign, we know that NewBeauty didn't do anything illegal. There are virtually no laws governing editorial use of AI-generated humans at this point, except to stop deceptive use in politics and with regard to the honor of individuals (something already covered by libel laws). Surprisingly, advertising is a little more regulated. The Federal Trade Commission can penalize deceptive advertising practices. New York's groundbreaking AI disclosure law targets advertisements, requiring 'conspicuous disclosure' when synthetic performers are used. But editorial? It exists in a regulatory wasteland, leaving it to the judgment of editors. Europe's comprehensive AI Act mandates clear labeling of AI-generated content and carries maximum fines of €35 million ($40.7 million), but it focuses on transparency, not prohibition. You can fabricate entire humans for editorial use—you just have to mention it in the caption. NewBeauty did exactly that.

. . . but dangerous anyway?

We also know that, even though there's nothing illegal about it, this doesn't mean it is right. Using artificial intelligence feels dangerous because it is so easy and so powerful. When it spreads—and it will—many professions will be affected. This includes not only the models, photographers, makeup artists, and all the people who make real photoshoots possible, but also the Photoshop artists who retouch what comes out of the digital camera into an image that quite often has very little to do with what the sensors capture.

For the past few decades, Photoshop artists have erased wrinkles, refined arms, rebuilt waistlines, adjusted eyes, and turned anything that editors deemed imperfect into whatever fantasy beauty standard the industry set. Reality has been malleable, to be generous.

Remember that time Rolling Stone heavily retouched Katy Perry because they didn't think she was pretty enough? Or that Lena Dunham Vogue cover and photo feature, the one in which she was missing an arm? Dunham said at the time those photos were intended as 'fantasy.' Like everything else featured in glossy pages. Those were just two high-profile examples of a practice that happens regularly for any cover of any fashion, beauty, or celebrity print magazine. In this sense, acting surprised or offended by NewBeauty's AI models feels hollow, albeit understandable—due to a fear of the damage that AI tools will bring to the industry.

It's been a ruse forever

The hard reality is that photographers have used lighting and filtering tricks to make things look more beautiful than they are in real life since the advent of the medium. And the editorial and advertising industries have been breaking every taboo in digital manipulation since Photoshop was invented. Today, AI is democratizing the deception once again—to the point where a single art director for some random magazine can create a high-resolution print spread full of beautiful people who don't exist, simply by using a short prompt and spending a couple of dollars.

I get it. It's tempting to tweak reality, sometimes rearranging it completely, to tell a compelling narrative. This summer I went to the Robert Capa museum in Budapest—highly recommended—and stared for a while at that famous Spanish Civil War photo of a Republican 'miliciano' being shot. I considered its terrible beauty and the effect it had on the public in an era in which the specter of Nazism and fascism was rising in Europe. I also considered the fact that some experts believe the photo may have been staged (while others vehemently disagree) and pondered what is real and what's not, and the effects of perceived reality versus 'real reality' versus manipulated reality. These are questions we constantly face as journalists. If Capa really staged that photo, perhaps it was the right thing to do at the time. Perhaps not.

But I digress. I don't pretend to hold NewBeauty to the same fact-checking standards that governed news media back in the time of Capa. Beauty, fashion, cars, and luxury magazines are all part of that aspirational world in which reality easily gets bent to tell a fantasy. I would say that, by clearly labeling the AI images, NewBeauty is being far more honest than the editors of fashion and beauty magazines have been in years and decades past. Those magazines never labeled their photos, 'THIS CELEB IS PHOTOSHOPPED! THIS AIN'T REAL, STEPHANIE! STOP DIETING! LOL!' Yet all covers and many interior shots were digitally altered and many times reconstructed beyond recognition, sometimes pathetically so.

Indeed, the beauty industry was already constructed on visual lies, but in the age of AI, the powers that be won't stop here. Will it be problematic? Yes. Will it cause real economic and personal damage? Most definitely. But we will get more and more used to it until we stop questioning the practice at all. I hate to tell you I told you so, but I told you so. It's the destruction of reality as we know it.
