
Syd Releases First Song in Over Three Years
Ahead of her summer stint as a supporting artist on both Billie Eilish and Renee Rapp's respective UK tour dates, Syd has returned with her first piece of solo new music in over three years. Dubbed 'Die For This,' the cut arrives as her first single since the release of her sophomore solo album, Broken Hearts Club, in April 2022.
'Die For This' finds Syd continuing to push her sound forward: an easy-on-the-ears blend of R&B shaded with jazz and pop influences. The two-and-a-half-minute offering is a classic solo Syd cut, spotlighting her effortless ability to glide over any melody with her soothing vocals.
Syd made her official solo debut in 2017 with her first full-length album, Fin. Since that acclaimed record's release, she has stayed busy with follow-up singles, EPs, and features in the lead-up to Broken Hearts Club, as well as the release of The Internet's fourth studio album, Hive Mind, in 2018.
Stream 'Die For This,' out everywhere now, and stay tuned for more updates on what Syd has in store…