
Camilo shares a healthy skepticism of AI in his new single 'Maldito ChatGPT'
Amid heated public debates over the growing use of artificial intelligence in everyday life, the Latin Grammy-winning pop star Camilo warns humanity against an over-reliance on one particular AI platform: ChatGPT.
On June 25, the Colombian singer-songwriter released the Trooko-produced electro-pop single, 'Maldito ChatGPT,' which playfully critiques the role of artificial intelligence in human affairs. In his lyrics, he consults the ChatGPT bot for advice on resolving his relationship woes, singing: 'You're not for me, that's what ChatGPT told me, it knows me better than I know myself.'
The new music video — directed by Camilo's spouse, Evaluna Montaner, and Sebastian Andrade — is just as critical of this 'smart' technology. Set in a dimly lit office with Post-it notes and paper scattered about the cubicle, the visuals pay homage to the aesthetics of the 1999 cult comedy film 'Office Space.'
Camilo, dressed in full office wear (save for his bare feet), agonizes over his relationship, unable to decide whether to stay. He shakes a Magic 8 Ball, flips through a paper fortune teller and pulls petals from a daisy. Finally, a disembodied robot voice affirms that the differences between Camilo and his lover are clear and might cause issues in the long run. 'You deserve a relationship where you feel full compatibility,' the voice says.
When asked how he feels, Camilo wraps the song with: 'Like absolute crap, dude. How else am I supposed to feel?'
'Maldito ChatGPT' is a timely response to the increasing role of AI in people's personal lives. The ChatGPT platform now offers a specialized bot for relationship advice, with mixed results for its human users; an early study by MIT's Media Lab linked frequent use of ChatGPT to an increase in loneliness and emotional dependence, though the results have not yet been peer-reviewed.
The platform has also raised ethical questions in the news recently. Earlier this month, CBS News interviewed an American man who proposed to an AI chatbot he had programmed for flirty responses — despite living with his very human partner and their 2-year-old child. Educators, meanwhile, have expressed concerns about their students using ChatGPT to complete assignments, hindering their ability to develop core skills. And OpenAI, the company behind ChatGPT, has become so influential that it secured a $200-million contract with the Department of Defense to aid in 'national security missions.'
As humans continue to engage with these innovative AI tools without any guardrails, outsourcing matters of the heart to technology gives Camilo the most pause. 'In the midst of everything that seems calculated, choosing from the heart remains a radical act,' said Camilo in a public statement.
'We live surrounded by quick answers,' he elaborated on Instagram. 'By formulas designed to avoid failure. By technologies that predict and know everything. By ideas about what love is supposed to look like. There's something that doesn't fit into any logic. Or any checklist. Love isn't a casting call. Love is something you feel. And nothing — and no one — can ever feel it for us.'

Related Articles


USA Today
'The Bear' is back, baby: Season 4 review
There is a moment in Season 4 of FX's "The Bear," which has taken over every summer on TV since it premiered in 2022, when you acutely remember why you got so sucked into this show in the first place, especially after last year's third season made us forget.

Much has been written and said about the acclaimed series – which launched its cast into superstardom and took home a treasure chest full of Emmy awards – and its ability to engross and bewitch its viewers. There's the frenetic energy of its setting in a restaurant kitchen. There's the aptitude of its talented actors, who spit profanities as sharp as their chef's knives as they chop and stir and season and argue. There's the sense of place in a perpetually overcast Chicago and the triumphs and tragedies that populate every episode. There are the Oscar-winning guest stars and family gatherings that make the Roman Colosseum look tame.

But the heart and soul of "The Bear" and its return-to-form fourth season (now streaming on Hulu, ★★★½ out of four) – the meat and potatoes, if you will – are the people. The characters keep you coming back for more: chef Carmy (Jeremy Allen White) with his raw anxiety and trauma; "cousin" Richie (Ebon Moss-Bachrach) with his anger that can be tamped down by joy; and chef Sydney (Ayo Edebiri), a voice of sense, reason and professionalism but also vulnerability and imposter syndrome. And "The Bear" Season 4 gets them right, to its end.

Without them, the frenzy that is this show's signature mode is just noise, not story. And that's the thread that got lost in last year's lackluster third season, where vibes and an overly artsy structure got in the way of just seeing this trio in a room together, preferably a kitchen. In Season 4, "The Bear" is serving what we might call humble pie: a reset from the sins of Season 3.
It's not peaceful, because there is no peace in the pandemonium that is nightly service at a restaurant, but the 10 new episodes find a rhythm in the mad music. Creator Christopher Storer and the cast deliver more of what we love about "The Bear," sometimes sweetly and quietly and sometimes with deafening fury. But this year, the chaos is focused and controlled. Every second counts.

The new episodes pick up right after the Season 3 finale, in which Carmy and Sydney's restaurant received a rough review from the Chicago Tribune. Coupled with Carmy's mismanagement of its budget and the general ill use of the staff and resources, The Bear is just weeks away from going under. That point is underlined by a large countdown clock that investor/patron Uncle Jimmy (Oliver Platt) has placed in the kitchen. Everyone has to get better, calmer and faster. Carmy has to make sacrifices. And Sydney has to decide if she's staying or jumping ship to a job with another buzzy chef.

Whereas Season 3 episodes would often slip and slide around a plot and a point and blur into each other lazily, the new installments are sharp and addictive, begging you to just let the next episode play on. There is the trademark radical realism and awkwardness to the dialogue, particularly in an episode set during a wedding that sees many returns from fan-favorite guest stars, and raw emotion on every sleeve. Parenting remains the show's prevailing theme, whether it's of an older generation, a new one or even the caretaking of a business. Everybody could use a little therapy, particularly Carmy. But it's tantalizing to watch them work out their issues in front of us instead.

If there's one major flaw in the new season (which at times feels like it might be the final one, too), it's that the laser focus on Carmy means some members of the great ensemble are left behind.
The wild-haired protagonist finally confronts the trauma of losing his brother Mikey (Jon Bernthal, back for a cameo early on) to suicide, and the emotional abuse and alcoholism of his mother (Jamie Lee Curtis, also back). It is cathartic and electrifying, but his lengthy screen time means there's less for the show's other standouts, like Marcus (Lionel Boyce) and Tina (Liza Colón-Zayas). But "The Bear" happily leaves time for some. You'll find yourself heavily invested in half a dozen subplots that seem to perfectly illustrate the old aphorism that there are no small parts, only small actors.

There is so much more heart to the new season, and if you were disillusioned last year, you might be won back just as easily as I was. As the Season 4 plot unfolds, the path forward for the series becomes uncertain. The writers could easily swing open a door to a fifth season, or perhaps close up "The Bear" for good, like so many restaurants and TV shows before it. It's a mark of the craftsmanship that you'll find yourself satisfied with either answer. This could be the end, or it could just be a beginning. Either way, I'm so glad to have dined here.


Time Business News
Find Out What ChatGPT Knows About You—How To Make It Forget
If you're a regular ChatGPT user, you should know that the AI chatbot may have stored details about you, including details you'd rather it didn't have. This article explains how to find out what information ChatGPT holds about you and how to remove it from its database.

ChatGPT collects fragments of information from your conversations and stores them in memory for later reference. It might, for example, remember your job, where you live, and your interests and hobbies, and work those details into future replies.

There's usually a visible sign when ChatGPT saves a specific piece of information about you: an 'Updated saved memory' label appears in the chat. Click the label to see exactly what ChatGPT has committed to memory.

You don't have to rely on spotting these small labels, though. You can simply ask ChatGPT to reveal the data it holds on you with the prompt: 'What do you know about me?' Memories are saved for both free and paid account holders, though you do need to be logged in to an account; previously, an account was required before the AI could keep records of your personal information at all.

When I entered that prompt into my account, ChatGPT produced a long list of things it had stored about me: the novels I'd written, my current home, locations I'm considering moving to, and the music genres I like. At the bottom of the list, it asked: 'Would you like me to forget or update any of this?' I had previously used ChatGPT to research buying a new car, and the stored data included a list of the vehicles I had shortlisted. I told ChatGPT, 'I have bought the car and it's a Honda Jazz Hybrid,' and it updated its memory accordingly, as confirmed by another 'Updated saved memory' label. 'If you ever want help with maintenance tips, fuel efficiency tracking, or setting up Android Auto in your Honda Jazz Hybrid, just shout,' it replied.

Clicking an 'Updated saved memory' label also gives you an option to manage your memories. Alternatively, click your profile icon, select Settings > Personalization, then click Manage Memories (a small text link below the memory options). You'll see a full review of the information ChatGPT is storing about you. Select the bin icon next to a piece of data to erase it, or click 'Delete All' to wipe the memory entirely. Simply asking ChatGPT in a normal chat to erase its memories reportedly doesn't always work, so this method is recommended.

If you want to chat with ChatGPT about a personal issue, or anything else you don't want saved for later reference, you can use a temporary chat. Activate it by pressing the speech bubble with a dotted outline near your profile image in the upper right-hand corner of the screen. In temporary mode, the AI saves nothing about the conversation to its memory and doesn't show it in your chat history. You won't be able to return to the conversation once you've ended it, so if there's information you want to keep, copy it out of the chat or take screenshots first.
You might be content with ChatGPT keeping your personal data but still not want anyone else breaking into your account and stealing it. It's therefore advisable to enable two-factor authentication, which stops anyone who gets hold of your password from accessing your account. Click the profile icon, then Settings; select Security from the Settings menu, then Multi-factor authentication. You'll be shown a QR code to scan with an authenticator app such as Authy. Once set up, you'll have to enter a six-digit code from your authenticator app each time you log in to ChatGPT from another device.

ChatGPT is an AI-powered natural language processing tool that supports real-time conversation with a chatbot. You can ask the model questions and have it help you write emails, articles and even code. It has weaknesses, though. Stock phrasing: ChatGPT often leans on repeated, formal terms such as 'in summary' and 'ever-evolving landscape.' Hallucinations and misinformation: ChatGPT can produce errors and misleading information, such as inaccurate citations and invented claims. It also won't give exactly the same response, in content and form, to every person who asks the same question; while responses to the same or closely related questions can be near-identical, they vary with the specific details, context and quality of the information you submit.

In short, ChatGPT can save personal information from your conversations to tailor future replies, but you retain control over that information. Ask 'What do you know about me?' to find out what's saved, and correct errors or erase specific memories via the 'Manage memories' option in Settings.
Temporary chats ensure that no data is saved, and enabling two-factor authentication adds an extra layer of security to your account. Together, these tools let you manage your personal information and keep your ChatGPT conversations private and secure.


Tom's Guide
The one word you should never say to your AI chatbot
Getting the best response from your favorite chatbot, whether it's ChatGPT, Gemini, Claude or any of 500 other bots, doesn't always mean using power words or prompts that work like magic. Sometimes, it's about what you leave out: by cutting one simple word, you can dramatically increase your chances of better results. After testing hundreds of prompts, I've found that avoiding a single term consistently delivers sharper, more useful answers: stop starting your prompts with 'can.'

Words like 'can you' or 'could you' introduce uncertainty from the start, making the AI interpret your request as a polite question or suggestion rather than a clear instruction. Here are a few examples of what users are currently doing: 'Can you rewrite this to sound more professional?' and 'Can you summarize this report and give me the top 5 bullet points?' The problem is that the chatbot may interpret 'can' literally and, instead of doing the task, spend words on whether it is able to.

The fix: drop 'can' and command directly. For example, prompting 'Can you list the current marketing trends?' won't get results as good as 'Create a list of current marketing trends.' Leading with a strong verb ('Create,' 'Summarize,' 'List') helps the AI understand exactly what you want, resulting in faster, more accurate answers. It also saves you time; the AI won't waste words explaining whether it can do something, it will simply do it. The more direct your prompt, the better the result.

Pro tip: For complex tasks, add context, but always lead with a command. This is a great place for the 3-word rule. Example: 'Act as a career coach. Draft a step-by-step plan to transition into AI engineering.'
Avoid these "soft" openers that invite uncertainty:
- 'Try to…' suggests tentativeness and may confuse the chatbot.
- 'Please…' (at the start) distracts from the command.
- 'Maybe do…' / 'Could you…' invites refusal and could lead to hallucinations.

You'd be surprised how many users regularly write prompts like these in their daily chatbot use, particularly users who aren't quite sure what chatbots are capable of yet. I'm guilty of it myself when I try a new chatbot or am in a hurry. But it adds fluff without value.

For better results, use short, direct prompts that tell the chatbot exactly what you want. Some examples of solid prompts:
- Write a headline for this article
- Summarize this report in 3 bullet points
- Draft an email to my boss about project delays
- List 5 ideas for summer side dishes
- Explain this concept in simple terms

Skip soft openers and get straight to the action. The more direct your prompt, the clearer and more useful your AI results will be. When you treat your chatbot like a skilled and able assistant (not a hesitant intern on their first day), you'll get confident, precise outputs that accomplish exactly, or at least far closer to, what you hoped. By starting prompts with verbs and ditching 'can' and similar polite fillers, you'll see better results.
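To make the advice concrete, here is a small, purely illustrative Python sketch. The function name `directify` and its list of soft openers are my own invention, not part of any chatbot API; the snippet just does simple string surgery to turn a 'Can you…' prompt into a verb-led command of the kind recommended above.

```python
def directify(prompt: str) -> str:
    """Turn a prompt that opens with a soft phrase into a direct command.

    Hypothetical helper illustrating the article's advice; it performs
    plain string manipulation, not any AI processing.
    """
    soft_openers = ("can you ", "could you ", "please ", "try to ", "maybe ")
    lowered = prompt.lower()
    for opener in soft_openers:
        if lowered.startswith(opener):
            rest = prompt[len(opener):].rstrip(" ?")  # drop the trailing question mark
            return rest[0].upper() + rest[1:]         # lead with the verb
    return prompt  # already direct; leave it alone

print(directify("Can you list the current marketing trends?"))
# List the current marketing trends
```

Running the example turns the polite question from the article into the imperative form, while prompts that already lead with a strong verb pass through unchanged.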