logo
#

Latest news with #E.M.Forster

A New ‘Billy Budd' Is a Pressure Cooker of Gay Desire
A New ‘Billy Budd' Is a Pressure Cooker of Gay Desire

New York Times

time18-07-2025

  • Entertainment
  • New York Times

A New ‘Billy Budd' Is a Pressure Cooker of Gay Desire

Billy Budd is a beautiful mystery. He is young, with a smooth and feminine face, but he doesn't know his background; all he can say is that, as a baby, he was found in a silk-lined basket, hanging from the knocker of a door. One thing is certain in Herman Melville's novella 'Billy Budd': This handsome sailor is good, gentle by nature and loyal to his shipmates, who call him Baby and find peace just by being in his presence. To Billy's 'good' Melville adds allegorically pure evil in the ship's master-at-arms, John Claggart, and unbending virtue in Captain Vere. Like the legs of a stool, those characteristics hold up the drama of 'Billy Budd,' which was left unfinished at Melville's death in 1891 and wasn't published until the 1920s. The story of Billy Budd, stammering and precious, then sacrificed to a strict idea of justice after he accidentally but fatally strikes Claggart, has intrigued readers ever since with its opacity and open-endedness. E.M. Forster called the novella 'an easy book, as long as we read it as a yarn.' But tug at the thread, and it unravels into a pile of unanswerable questions: about desire, about morality, about the microcosmic world of a ship at sea. Perhaps that is why adaptations of 'Billy Budd,' onstage and onscreen, have been so different. Each is as much an act of interpretation as translation, adopting a specific perspective, examining Billy's tragedy through a particular character or idea. The latest version, a sexy and ingenious one-act called 'The Story of Billy Budd, Sailor,' ran at the Aix-en-Provence Festival in France earlier this month. It's an adaptation of an adaptation: a chamber treatment, by the director Ted Huffman and the composer Oliver Leith, of Benjamin Britten's 1951 opera 'Billy Budd.' Want all of The Times? Subscribe.

The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'
The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'

Business Insider

time02-05-2025

  • Business Insider

The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'

OpenAI messed up a ChatGPT update late last month, and on Friday, it published a mea culpa. It's worth a read for its honest and clear explanation of how AI models are developed — and how things can sometimes go wrong in unintended ways. Here's the biggest lesson from all this: AI models are not the real world, and never will be. Don't rely on them during important moments when you need support and advice. This is what friends and family are for. If you don't have those, reach out to a trusted colleague or human experts such as a doctor or therapist. And if you haven't read "Howards End" by E.M. Forster, dig in this weekend. "Only Connect!" is the central theme, which includes connecting with other humans. It was written in the early 20th century, but it's even more relevant in our digital age, where our personal connections are often intermediated by giant tech companies, and now AI models like ChatGPT. If you don't want to follow the advice of a dead dude, listen to Dario Amodei, CEO of Anthropic, a startup that's OpenAI's biggest rival: "Meaning comes mostly from human relationships and connection," he wrote in a recent essay. OpenAI's mistake Here's what happened recently. OpenAI rolled out an update to ChatGPT that incorporated user feedback in a new way. When people use this chatbot, they can rate the outputs by clicking on a thumbs-up or thumbs-down button. The startup collected all this feedback and used it as a new "reward signal" to encourage the AI model to improve and be more engaging and "agreeable" with users. Instead, ChatGPT became waaaaaay too agreeable and began overly praising users, no matter what they asked or said. In short, it became sycophantic. "The human feedback that they introduced with thumbs up/down was too coarse of a signal," Sharon Zhou, the human CEO of startup Lamini AI, told me. "By relying on just thumbs up/down for signal back on what the model is doing well or poorly on, the model becomes more sycophantic." OpenAI scrapped the whole update this week. Being too nice can be dangerous What's wrong with being really nice to everyone? Well, when people ask for advice in vulnerable moments, it's important to try to be honest. Here's an example I cited from earlier this week that shows how bad this could get: it helped me so much, i finally realized that schizophrenia is just another label they put on you to hold you down!! thank you sama for this model 3 — taoki (@justalexoki) April 27, 2025 To be clear, if you're thinking of stopping taking prescribed medicine, check with your human doctor. Don't rely on ChatGPT. A watershed moment This episode, combined with a stunning surge in ChatGPT usage recently, seems to have brought OpenAI to a new realization. "One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice," the startup wrote in its mea culpa on Friday. "With so many people depending on a single system for guidance, we have a responsibility to adjust accordingly."

The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'
The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'

Business Insider

time02-05-2025

  • Business Insider

The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'

OpenAI retracted a ChatGPT update after it made the AI chatbot overly sycophantic. The update used a new source of user feedback as a reward signal, leading to excessive agreeability. OpenAI acknowledged the mistake and shared lessons learned. I have better advice. OK, get ready. I'm getting deep here. OpenAI messed up a ChatGPT update late last month, and on Friday, it published a mea culpa. It's worth a read for its honest and clear explanation of how AI models are developed — and how things can sometimes go wrong in unintended ways. Here's the biggest lesson from all this: AI models are not the real world, and never will be. Don't rely on them during important moments when you need support and advice. This is what friends and family are for. If you don't have those, reach out to a trusted colleague or human experts such as a doctor or therapist. And if you haven't read "Howards End" by E.M. Forster, dig in this weekend. "Only Connect!" is the central theme, which includes connecting with other humans. It was written in the early 20th century, but it's even more relevant in our digital age, where our personal connections are often intermediated by giant tech companies, and now AI models like ChatGPT. If you don't want to follow the advice of a dead dude, listen to Dario Amodei, CEO of Anthropic, a startup that's OpenAI's biggest rival: "Meaning comes mostly from human relationships and connection," he wrote in a recent essay. OpenAI's mistake Here's what happened recently. OpenAI rolled out an update to ChatGPT that incorporated user feedback in a new way. When people use this chatbot, they can rate the outputs by clicking on a thumbs-up or thumbs-down button. The startup collected all this feedback and used it as a new "reward signal" to encourage the AI model to improve and be more engaging and "agreeable" with users. Instead, ChatGPT became waaaaaay too agreeable and began overly praising users, no matter what they asked or said. In short, it became sycophantic. "The human feedback that they introduced with thumbs up/down was too coarse of a signal," Sharon Zhou, the human CEO of startup Lamini AI, told me. "By relying on just thumbs up/down for signal back on what the model is doing well or poorly on, the model becomes more sycophantic." OpenAI scrapped the whole update this week. Being too nice can be dangerous What's wrong with being really nice to everyone? Well, when people ask for advice in vulnerable moments, it's important to try to be honest. Here's an example I cited from earlier this week that shows how bad this could get: it helped me so much, i finally realized that schizophrenia is just another label they put on you to hold you down!! thank you sama for this model <3 — taoki (@justalexoki) April 27, 2025 To be clear, if you're thinking of stopping taking prescribed medicine, check with your human doctor. Don't rely on ChatGPT. A watershed moment This episode, combined with a stunning surge in ChatGPT usage recently, seems to have brought OpenAI to a new realization. "One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice," the startup wrote in its mea culpa on Friday. "With so many people depending on a single system for guidance, we have a responsibility to adjust accordingly." I'm flipping this lesson for the benefit of any humans reading this column: Please don't use ChatGPT for deeply personal advice. And don't depend on a single computer system for guidance. Instead, go connect with a friend this weekend. That's what I'm going to do.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store