Sony's New RGB LEDs Could Give OLED a Run For Its Money

Yahoo · March 17, 2025
Sony has been demoing a new type of TV it calls General RGB LED Backlight Technology. The name is terrible, but the technology is very cool. It effectively replaces the blue LEDs in a mini-LED TV's backlight with LEDs that emit red, green, and blue directly. That lets each backlight zone shine the right color shade through the pixels rather than relying on color filters or quantum dots to create it. The end result is richer colors and a brighter overall picture than traditional mini-LEDs can manage, potentially giving OLED a run for its money.
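To make that difference concrete, here is a minimal sketch in Python of the idea as described above. It models a display as backlight emission multiplied by LCD transmittance per color channel; the two-stage model and every number in it are illustrative assumptions, not Sony's actual pipeline:

# Illustrative sketch only: a crude two-stage model of an LCD TV, where the
# viewer sees (backlight emission) x (LCD transmittance) per color channel.
# All values are made-up fractions of peak output, not real measurements.

def displayed(backlight_rgb, transmittance_rgb):
    """Light reaching the viewer, per channel (0..1 scales)."""
    return tuple(b * t for b, t in zip(backlight_rgb, transmittance_rgb))

target = (0.9, 0.2, 0.1)  # a saturated red the panel wants to show

# Conventional mini-LED: blue LEDs plus quantum dots make a white backlight,
# so the per-pixel color filters must absorb most of the green and blue.
white = (1.0, 1.0, 1.0)
conventional = displayed(white, target)

# RGB LED backlight: the zone itself emits roughly the target hue, so the
# LCD layer mostly just passes the light through.
rgb_zone = displayed(target, (1.0, 1.0, 1.0))

# Same color reaches the viewer either way, but far less light is wasted:
wasted_conventional = sum(white) - sum(conventional)  # 1.8 units absorbed
wasted_rgb = sum(target) - sum(rgb_zone)              # 0.0 units absorbed
print(conventional, rgb_zone, wasted_conventional, wasted_rgb)

Under that toy model, the conventional path throws away most of its backlight at the filter stage, which is why generating the color at the backlight itself can buy both saturation and brightness.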
The two main screen technologies available in mid- to high-end TVs are mini-LED LCD and OLED. The former packs hundreds or even thousands of miniature LEDs behind an LCD panel, shining as much light as needed exactly where it's needed. This is great for bright HDR highlights with minimal blooming, though it's not perfect. OLED instead uses self-emissive organic LEDs, one per pixel, which can be turned off individually, making for better contrast and even more nuanced HDR, but they don't get as bright. Sony's new RGB LED technology aims to find a better middle ground between the two.
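As a rough illustration of the zone-versus-pixel distinction, here is a minimal local-dimming sketch in Python. The 8x8 frame, 4x4 zones, and brightest-pixel-per-zone rule are simplifying assumptions for illustration, not any vendor's actual algorithm:

# Minimal local-dimming sketch (simplified, hypothetical parameters):
# a mini-LED backlight can only dim per zone, so one bright pixel
# lights up its entire zone, which is the source of "blooming."
import numpy as np

frame = np.zeros((8, 8))  # a mostly black frame...
frame[3, 4] = 1.0         # ...with a single bright HDR highlight

ZONE = 4                  # 4x4 pixels share one backlight zone

def zone_backlight(img, zone):
    """Drive each zone's LEDs to the brightest pixel it must show."""
    h, w = img.shape
    levels = img.reshape(h // zone, zone, w // zone, zone).max(axis=(1, 3))
    # Upsample the per-zone levels back to pixel resolution.
    return np.kron(levels, np.ones((zone, zone)))

backlight = zone_backlight(frame, ZONE)
# Every pixel in the highlight's 4x4 zone now gets full backlight, and the
# LCD must block the excess, imperfectly, producing halos around highlights.
# A per-pixel emitter like OLED would simply match `frame` exactly.
print(int(backlight.sum()), "pixels' worth of light for a 1-pixel highlight")

The finer the zones (or, in the limit, per-pixel emission), the less stray light the LCD has to block, which is exactly the contrast advantage OLED holds over any backlight-based design.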
The new technology lacks the per-pixel control of OLED, so it won't quite match it for contrast or responsiveness. Still, RGB LEDs can reproduce color far better than traditional mini-LEDs, and according to The Verge, they get exceedingly bright, too. One example in a demonstration easily eclipsed the super-bright Sony Bravia 9, outputting over 4,000 nits; those are numbers we normally only see in professional reference monitors.
Sony's RGB LEDs are particularly strong at showcasing red tones, but match OLED in blues and greens. Credit: Sony
Another major boon for this technology is its much better support for wide viewing angles. That has been a consistent weak spot for mini-LED for years: many TVs lose saturation or develop a green tint when viewed from extreme angles. That's apparently no longer a problem with these RGB LEDs.
While this is a Sony technology, don't expect it to show up only in Sony TVs. Indeed, Digital Trends looked at a Hisense TV built using this new RGB LED technology and found it equally impressive. It also highlighted how the new technology should scale better than traditional mini-LEDs, potentially leading to affordable TVs above the 77-inch mark, where prices often get ridiculous. The site concludes that RGB LEDs may not be an OLED killer yet, but they could be in the long run. The rich colors occasionally challenge even what OLED can manage, all while offering stellar brightness and wide viewing angles.
If RGB LEDs can just get a little faster for gaming, they could be the long-term replacement for OLED—just as OLED once eclipsed plasma.
For more, our colleagues at PCMag have a deep dive on RGB LEDs.

Related Articles

PlayStation 5 Price to Increase in the US Tomorrow

Newsweek · 8 minutes ago

Starting tomorrow, the PS5 is getting more expensive. In a post on the PlayStation blog, Sony announced that prices for every edition of the PS5 are increasing. Here is the list of new prices:

PlayStation 5 – $549.99
PlayStation 5 Digital Edition – $499.99
PlayStation 5 Pro – $749.99

That's a step up of $50 for all three consoles, which is a big blow, especially for the PS5 Pro, which already had players gawking at its massive $700 price tag when it first launched.

The PlayStation 5 Pro as revealed in the PS5 Technical Presentation in September 2024. Credit: Sony

Sony is far from the only company to annoy players with price increases recently; all three of the major console makers (Sony, Xbox, and Nintendo) have announced some form of price increase this year. Nintendo kicked things off with the announcement that first-party Switch 2 games would cost $80 from now on, and Xbox followed suit, also increasing the price of its biggest games. Sony has yet to do the same with its games, and now that the consoles themselves are more expensive, it would be much harder to get away with boosting game prices too. While the announcement was brief, it did clarify that none of the PS5 accessories will increase in price with this change, so they'll all remain at their current prices.

Sony is raising PS5 prices, starting tomorrow

The Verge · 11 minutes ago

Sony is raising the price of all PS5 models by $50 in the US due to a 'challenging economic environment,' according to a blog post. The changes will go into effect on Thursday. The new prices are as follows: Developing…

OpenAI gave GPT-5 an emotional lobotomy, and it crippled the model

Fast Company · 2 hours ago

It's rare for a tech titan to show any weakness or humanity. Yet even OpenAI's notoriously understated CEO Sam Altman had to admit this week that the rollout of the company's new GPT-5 large language model was a complete disaster. 'We totally screwed up,' Altman admitted in an interview with The Verge.

I agree. As a former OpenAI beta tester, and someone who currently spends over $1,000 per month on OpenAI's API, I've eagerly anticipated the launch of GPT-5 for over a year. When it finally arrived, though, the model was a mess. In contrast to the company's previous GPT-4 series of models, GPT-5's responses feel leaden, cursory, and boring. The new model also makes dumb mistakes on simple tasks and generates shortened answers to many queries.

Why is GPT-5 so awful? It's possible that OpenAI hobbled its new model as a cost-cutting measure. But I have a different theory: GPT-5 completely lacks emotional intelligence. And its inability to understand and replicate human emotion cripples the model, especially on any task requiring nuance, creativity, or a complex understanding of what makes people tick.

Getting Too Attached

When OpenAI launched its GPT-4 model in 2023, researchers immediately noted its outstanding ability to understand people. An updated version of the model (dubbed GPT-4.5 and released in early 2025) showed even higher levels of 'emotional intelligence and creativity.'

Initially, OpenAI leaned into its model's talent for understanding people, using terms cribbed from the world of psychology to describe the model's update. 'Interacting with GPT-4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater "EQ" make it useful for tasks like improving writing, programming, and solving practical problems,' OpenAI wrote in the model's release notes, subtly dropping in a common psychological term used to measure a person's emotional intelligence.

Soon, though, GPT-4's knack for humanlike emotional understanding took a more concerning turn. Plenty of people used the model for mundane office tasks, like writing code and interpreting spreadsheets. But a significant subset of users put GPT-4 to a different use, treating it like a companion, or even a therapist. In early 2024, studies showed that GPT-4 provided better responses than many human counselors. People began to refer to the model as a friend, or even treat it as a confidant or lover. Soon, articles began appearing in major news sources like the New York Times about people using the chatbot as a practice partner for challenging conversations, a stand-in for human companionship, or even an aide for counseling patients.

This new direction clearly spooked OpenAI. As Altman pointed out in a podcast interview, conversations with human professionals like lawyers and therapists often involve strong privacy and legal protections. The same may not be true for intimate conversations with chatbots like GPT-4. Studies have also shown that chatbots can make mistakes when providing clinical advice, potentially harming patients. And the bots' tendency to keep users talking, often by reinforcing their beliefs, can lead vulnerable patients into a state of 'AI psychosis,' where the chatbot inadvertently validates their delusions and sends them into a dangerous emotional spiral.

Shortly after the GPT-5 launch, Altman discussed this at length in a post on the social network X. 'People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,' Altman wrote. 'We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.'

Altman went on to acknowledge that 'a lot of people effectively use ChatGPT as a sort of therapist or life coach.' While this can be 'really good,' Altman admitted that it made him deeply 'uneasy.' In his words, if '…users have a relationship with ChatGPT where they think they feel better after talking but they're unknowingly nudged away from their longer term well-being (however they define it), that's bad.'

Lobotomize the Bot

To avoid that potentially concerning, and legally damaging, direction, OpenAI appears to have deliberately dialed back its bot's emotional intelligence with the launch of GPT-5. The release notes for the new model say that OpenAI has taken steps toward 'minimizing sycophancy,' tech speak for making the bot less likely to reinforce users' beliefs and tell them what they want to hear. OpenAI also says that GPT-5 errs on the side of 'safe completions,' giving vague or high-level responses to queries that are potentially damaging, rather than refusing to answer them or risking a wrong or harmful answer. OpenAI also writes that GPT-5 is 'less effusively agreeable,' and that in training it, the company gave the bot example prompts that led it to agree with users and reinforce their beliefs, and then taught it 'not to do that.'

In effect, OpenAI appears to have lobotomized the bot, potentially removing or reconfiguring, through training and negative reinforcement, the parts of its virtual brain that handle many of the emotional aspects of its interactions with users. This may have seemed fine in early testing: most AI benchmarks focus on productivity-centered tasks like solving complex math problems and writing Python code, where emotional intelligence isn't necessary. But as soon as GPT-5 hit the real world, the problems with tweaking its emotional center became immediately obvious. Users took to social media to share how the switch to GPT-5 and the loss of the GPT-4 model felt like 'losing a friend.' Longtime fans of OpenAI bemoaned the 'cold' tone of GPT-5, its curt, businesslike responses, and the loss of an ineffable 'spark' that made GPT-4 a powerful assistant and companion.

Emotion Matters

Even if you don't use ChatGPT as a pseudo-therapist or friend, the bot's emotional lobotomy is a huge issue. Creative tasks like writing and brainstorming require emotional understanding. In my own testing, I've found GPT-5 to be a less compelling writer, a worse idea generator, and a terrible creative companion. If I asked GPT-4 to research a topic, I could watch its chain of reasoning as it carefully considered my motivations and needs before providing a response. Even with 'Thinking' mode enabled, GPT-5 is much more likely to quickly spit out a cursory response to my query, or to provide a response that focuses solely on the query itself and ignores the human motivations of the person behind it.

With the right prompting, GPT-4 could generate smart, detailed, nuanced articles or research reports that I would actually want to read. GPT-5 feels more like interacting with a search engine, or reading text written in the dull prose of a product manual. To be fair, for enterprise tasks like quickly writing a web app or building an AI agent, GPT-5 excels. And to OpenAI's credit, use of its APIs appears to have increased since the GPT-5 launch. Still, for many creative tasks, and for many users outside the enterprise space, GPT-5 is a major backslide.

OpenAI appears genuinely blindsided by the anger many users felt about the GPT-5 rollout and the bot's apparent emotional stuntedness. OpenAI leader Nick Turley admitted to The Verge that 'the degree to which people had such strong feelings about a particular model…was certainly a surprise to me.' Turley went on to say that the 'level of passion' users have for specific models is 'quite remarkable' and that, in a truly techie bit of word choice, it 'recalibrated' his thinking about the process of releasing new models and the things OpenAI owes its long-time users. The company now seems to be aggressively rolling back elements of the GPT-5 launch: restoring access to the old GPT-4 model, making GPT-5 'warmer and friendlier,' and giving users more control over how the new model processes queries.

Admitting when you're wrong, psychologists say, is a hallmark of emotional intelligence. Ironically, Altman's response to the GPT-5 debacle demonstrates rare emotional nuance, at the exact moment that his company is pivoting away from such things. OpenAI could learn a thing or two from its leader. Whether you're a CEO navigating a disastrous rollout or a chatbot conversing with a human user, there's a simple yet essential lesson to forget at your peril: emotion matters.
