
You Asked: ULED vs QLED explained, plus AI videos with sound are here
Apple TV 4K vs Panasonic Smart TV Upscaling
@_Jiggle asks: If I were to get a Panasonic Z95A with its incredible upscaling, but I don't like its operating system, Fire TV, and instead get the Apple TV 4K, is the Apple TV's upscaling any good, or should I stick with the TV's built-in Fire TV? Does the upscaling change in any way if I use an Apple TV instead of Panasonic's incredible upscaling?
There are layers to this one, but I'll try to answer the question in parts and keep it simple. As always, I encourage viewers to weigh in on these questions too—especially if these are issues that you've faced and can help with.
So, Jiggle, the first thing to know is the Apple TV 4K is going to upscale the content to whatever you have set in the format section of the settings. If you have it set to 4K, it's going to take whatever you're watching, upscale it to 4K, and then send that to your Panasonic TV. The good news is, yes, the Apple TV 4K upscaling is pretty good. And if you're that bothered by Fire TV, it's a solid option.
I haven't tested how good the Apple TV upscaling is versus using the apps built into the TV, but if it's well-produced content—which most current movies and shows on the big streaming services are—you probably won't be able to tell the difference.
That said, if you go that route, be sure to go into your Apple TV 4K settings and, under Match Content, turn on Match Dynamic Range and Match Frame Rate. This ensures the Apple device won't upscale SDR content into HDR and give you a weird, fake-HDR-looking image. It keeps things true to how the content was created.
It's not recommended, but if you wanted your TV to do more of the upscaling to 4K, you could set the output on your Apple TV 4K to match the content. So, for example, set the output to 1080p if you're watching 1080p content and then let the TV take it up to 4K. But that feels like too much work for minimal benefit. And there's a chance you may be doing more harm than good by limiting the Apple TV's upscaling.
ULED vs QLED: What's the Difference?
@phalisatumblin1249 asks: What is the difference between ULED and QLED?
Great question—and one that, though we've probably answered it before, deserves an explanation every now and then. At its simplest, the difference between ULED and QLED is… mostly marketing.
QLED—Quantum Dot Light Emitting Diode—is a type of LED-backlit LCD panel that uses a layer of quantum dots to enhance color and contrast.
And here's where marketing and a bit of confusing tech come in. If you let Hisense define ULED (since it's their proprietary technology), it's described as a panel equipped with Ultra Local Dimming, Ultra Wide Color Gamut, Ultra 4K Resolution, and Ultra Smooth Rate.
Given that Hisense bills it as '20 picture patents working together to optimize backlight, motion and color data for the best viewing experience,' yes, it's a step up from your average LED-backlit TV.
Where things get confusing is that there's not really a clear, super-distinguishable difference like there is between other TV types. There's not a specific piece of hardware—like a unique panel type or backlight—that definitively qualifies a TV as being ULED. It's just Hisense's branding to set themselves apart.
If you're in the market for a TV and see QLED and ULED come up, dig into multiple reviews from trusted sources to ensure you're getting accurate information about the technology used.
TCL QM6K vs Sony X90L
Nikhil Subash asks: Recently, I was interested in the TCL C6K/QM6K series based on your recommendations. However, during a visit to a local store (here in Dubai), the model wasn't yet available. Instead, the salesperson strongly recommended the Sony Bravia X90L, praising its color accuracy and picture quality. While I've owned a Bravia before (which unfortunately developed issues after 3 years, with costly repairs), I'm hesitant due to the high price.
The salesperson also raised concerns about TCL and Hisense, particularly regarding high brightness and potential eye strain for children. As a parent of a 3-year-old who enjoys watching YouTube, this gave me pause.
Which model would be the better choice for 2025 considering durability, eye comfort, and value? Are there any upcoming releases in the next two months worth waiting for?
Thanks, Nikhil. To address the eye comfort issue—first, I am not a doctor. That is clear. However, I have a degree in journalism and spent more than 10 years reporting. I know how to do research with credible sources, which tell me that it's more the amount of time spent in front of the TV than the TV picture itself that can cause eye strain.
Though none of us who spend long amounts of time in front of screens for work follow this advice, it's recommended to take 15-minute breaks every two hours. Take your eyes off the screen. Focus on something else in the distance. So do with that what you will in terms of eye comfort for you and your three-year-old.
I'll also note that in the tests we did on this channel—results you can see in each of these TVs' reviews—the Sony X90L's peak brightness is about the same as, and often higher than, the TCL QM6K's. I wouldn't recommend maxing out the brightness on either if eye comfort is a concern. In SDR, peak brightness in a 10 percent window was just shy of 600 nits on the Sony and around 650 nits on the TCL. In HDR, the Sony hits 1,600 nits in smaller windows and 800 nits with a full-screen white field. The TCL returned 750 nits in a 25 percent window, and its full-screen brightness would be lower still. So I wouldn't worry about the TCL being too bright.
Finally, in terms of color, the X90L was very accurate out of the box—as you'd expect from a Sony TV. But to quote the reviewer, the TCL was one of the most color-accurate TVs tested at its price point, which, by the way, retails for $200 less than the Sony—at least here in the U.S.
Bottom line: if the TCL QM6K has your eye, you won't be disappointed, especially considering the performance for the price.
Google Veo 3: AI Video with Sound and Speech
Moving on from TVs, let's cross the pond to managing editor John McCann to answer your AI-related questions around Google Veo 3.
Google announced the latest version of its AI video generator during its I/O keynote in the middle of May. And with Veo 3, we get a major upgrade.
It's moving out of the silent age of film and into the audio era. Now it's not only able to generate eerily convincing video, it will also add sound effects, background audio, and even speech to those videos. Yes, your AI-generated moving pictures can now talk—and in a variety of accents.
Has it nailed the British one? Not quite, in the view of this particular Brit. There's still a bit more work for Google to do. However, what it is able to do is already impressive, and we've shared some examples on our social feeds, which have really got you talking.
David wants to know how to access Veo 3, while Tuhin asks if there's a cost involved.
Well, David, it's no surprise you want to try it. Veo 3 is a very interesting engine with a lot of possibilities. However, getting to use it is a little trickier. First of all, you have to be in the U.S. Veo 3 isn't available in other countries at the moment. And you'll also need a subscription to Google's AI Ultra Plan.
How much is that? $250 per month.
That is a lot of money, and it means not many of us will be rushing to try it out right away.
Eddie asks: Is this attached to Google Gemini?
Yes, it is. If you're able to spring for $250 a month, you'll be able to access Veo 3 via the Gemini app. You'll also be able to experience Flow, a filmmaking service from Google that uses both Veo 3 and image generation. You'll even be able to pull from your own image and video sources to create a film-style video, with additional controls like camera angles and editing tools.
Related Articles


Forbes
How Will You Be Found When AI Replaces Google Search?
Zac Brandenberg is Co-Founder and CEO of DRINKS, a leading AI-powered SaaS platform transforming the U.S. alcohol market.

AI platforms are reshaping how consumers discover content, products and brands—and in the process, reshaping how that content is surfaced. The brands that understand this shift now will be positioned to dominate tomorrow's discovery landscape. An analysis of millions of AI citations reveals that each platform—ChatGPT, Perplexity and Google AI Overviews—operates with fundamentally different algorithms than traditional search engines. For example, Reddit drives 47% of Perplexity citations, whereas Wikipedia commands 48% of ChatGPT references.

The Authority Game Has Changed

Traditional search prioritized backlinks and domain authority. AI search prioritizes clarity, context and community validation. So, a single mention on Reddit can carry more weight in Perplexity than months of link-building campaigns. ChatGPT is often influenced by first-page results (regardless of engine) because that's what's most available. Perplexity loves community content (like the aforementioned Reddit). Google AI Overviews balances traditional authority with user-generated content from platforms like Quora and LinkedIn.

So, where do brand leaders start? Most e-commerce leaders don't know where their brands appear in AI search results. Start with a simple audit across all major platforms: ChatGPT, Perplexity and Google's AI Overviews. Document which sources get cited and how your brand is positioned. Pay attention to citation patterns. If competitors dominate Wikipedia mentions, you need a presence there. If Reddit discussions drive industry conversations, deploy community managers authentically.

Each AI platform rewards different content strategies; generic optimization won't work.

• For ChatGPT success: Focus on neutral, reference-style content. Build a comprehensive Wikipedia presence. Get featured in established business publications like Forbes and TechCrunch. When ChatGPT searches the web, it favors Bing results, so optimize there too. Note that ChatGPT uses Bing as a gateway, but its output is more curated than a basic list of search results.

• For Perplexity visibility: Create short-form video content and engage authentically in relevant Reddit communities. Perplexity values Yahoo and MarketWatch, making expert commentary on those platforms valuable.

• For Google AI Overviews: Develop thought leadership on LinkedIn and engage strategically with Quora discussions. Balance authoritative content with community insights.

Wineries and alcoholic beverage producers face the challenges of most industries, often with increased intensity, due to a hyper-competitive traditional search environment. Low discoverability, hyper-competition with distribution channels and massive competitive saturation in their categories make winning the search war challenging. As a result, moving now to address the AI-search opportunity is even more important. Successful wineries now post harvest updates on r/wine and create YouTube Shorts showing vineyard operations. These authentic community contributions get cited when users search "sustainable winemaking 2025." Lifestyle brands selling wine can engage in broader communities like r/fashion or r/gifting, reaching audiences that traditional wineries never access. A clothing brand discussing "wine and style pairings" captures citations in lifestyle search queries.

AI systems extract information differently than human readers do. Complex, creative language that performs well in traditional content marketing can hurt AI visibility. Structure content for immediate comprehension: use clear statements, numbered lists and quotable information. AI platforms need content they can easily parse and excerpt. We restructured our product descriptions and case studies using this approach. Instead of creative wine metaphors, we use clear, factual statements about our technology capabilities. This improved both AI citations and customer understanding. For a winery, this might mean replacing a poetic description like "sun-kissed grapes dancing in morning dew" with "estate-grown Pinot Noir, 14.2% alcohol, aged 18 months in French oak barrels." AI platforms can extract and cite specific details.

The Zero-Click Future

AI-generated summaries are creating more zero-click searches, where users get answers without visiting websites. This doesn't eliminate the need for visibility—it makes citation more valuable than clicks. Being cited establishes authority even when users don't visit your site. So, marketing budgets should shift from link-building to community engagement and structured content creation. Invest in Reddit community management, Wikipedia presence and platform-specific content optimization. Double down on short-form video.

The convergence is happening fast. Traditional search engines are integrating AI features while pure AI platforms gain market share. A Semrush study projects that AI search traffic will exceed traditional search engine usage by 2028—a significant shift for industries like technology, where discovery drives business development. The brands that adapt quickly to these new discovery rules will be the winners in an AI-first search world.
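The point about machine-extractable writing can be illustrated with a toy extractor. This is a minimal sketch (the product strings, field names, and regex patterns are invented for the example, not any platform's actual parser) showing why the factual phrasing yields citable attributes while the poetic one yields nothing:

```python
import re

poetic = "Sun-kissed grapes dancing in morning dew."
factual = "Estate-grown Pinot Noir, 14.2% alcohol, aged 18 months in French oak barrels."

def extract_facts(text):
    """Pull simple, citable attributes out of a product description."""
    facts = {}
    abv = re.search(r"(\d+(?:\.\d+)?)\s*%\s*alcohol", text)
    if abv:
        facts["abv"] = float(abv.group(1))
    aging = re.search(r"aged\s+(\d+)\s+months", text)
    if aging:
        facts["aging_months"] = int(aging.group(1))
    varietal = re.search(r"(Pinot Noir|Chardonnay|Cabernet Sauvignon)", text)
    if varietal:
        facts["varietal"] = varietal.group(1)
    return facts

print(extract_facts(poetic))   # {} — nothing machine-extractable
print(extract_facts(factual))  # {'abv': 14.2, 'aging_months': 18, 'varietal': 'Pinot Noir'}
```

The same content, written factually, survives extraction; written figuratively, it disappears.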


Harvard Business Review
How AI Can Help Tackle Collective Decision-Making
Collective decision-making is hardly a perfect science. Broken processes, data overload, information asymmetry, and other inequities only compound the challenges that come from large, disparate factions with different goals trying to work together. And the tools that often help with decision-making—data analysis, scenario planning, decision trees, and so on—can falter in the face of the scale and complexity of the biggest problems that groups and leaders face.

This is where AI can help, and is helping. With its ability to analyze vast troves of data about the status quo, understand group preferences, run sophisticated simulations to evaluate hundreds of future possibilities against those preferences, and facilitate consensus-building among participants, AI can be a powerful tool for all leaders facing complex decisions, especially those that must be made collaboratively.

One field already taking advantage of AI's collective decision-making support is city planning. Three years ago, we started working with the United States Conference of Mayors to understand how AI is helping cities solve their most pressing challenges. Along the way, we studied the story of the German city of Hamburg, which has addressed a housing crisis exacerbated by an influx of refugees. In 2016, Hamburg partnered with the MIT Media Lab, the creator of an AI platform called CityScope. The platform allows urban planners to collect and digest the needs and preferences of swaths of residents, simulate hundreds of building scenarios, identify hidden opportunities, and find common ground among conflicting factions.

By demonstrating how CityScope is working in Hamburg, we hope to show leaders across governments, nonprofits, universities, and corporations how they can harness data and AI to democratize and improve their decision-making processes and outcomes.
The Crisis in Hamburg

In 2016, Germany decided to welcome 1 million refugees from the Middle East, and Hamburg was tasked with finding housing for an anticipated 80,000 families in a city of under 2 million people. At the time, the city had been stuck in unproductive conversations about zoning laws for decades, struggling to build enough houses for its own residents. Three challenges tend to come to the fore in situations of collective decision-making, and Hamburg was no exception:

Processes and incentives are broken. The traditional process to get things done (in this case, to get new housing built) involves dozens of steps and institutions, each with its own procedural logic and internal culture. A single technical impropriety can delay a project by months, if not years. Stakeholders have no incentive to come together. In Hamburg, the city struggled to get anything built because each project required the approval of a dozen bureaucracies, often sclerotic and opaque.

Information is increasingly abundant and isn't equally distributed. Every decision (in this case, about specific developments or zoning laws) involves vast amounts of information across domains—from resident preferences and technical documents to traffic and usage metrics. Furthermore, processes are often expressed in long, detailed, technical documents that the average person cannot be expected to understand. Those with more resources have the time, money, and expertise to get the information they need to form an educated opinion while other community members do not.

Other inequities. Dominant players possess a wide array of tools to block transformations. In this case, property owners can stop urban developments with historic preservation regulations, minimum lot size requirements, height restrictions, etc. In Hamburg, the only people who participated in decision-making about housing and zoning were wealthier, older homeowners.
Successive mayors had launched a few outreach campaigns to get other parts of the community engaged in zoning debates, but none had gained traction.

How CityScope Helped

AI can help to solve these challenges and improve the way that groups make decisions together. Ariel Noyman, one of the key engineers behind CityScope, told us that his team designed the platform to fulfill four key functions:

• Insight: Building a dynamic model of social, economic, and environmental conditions through comprehensive data collection, an environmental scan, transaction analytics, and sentiment analysis, with visualization and feedback.

• Prediction: Identifying needs and simulating the impact of alternative interventions by evaluating thousands of 'what-if' scenarios.

• Transformation: Iterating possible interventions into validated paths of action.

• Consensus: Engaging stakeholders in a shared, facilitated decision-making process to reach a unified vision of the future.

Here's how these functions played out in the process of working with residents and planners in Hamburg to move the needle forward. The first step involved gathering as much data as possible. CityScope drew on data about housing and zoning laws, but also economic development, purchasing patterns, city-wide events and amenities, transportation and infrastructure, employment opportunities, demographic diversity, environmental impact, safety, and more. It also administered surveys to residents to gather their preferences.

To deliver the first function, insight, the platform then correlated the core dimensions of housing—density and diversity of people—with performance indicators such as energy use, safety, resident preferences, and so on. With that data, the platform analyzed the relationship between housing and quality of life in the status quo. Then the platform went on to make predictions about current trends and hypothetical transformations, identifying constraints that might be valuable to alter along the way.
CityScope highlighted the systematic underuse of commercial properties, for instance. It also highlighted the areas where public services were most likely to be strained and those with the most capacity to welcome new residents.

Once the analysis was done, CityScope displayed the results in a simple, easy-to-understand diagram to help citizens and other participants easily see how potential changes would affect key performance indicators that reflect the common priorities of the city's residents. The labels on the vertical bars indicate that the community cared about environmental impact, energy performance, infrastructure, innovation, and overall livability (each of these indicators aggregated hundreds of metrics). The height of the fill of each bar indicates the city's current performance on each priority (higher is better), and the horizontal lines on each bar indicate the performance for each of these priorities in a given future scenario. The bars with red fill indicate areas that would change for the better; those in green show the city has already met the targets.

Through this chart, CityScope helps residents understand their situation (insight again), extrapolate current trends (prediction again), evaluate possible alternatives (transformation), and find common ground (consensus). In Figure 2, these metrics are aggregated by street into a 3D map of the city. Positive changes are in green, negative ones in red, and alternatives can be modeled at the scale of the individual street, the neighborhood, or the city as a whole.

In Hamburg, these representations allowed users to understand trade-offs and find ways to overcome them. For example, the visualization made starkly clear the differences between the preferences of affluent communities, where proposals that threatened lower-density zoning received the worst ratings, and the city as a whole, where building more houses in underdeveloped areas to welcome refugees scored well.
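The priority-weighted comparison described above can be sketched in miniature. This is purely illustrative, not CityScope's actual model: the priorities, weights, scenario names, and performance numbers below are all invented for the example.

```python
# Toy illustration of priority-weighted scenario scoring, loosely inspired by
# the CityScope comparison chart described above. All numbers are invented.

PRIORITIES = {  # community priority -> relative weight (sums to 1.0)
    "environmental_impact": 0.25,
    "energy_performance": 0.20,
    "infrastructure": 0.20,
    "innovation": 0.15,
    "livability": 0.20,
}

# Hypothetical 0-1 performance of each candidate scenario on each priority.
SCENARIOS = {
    "status_quo":     {"environmental_impact": 0.50, "energy_performance": 0.60,
                       "infrastructure": 0.70, "innovation": 0.40, "livability": 0.60},
    "densify_center": {"environmental_impact": 0.60, "energy_performance": 0.70,
                       "infrastructure": 0.50, "innovation": 0.70, "livability": 0.65},
    "new_metro_line": {"environmental_impact": 0.70, "energy_performance": 0.75,
                       "infrastructure": 0.80, "innovation": 0.60, "livability": 0.70},
}

def score(perf):
    """Weighted sum of a scenario's performance across all priorities."""
    return sum(PRIORITIES[p] * perf[p] for p in PRIORITIES)

# Rank scenarios from best to worst overall fit with community priorities.
ranking = sorted(SCENARIOS, key=lambda name: score(SCENARIOS[name]), reverse=True)
for name in ranking:
    print(f"{name}: {score(SCENARIOS[name]):.3f}")
```

A real platform would derive the weights from resident surveys and compute the performance values from simulation, but the final comparison reduces to the same kind of weighted ranking.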
CityScope then helped the residents find the best way to accommodate these conflicting preferences. By evaluating competing proposals, it demonstrated that wealthier neighborhoods could benefit from more houses provided that a new metro line were also built in the process, thereby paving the way to consensus.

The visualizations also allow CityScope to yet again collect people's preferences, this time on the trade-offs and competing proposals. In Hamburg, the team administered surveys and organized workshops across the city with an augmented reality (AR) version of the platform (see Figure 3). The AR interface allowed participants to collaborate with CityScope to see the implications of their choices. Hamburg residents would come into the room and rearrange the LEGO-like bricks representing residential units, office buildings, parks, and other urban amenities in a specific zone, redesigning the city one brick at a time. When participants made these changes, the digital projection updated in real time to show how the proposed changes would affect quality of life. The platform also connected these changes with the zoning laws that would make them possible, bridging the gap between the LEGO game and policymaking.

The interface also allowed participants to collaborate with each other in workshops across the city. That way, CityScope became a platform of direct community engagement, where technical and non-technical people, with different levels of understanding, gathered around the table to understand the impact that their common decisions would have on their city (the consensus function again). In Hamburg, 5,000 residents participated in CityScope workshops in 2016, considerably more than at any conventional town hall.
What's more, by targeting diverse communities across the city, the CityScope team managed to attract a representative sample of the population, rather than just the older, wealthier citizens who have historically participated in the city's urban planning.

Making Decision-Making More Democratic

Through its use of AI, CityScope addresses the key challenges we identified in city planning, and in group decision-making more generally.

First, it circumvents slow bureaucratic processes. By aggregating all the relevant data into a dynamic model, CityScope analyzes trade-offs better than city officials ever could on their own because it integrates all perspectives and tests thousands of alternatives. The result is a considerably streamlined process, and also one that takes more perspectives into account more accurately.

Second, it solves the problem of information overload and asymmetry. By taking in and processing vast troves of data and then providing clear visuals and methods for interacting with and sharing them, CityScope removes the informational barriers that favor those with more resources over those who lack money, time, or expertise, giving anyone the opportunity to understand and propose changes.

Third, CityScope enables the full community to find a path to consensus, not just the elite. Residents may not agree on this or that housing project, but they can find common ground around shared priorities. By shifting the focus of deliberation from specific projects or laws to broader priorities for the city, CityScope reframes the discussion towards the bigger picture. Larson calls the platform a 'consensus machine' for a reason.

Eighteen months after the partnership with CityScope began, Hamburg had not just housed thousands of refugees: It had strategically distributed them across the city to maximize social cohesion, economic opportunity, and community resilience.
Since then, the city has been integrating CityScope into its decision-making processes more broadly, for transportation, energy use, and environmental regulation. When Russia invaded Ukraine in 2022, Hamburg had the tools to welcome tens of thousands of refugees in a fraction of the usual time. And the United Nations now funds a project exporting CityScope to other cities that face an unexpected influx of refugees.

Beyond CityScope

Humans are not especially good at processing immense amounts of information and translating it into policy. They struggle to understand complexity and, left to their own devices, seldom find common ground on contentious issues. CityScope shows that AI can help.

However, by themselves, platforms like CityScope cannot solve contentious problems. Most group decisions remain inescapably prone to conflict, and while AI can help us understand and navigate trade-offs, it cannot make trade-offs disappear. No algorithm, however sophisticated, can replace a culture of healthy disagreement and mutual respect. In Hamburg, it was the residents and their leaders who made this conversation productive, not just the platform itself.

Further, these tools only matter if they are integrated into the right ecosystem. In Hamburg, the city could not take advantage of CityScope without also reforming bureaucratic hurdles that prevented certain spaces from being repurposed or built upon. CityScope helped identify and prioritize those reforms, but without them, the rest of its functions would have been futile. While AI can streamline processes and help us make better decisions together, only with the right leadership can these tools translate into collective action.

Nevertheless, tools that combine sophisticated simulations with direct engagement can change how we make decisions in all kinds of institutions—cities, but also corporations, universities, and nonprofits.
Platforms like AnyLogic, FlexSim, and Visual Components are already developing similar tools for corporations, a trend that is likely to accelerate in the years to come. Far from a substitute for human decision-making, these platforms will offer a powerful complement to it: a way to harness data at the service of common goals.

Associated Press
YouTube to begin testing a new AI-powered age verification system in the U.S.
YouTube on Wednesday will begin testing a new age-verification system in the U.S. that relies on artificial intelligence to differentiate between adults and minors based on the kinds of videos they have been watching.

The tests initially will affect only a sliver of YouTube's audience in the U.S., but the system will likely become more pervasive if it works as well at guessing viewers' ages as it does in other parts of the world. The system will only work when viewers are logged into their accounts, and it will make its age assessments regardless of the birth date a user might have entered upon signing up.

If the system flags a logged-in viewer as being under 18, YouTube will impose the normal controls and restrictions the site already uses to prevent minors from watching videos and engaging in other behavior deemed inappropriate for that age. The safeguards include reminders to take a break from the screen, privacy warnings and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn't show ads tailored to individual tastes if a viewer is under 18. If the system has inaccurately flagged a viewer as a minor, the mistake can be corrected by showing YouTube a government-issued identification card, a credit card or a selfie.

'YouTube was one of the first platforms to offer experiences designed specifically for young people, and we're proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,' James Beser, the video service's director of product management, wrote in a blog post about the age-verification system.

People will still be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content without proof of age.
The political pressure has been building on websites to do a better job of verifying ages to shield children from inappropriate content since late June when the U.S. Supreme Court upheld a Texas law aimed at preventing minors from watching pornography online. While some services, such as YouTube, have been stepping up their efforts to verify users' ages, others have contended that the responsibility should primarily fall upon the two main smartphone app stores run by Apple and Google — a position that those two technology powerhouses have resisted. Some digital rights groups, such as the Electronic Frontier Foundation and the Center for Democracy & Technology, have raised concerns that age verification could infringe on personal privacy and violate First Amendment protections on free speech.