Samsung's Galaxy Unpacked 2025 event in 11 minutes.

The Verge · 7 days ago
Samsung Galaxy Unpacked 2025: Everything announced at the July event
Posted Jul 13, 2025 at 3:55 PM UTC


Related Articles

Think smartphone cameras have peaked? Here's what's still to come

Android Authority · 5 minutes ago

I've seen some absolutely phenomenal camera phones cross my desk this year. The extravagant Xiaomi 15 Ultra and the more mainstream OnePlus 13, in particular, have upped the ante on the iPhone, Galaxy, and Pixel triopoly, giving avid photographers more choice than ever before. While some of their best features will inevitably trickle down to more affordable price points, introducing more consumers to these superb capabilities, there's also a sense that we might finally be bumping up against the wall of progress. We can't fit bigger sensors into our phones without accepting colossal camera bumps or overly wide fields of view. Likewise, the best zoom cameras are increasingly resorting to clever crops and upscaling, given the lack of space for physically longer focal lengths, even with periscope designs. All hope is not lost, though; there are some important mobile photography innovations on the horizon worth keeping an eye on.

Better HDR than mirrorless cameras

Let's face it: smartphone sensor sizes will never reach the lofty heights of compact mirrorless or even Micro Four Thirds cameras. 1-inch sensors like Sony's LYT-900 are about as large as is feasible, and even then arguably a bit too large for an ideal focal length, since bigger sensors need more distance between sensor and lens to avoid ultrawide fields of view. The best camera phones have sat quite comfortably at around 1/1.3 inches for a while now, and could well remain there for the foreseeable future. Instead, sensor makers are turning to smarter tricks to capture better light from the same-sized sensors and pixels. Take Sony's newly announced LYT-828, a 1/1.28-inch sensor sporting brand-new Hybrid Frame-HDR (HF-HDR) technology. HF-HDR builds on the dual conversion gain (DCG) approach for dark shadows by merging short-exposure frames into the mix, allowing it to capture bright highlights as well.
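The core idea of merging a short exposure into a longer base exposure can be sketched in a few lines. This is a simplified illustration of exposure fusion in general, not Sony's actual HF-HDR pipeline; the pixel values and the 16x exposure ratio are invented for the example.

```python
def merge_hdr(long_px, short_px, exposure_ratio, clip=1.0):
    """Merge one pixel from a long and a short exposure (linear values).

    Where the long exposure has clipped, recover the highlight from the
    short exposure, rescaled to the long exposure's brightness scale.
    """
    if long_px < clip:                 # long frame kept detail: trust it
        return long_px
    return short_px * exposure_ratio   # reconstruct the clipped highlight

# A highlight that clips the long frame but not the 16x-shorter frame:
print(merge_hdr(long_px=1.0, short_px=0.25, exposure_ratio=16))  # 4.0
# A shadow pixel is taken straight from the long, low-noise frame:
print(merge_hdr(long_px=0.5, short_px=0.03, exposure_ratio=16))  # 0.5
```

Real pipelines blend the two frames smoothly near the clip point to avoid seams, but the hard switch above shows why the short frame extends headroom at the bright end.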
Sony claims this provides over 100 dB, or up to 17 stops, of dynamic range (the range between the lightest and darkest capture before clipping), resulting in fewer blown-out backgrounds and much better subject detail, even when zooming in. The LYT-828 also debuts proprietary ultra-high conversion gain (UHCG) technology to reduce random noise for improved low-light capture, which will help phones rely less on post-processing. Sony isn't the only company investing in improved HDR capabilities and light capture. OmniVision's new 1-inch OV50X sports a lateral overflow integration capacitor (LOFIC) for improved dynamic range in bright light, alongside more conventional DCG HDR technology. According to OmniVision, the OV50X 'provides close to 110 decibels (dB) single-exposure HDR.' It's a big sensor, but it could make for a real photography and videography powerhouse.

Longer zoom without the bulky size

Of course, Samsung's ISOCELL line has some new tricks up its sleeve too. Last year's 200MP ISOCELL HP9 made quite an impression when I tested the Xiaomi 15 Ultra, boasting 14-bit RAW, iDCG and Staggered HDR, and 2x or 4x in-sensor zoom. That last feature has become an important zoom trick for many of this year's best zoom cameras, and for future ones. It essentially takes the center 12.5MP from a 50MP or 200MP sensor to obtain 2x or 4x 'lossless' zoom in hardware, rather than relying on sub-pixel frames in software (à la previous Pixel models). In-sensor zoom and more compact periscopes will provide seamless zoom coverage. The trade-off is that falling back on smaller pixels reduces light capture, and the approach requires careful remosaicing to extract accurate colors, such as Samsung's full-resolution E2E AI Remosaic. Samsung's latest GNJ, Sony's LYT series, and OmniVision's OV50 family all sport similar technologies, meaning this feature is likely to become far more common across models throughout 2026.
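Two of the numbers above are easy to sanity-check. A photographic stop is a doubling of light, which in the 20·log10 amplitude convention these dynamic-range figures appear to use works out to about 6.02 dB per stop; and in-sensor zoom is a straight center crop, so pixel count falls with the square of the zoom factor.

```python
import math

def db_to_stops(db):
    # One stop doubles the signal; 20 * log10(2) ≈ 6.02 dB per stop.
    return db / (20 * math.log10(2))

def in_sensor_zoom_mp(sensor_mp, zoom):
    # An N-x center crop keeps 1/N of each linear dimension,
    # hence 1/N^2 of the pixels.
    return sensor_mp / zoom**2

print(round(db_to_stops(100), 1))  # 16.6 -> Sony's "over 100 dB, up to 17 stops"
print(round(db_to_stops(110), 1))  # 18.3 stops for OmniVision's ~110 dB claim
print(in_sensor_zoom_mp(50, 2))    # 12.5 MP at 2x from a 50 MP sensor
print(in_sensor_zoom_mp(200, 4))   # 12.5 MP at 4x from a 200 MP sensor
```

Both crop results land on the 12.5MP figure quoted in the article, which is why 50MP and 200MP sensors pair naturally with 2x and 4x in-sensor zoom respectively.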
However, some truly promising long-range zoom innovations have emerged recently. Back at CES 2025, Samsung Semiconductor demonstrated its All Lenses on Prism (ALoP) concept to enhance zoom quality. ALoP places the lens elements atop the prism in periscope cameras, trimming module thickness while allowing a larger effective aperture, essentially tackling the two big problems of phone zoom cameras in one swoop; narrow apertures are the bane of low-light capture at a distance. Similar technology is already available in commercial smartphones: OPPO's Triple Prism Periscope Structure powers the mighty 3x cameras, backed by a 50MP 1/1.95-inch LYT-600, inside the Find X8 Pro and X8 Ultra, and presumably the same setup is in the OnePlus 13. Elsewhere, multi-lens 'stacked' setups and longer folded periscope cameras are making longer-range zoom more powerful, while novel ideas like Sony's variable focal length and HUAWEI's dual-lens periscope add flexibility.

It wouldn't be the future without AI

AI photography is already a core part of the smartphone camera experience, and it's only going to become more central. From subtle exposure tweaks to full-scene reconstruction, AI now does a lot of the heavy lifting behind the scenes, and the latest chipsets from MediaTek and Qualcomm are helping these features quickly descend the price tiers. AI implementations can be seamless or horrendously heavy-handed, but some brands are already showing just how powerful the technology can be when used well. Take the OnePlus 13, for example: its impressive zoom from a modest 3x telephoto lens is a showcase of what smart AI-driven processing can do. And of course, Google's Pixel series continues to set the standard for computational photography, handling everything from HDR fusion to skin-tone accuracy with remarkable finesse.
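To make per-object processing concrete, here's a toy sketch of semantic tone mapping: a per-pixel class label drives a different gain for each region, roughly how a pipeline can tame a bright sky while lifting a face in the same frame. The class names and gain values are invented for illustration and don't correspond to any vendor's actual pipeline.

```python
# Hypothetical per-class tone gains; real pipelines learn these per scene.
GAINS = {"sky": 0.8, "face": 1.2, "background": 1.0}

def apply_semantic_gain(pixels, labels):
    """pixels: linear luminance values in [0, 1]; labels: per-pixel class.

    Each pixel is scaled by its class gain and clamped to the valid range.
    """
    return [min(p * GAINS[label], 1.0) for p, label in zip(pixels, labels)]

out = apply_semantic_gain([0.9, 0.5, 0.5], ["sky", "face", "background"])
print([round(v, 2) for v in out])  # [0.72, 0.6, 0.5]
```

The sky pixel is pulled down, the face pixel lifted, and the background left alone; a real implementation would do this with a segmentation network's mask and smooth transitions at region boundaries.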
The upcoming Google Pixel 10 will likely push even further, introducing next-generation features like advanced semantic segmentation and AI-enhanced zoom that delivers sharp results at higher magnifications without relying on heavy optics. Likewise, Apple's Photonic Engine will undoubtedly see upgrades with the launch of the iPhone 17 series. Looking ahead, expect AI to play an even bigger role in low-light photography. Rather than stacking multiple frames and hoping for the best, phones will increasingly use learned noise models to clean up shadows while preserving texture and avoiding ghosting. This should result in cleaner, more natural night shots, especially for moving subjects like people or pets. AI can undoubtedly make photos better, but expect to sift through the gimmicks too.

Video will also see an AI upgrade. Some new phones already use real-time semantic processing to recognize what you're filming and optimize focus, exposure, and tone on a per-object basis. Soon, more mainstream phones will offer this superior highlight control, more accurate skin tones, and improved background separation for video as well as photos. We're also likely to see stabilized cinematic bokeh and AI relighting become more interactive and editable after the fact.

In short, AI is no longer just a buzzword; it's becoming the backbone of smartphone photography. From better zoom and cleaner low-light shots to smarter video and creative post-processing tools, AI is reshaping how cameras see and understand the world. The best part? Much of this innovation will run natively on the device, giving users faster performance and more control without relying on the cloud.

Don't sleep on the mid-range

Flagships still get the flashiest new toys, but mid-range phones are quietly catching up, and fast. Today's upper-mid-tier models already support features like 4K video, optical image stabilization (OIS), and phase-detect autofocus.
That spec list belonged to $1,000 phones just a few short years ago. We've recently seen telephoto zoom arrive on some very affordable phones, significantly upping their shooting flexibility. Sensor makers like OmniVision are helping drive this shift. Take its OV50M40, for example, which is designed to reduce cost and complexity while still supporting advanced features like multi-frame HDR and fast autofocus. The trade-off is typically a smaller sensor format and more basic optics, but overall capabilities continue to climb. Mid-range phones will quickly catch up with features recently considered flagship-tier: by 2026–2027, it's reasonable to expect mainstream phones to offer 50MP main cameras with quad-pixel binning, 4K/30 video, and AI-powered scene optimization as standard, effectively matching the spec sheets of 2023–2024 flagships, at least as far as the main camera is concerned. In short, users in the mid-range segment won't be left behind; they stand to benefit from continued improvements in both hardware and software as flagship tech filters down.

More of what we already know and love

Looking at the big picture, I'm not expecting a major revolution in mobile camera sizes or form factors in the next couple of years. However, that doesn't mean today's smartphones are as good as they can possibly be. The combination of new sensor tech, optical design, and AI means we can expect more dramatic improvements even by 2026/27, especially in flagship devices. Features like multi-camera zoom fusion, ultra-resolution night modes, and full-scene HDR could all become standard, and will arrive in more affordable models before long. While smartphone cameras are already amazing, provided you have the money to spend, the industry continues to evolve rapidly on both hardware and software fronts. There's plenty for photography buffs to look forward to in the coming generations.

AI isn't coming for your job—it's coming for your whole org chart

Fast Company · 35 minutes ago

Last month, my friend Amy, a mid-level marketing manager at a Fortune 500 company, had her entire junior analyst team 'restructured.' The official reason? 'Strategic realignment.' The reality? AI tools now handle what used to be three full-time positions. Amy isn't alone in this new reality. AI has eliminated 76,440 jobs in 2025 alone, and 41% of global employers plan to reduce their workforce in the next five years due to AI automation. But you don't just lose your current job when this happens; you lose the corporate ladder you were climbing. The relationships you made and the personal career brand you built, the ones that led to promotions and growth, are gone.

The Career Ladder Is Breaking (and No One's Talking About It)

We are experiencing changes in the job market unlike anything we have seen since the Industrial Revolution, especially in Big Tech. Big Tech reduced hiring of new graduates by 25% in 2024 compared to 2023, while increasing hiring of professionals with 2–5 years of experience by 27%. How can you pay your dues, learn, and build your career when there are no entry-level positions to be had? This paradox is becoming increasingly common in today's workforce: companies want someone with experience, but there are fewer and fewer positions that allow an employee to gain it.

This sea change feels different. The past 35 years have given us more rapid change than at any time in history, carrying us through the dot-com boom, the mobile phone revolution, and the cloud transformation. AI isn't just changing what we do and how we perform; it's eliminating the steps we traditionally started with to learn, grow, and develop the soft and hard skills that form the foundation of a career.

Speed and Efficiency Now, Devastation Later

The entry-level people who filled the office floor, built a unique and diverse team, and brought life and energy into the office are now being phased out.
AI does what they did faster, and it doesn't take sick days or need health insurance. Lawyers who have just passed the bar, learning the basics of the profession via document review? That process is now automated. The new generation of the workforce now sees a risk in investing in a four-year degree: a study from the World Economic Forum revealed that 49% of US Gen Z job hunters believe AI has reduced the value of their college education. What will this lead to in 10–15 years, as people with experience and knowledge begin to retire and fewer people are qualified to assume their roles? And another question: for those of us in the midst of a career, how do we advance when the ladder that was once just a few rungs up has been chopped off and thrown in a corporate fireplace?

Companies Currently Solving the Problem

Studying organizations that are meeting these changes head-on and winning, I've seen a few commonalities. They don't simply cut costs for the sake of cutting costs; they are fundamentally reimagining how work gets done. For example:

BCI increased productivity by 10% to 20% for 84% of its Microsoft Copilot users while increasing job satisfaction by 68%, saving more than 2,300 person-hours through automation. This was accomplished not by simply implementing AI, but by redesigning workflows around human-AI collaboration.

Daiichi Sankyo: within a month of building its internal AI system (DS-GAI), over 80% of employees reported improved productivity and accuracy. The company is using AI advancements not to replace current employees, but to augment their capabilities.

These are the kinds of approaches any company looking to implement AI and automation can work into its deployment plans. How can they foster increased human-tech collaboration? How can they make their current team more productive and take the business to levels previously unattainable?
People Ahead of the Curve

The good news is that plenty of professionals are thriving during these days of upheaval and transition. For the most part, they take three common approaches to using AI to their advantage.

They Orchestrate With AI

The successful people I know don't fight AI; they teach themselves how to direct it and use it to their benefit. They understand that humans will always be in charge of the technology, and with that knowledge they can position themselves as the conductor of an orchestra of AI at their command.

They Focus on Uniquely Human Skills

They develop and hone the skills that AI amplifies rather than replaces: creative problem-solving, strategic thinking, and relationship-building. When AI is deployed to handle the mundane, repetitive tasks, these are the areas where humans must thrive.

They Position Themselves at the Intersection

The future will be written and commanded by individuals who bridge the unique creative minds of humans with the efficiency, accuracy, and speed of AI.

What is the common thread of these three points? How you use AI to your advantage. You can stand on the beach and scream at the coming tidal wave, or grab a surfboard and teach yourself to ride it. Those who choose the latter path will be the ones who run the world.

The World We Know Is on Death's Door

The truth we all must face is that 2025–2026 will be the years companies prepare for a generational change in how we work with AI. This will disrupt nearly every industry, and org charts will be completely rewritten or scrapped entirely. But remember that you can influence this change simply by preparing yourself as laid out in this article. The choice is no longer whether AI is for you; the choice is how you decide to leverage AI to your benefit.
We've seen this before: I remember people pushing back against computers, against email, against cellphones. Pushing back against AI today puts you in exactly the same position. The professionals who embrace this change and use AI as a tool for advancement will be the ones who write the org charts of the future.

NSF Gives Georgia Tech $20 Million To Build AI-Focused Supercomputer

Forbes · 36 minutes ago

The Georgia Institute of Technology is building a new supercomputer that will advance the nation's capacity to use artificial intelligence. The National Science Foundation has awarded Georgia Tech $20 million to lead the construction of the new machine, named Nexus, which will use artificial intelligence to advance scientific breakthroughs.

According to the NSF announcement of the award, Nexus will provide 'a critical national resource to the science and engineering research community.' It will function 'both as a standalone platform and as a gateway to effectively utilizing other national resources, significantly accelerating AI-driven scientific discovery.' Nexus is expected to advance American leadership in artificial intelligence, growing the nation's capacity in 'diverse areas of science and engineering, enabling breakthrough discoveries, increasing economic competitiveness, and advancing human health.'

'Georgia Tech is proud to be one of the nation's leading sources of the AI talent and technologies that are powering a revolution in our economy,' said Georgia Tech President Ángel Cabrera in the university's announcement. 'It's fitting we've been selected to host this new supercomputer, which will support a new wave of AI-centered innovation across the nation. We're grateful to the NSF, and we are excited to get to work.'

Nexus will have enormous computing capacity, according to Georgia Tech. 'The Nexus system's novel approach combining support for persistent scientific services with more traditional high-performance computing will enable new science and AI workflows that will accelerate the time to scientific discovery,' said Katie Antypas, NSF's director of the Office of Advanced Cyberinfrastructure. 'We look forward to adding Nexus to NSF's portfolio of advanced computing capabilities for the research community.'
Georgia Tech will construct Nexus in partnership with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, home to several of the country's top academic supercomputers. The two universities will establish a new high-speed network, creating a national research infrastructure available to U.S. researchers, who will be able to apply for NSF support to access the supercomputer.

'Nexus is more than a supercomputer — it's a symbol of what's possible when leading institutions work together to advance science,' said Charles Isbell, chancellor of the University of Illinois and former dean of Georgia Tech's College of Computing. 'I'm proud that my two academic homes have partnered on this project that will move science, and society, forward.'

Plans call for construction to begin this year, with completion expected by spring 2026. Georgia Tech will manage Nexus, provide support, and reserve up to 10% of its capacity for its own campus research. 'This is a big step for Georgia Tech and for the scientific community,' said Vivek Sarkar, the John P. Imlay Dean of Computing. 'Nexus will help researchers make faster progress on today's toughest problems — and open the door to discoveries we haven't even imagined yet.'
