Will Foveon sensors ever return, and does anyone outside Sigma still care?
In a recent PetaPixel interview, Sigma's CEO Kazuto Yamaki revealed that the company remains committed to developing the Foveon sensor. This is predictable news given Sigma's unwavering loyalty to the technology, but it's also surprising. Sigma's last Foveon camera was the sd Quattro, equipped with a 29MP APS-C Foveon X3 sensor. It launched in 2016 and remained in production for the rest of the decade alongside older Sigma cameras that also used the Foveon X3, but no new Foveon-equipped camera has been launched since.
Rumors circulated during the intervening years that a full-frame Foveon was in development, but it has never materialized. In 2023 it was said that a production sensor might be ready in 2024, though development was proving difficult. Then, in early 2024, Yamaki stated that "not much progress has been made". One reason is that the Foveon sensor is fundamentally different from conventional Bayer sensors, and therefore requires a dedicated manufacturing process. To minimize production complexity, Yamaki stated that "currently, we are trying to realize a three-layer structure using as many standard processes as possible at the design stage".
Fast-forward 12 months to this latest PetaPixel interview, and the mythical full-frame Foveon seems just as elusive as ever. Sigma is apparently still working on it, and Yamaki says "we will do our best" regarding its development. However, he is also candid that development is taking much longer than expected, and that technical issues have been encountered with prototypes (skip to 42:53 in the YouTube interview for the Foveon details).
So after all these years of trying, why is Sigma still chasing its Foveon dream? The core reason is likely that a Foveon sensor promises, in theory, superior image quality to a conventional Bayer sensor, thanks to its unique design. Where the surface of a Bayer sensor is split into pixels that each capture only red, blue or green light (25%, 25% and 50% of the sensor area, respectively), a Foveon sensor has a separate layer for each of the three color wavelengths, stacked on top of one another. By sampling all three colors at every pixel location, the sensor can theoretically capture around 3x more light than a Bayer sensor of equivalent dimensions, which potentially translates to significantly improved color fidelity, dynamic range and reduced image noise.
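To make that "3x" figure concrete, here is a minimal, purely hypothetical sketch of the idealized arithmetic. The pixel count and the simple one-versus-three sampling model are illustrative assumptions, not real sensor specifications; actual sensors differ due to filter transmission losses and how silicon absorbs different wavelengths at depth.

```python
# Idealized comparison of color sampling: Bayer mosaic vs. stacked
# (Foveon-style) sensor. All numbers are hypothetical illustrations.

PHOTOSITES = 24_000_000  # a hypothetical 24MP sensor

# Bayer: the color filter array means each photosite records just ONE
# color channel (a 2x2 block holds 2 green, 1 red, 1 blue), so light of
# the other two colors is filtered out before it reaches the photodiode.
bayer_color_samples = PHOTOSITES * 1

# Foveon-style stack: three photodiode layers per site, so every
# location records red, green AND blue.
foveon_color_samples = PHOTOSITES * 3

print(foveon_color_samples / bayer_color_samples)  # 3.0, the theoretical gain
```

In practice, a Bayer camera interpolates the two missing channels at each photosite during demosaicing, so the real-world gap is far smaller than this idealized factor of three suggests.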
That sounds great, but in practice Foveon sensors have proven to be more of a mixed bag. The sd Quattro was praised for its ability to resolve fine detail, with its 29MP sensor said to be comparable to a 39MP Bayer sensor in this regard. However, this advantage came at the cost of image noise, which was reportedly higher than that from contemporary cameras with conventional sensors.
The theoretical benefits of Foveon are compelling, and it's easy to see why Sigma, a small player in the camera market, would want to stand out from the likes of Canon, Nikon and Sony by producing a full-frame camera with a fundamentally different (and hopefully superior) image sensor. But could such huge development costs ever be recouped by sales of a camera that would surely only ever appeal to a niche market? Given that Sigma has abandoned Foveon for its fp cameras and the new BF, it seems even less likely that we'll ever see a Foveon return. If development really is continuing, we have to assume it's on a small scale. There's surely no logical business case for ploughing significant financial investment and technical expertise into a sensor that, when judged on past performance, doesn't offer a convincing advantage over Bayer technology.