Bizarre ‘impossible colours' your eyes CAN'T see – days after scientists discover mystery new hue ‘Olo' we've all missed


The Sun, 24-04-2025

THE world is full of colour - but there's more to it than meets the human eye.
There is an untold number of "impossible colours" that humans can't see or even imagine.
Earlier this week, researchers at the University of California, Berkeley, and the University of Washington claimed to have discovered a new hidden colour called "Olo".
They found a way to hijack the retina and artificially expand the natural human capacity for colour.
Using a prototype known as "Oz", researchers flashed a laser of a single monochromatic colour (usually perceived as green) at participants' eyes.
This is when participants began seeing a new colour - one they were unable to recreate when given red, green and blue light to mix.
But given enough white light to desaturate it, participants and researchers discovered "Olo", a light blue-green colour.
There is no definitive number of "impossible" colours on Earth - but it's not zero.
And it's unclear where they might occur, although some animals can see them.
This is because different species have different numbers and types of colour receptors in their eyes, allowing them to see a broader or narrower range of the electromagnetic spectrum.
The mantis shrimp is king of the colour wheel here, detecting ten times more colour than a human.
It is believed to see more colours than any other animal on the planet.
Why can't humans see "impossible" colours?
Human eyes are trichromatic - meaning there are only three types of cones that respond to colour.
They respond to the three primary colours of light: red, green, and blue.
These primary colours are the foundation of all the colours we see.
What are considered "impossible" colours are shades that exist outside our trichromatic vision.
Other animals, however, have more types of colour-responsive cones in their eyes - like the mantis shrimp, which has a whopping 16.
There are three main categories of these mysterious shades: forbidden, chimerical, and imaginary colours.
Forbidden
Forbidden colours are those that are not possible to see directly because of the way our cone cells in the retina work.
For example, red-green or blue-yellow.
Some people do report that they can see these forbidden colours, however, according to Anne-Laure, founder of Ness Labs and a PhD researcher at King's College London.
You can use the templates above to see whether you are one of the lucky few.
Simply make your eyes cross so that both "+" symbols are on top of each other.
Chimerical
Chimerical colours are another type of "impossible" colour - ones that can only be perceived through brain tricks like optical illusions or afterimages.
One example is stygian blue, a blue that appears as dark as black.
Imaginary
Imaginary colours are ones that cannot be physically produced or perceived by the human eye.
They exist in mathematical colour spaces but not in the 'real world'.
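The "mathematical colour spaces" point can be made concrete with a few lines of code. The sketch below (an illustration, not part of the research) converts the chromaticity of a highly saturated spectral green - roughly monochromatic ~520 nm light, an assumed example value - into linear sRGB using the standard CIE XYZ-to-sRGB matrix. Components outside [0, 1] mean no mix of a screen's red, green and blue primaries can reproduce the colour.

```python
# Sketch: a highly saturated spectral green lies outside the sRGB gamut.
# Converting its chromaticity to linear sRGB yields out-of-range values.
# The chromaticity (0.07, 0.83) approximates monochromatic ~520 nm light;
# the matrix is the standard CIE XYZ -> linear sRGB transform (D65).

def xy_to_linear_srgb(x, y, Y=1.0):
    """Convert a CIE xyY colour to linear sRGB components."""
    X = x / y * Y
    Z = (1 - x - y) / y * Y
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

r, g, b = xy_to_linear_srgb(0.07, 0.83)
# A displayable colour needs all three components in [0, 1]; here r and b
# come out negative, so no RGB mix of light can produce this colour.
print(f"r={r:.3f}, g={g:.3f}, b={b:.3f}")
```

Such colours are perfectly well-defined as coordinates in the CIE colour space, which is what "existing mathematically but not physically" amounts to.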



Related Articles

Nvidia, Dell to supply next US Department of Energy supercomputer

Reuters

29-05-2025



BERKELEY, California - May 29 (Reuters) - The U.S. Department of Energy on Thursday said its "Doudna" supercomputer, due in 2026, will use technology from Nvidia and Dell. The computer, named for Nobel Prize-winning scientist Jennifer Doudna who made key CRISPR gene-editing discoveries, will be housed at Lawrence Berkeley National Laboratory in Berkeley, California. At an event at the lab attended by Secretary of Energy Chris Wright, officials said that the system will use Nvidia's latest "Vera Rubin" chips built into liquid-cooled servers by Dell and will be used by 11,000 researchers.

US supercomputer named after Nobel laureate Jennifer Doudna to power AI and scientific research

The Independent

29-05-2025



A new supercomputer named after a winner of the Nobel Prize in chemistry will help power artificial intelligence technology and scientific discoveries from a perch in the hills above the University of California, Berkeley, federal officials said Thursday. U.S. Energy Secretary Chris Wright is scheduled to announce the project Thursday alongside executives from computer maker Dell Technologies and chipmaker Nvidia. The new computing system at the Lawrence Berkeley National Laboratory will be called Doudna after Berkeley professor and biochemist Jennifer Doudna, who won a Nobel in 2020 for her work on the gene-editing technology CRISPR. It's due to switch on next year. 'One of the key use cases will be genomics research,' said Dion Harris, a product executive in Nvidia's AI and high-performance computing division, in an interview. 'It was basically just a nod to her contributions to the field.' Dell is contracted with the energy department to build the computer, the latest to be housed at Berkeley Lab's National Energy Research Scientific Computing Center. Previous computers there have been named after other Nobel winners: Saul Perlmutter, an astrophysicist, and Gerty Cori, a biochemist. It's not clear yet how the computer will rank on the TOP500 listing of the world's fastest supercomputers. The current top-ranked computer is El Capitan, located about an hour's drive away at the Lawrence Livermore National Laboratory. That's followed by other supercomputers at U.S. national labs in Tennessee and Illinois.

Medical errors are still harming patients. AI could help change that

Medication mistakes — where the wrong drug or the wrong dosage is given to a patient — are among the most common errors in medicine.

John Wiederspan puts on an AI-powered wearable camera designed to detect medication errors. David Jaewon Oh for NBC News

NBC News

25-05-2025



May 25, 2025, 5:00 AM EDT
By David Cox

John Wiederspan is well aware of how things can go wrong in the high-pressure, high-stakes environment of an operating room. 'During situations such as trauma, or a patient doing poorly, there's a real rush to try and get emergency drugs into the patient as fast as possible,' said Wiederspan, a nurse anesthetist at UW Medicine in Seattle. 'And that's when mistakes can occur, when you're flustered, your adrenaline's rushing, you're drawing up drugs and you're trying to push them.' Despite ongoing efforts to improve patient safety, it's estimated that at least 1 in 20 patients still experience medical mistakes in the health care system. One of the most common categories of mistakes is medication errors, where for one reason or another, a patient is given either the wrong dose of a drug or the wrong drug altogether. In the U.S., these errors injure approximately 1.3 million people a year and result in one death each day, according to the World Health Organization. In response, many hospitals have introduced guardrails, ranging from color coding schemes that make it easier to differentiate between similarly named drugs, to barcode scanners that verify that the correct medicine has been given to the correct patient. Despite these attempts, medication mistakes still occur with alarming regularity. 'I had read some studies that said basically 90% of anesthesiologists admit to having a medication error at some point in their career,' said Dr. Kelly Michaelsen, Wiederspan's colleague at UW Medicine and an assistant professor of anesthesiology and pain medicine at the University of Washington. She started to wonder whether emerging technologies could help. As both a medical professional and a trained engineer, it struck her that spotting an error about to be made, and alerting the anesthesiologists in real time, should be within the capabilities of AI.
'I was like, 'This seems like something that shouldn't be too hard for AI to do,'' she said. 'Ninety-nine percent of the medications we use are these same 10-20 drugs, and so my idea was that we could train an AI to recognize them and act as a second set of eyes.'

The study

Michaelsen focused on vial swap errors, which account for around 20% of all medication mistakes. All injectable drugs come in labeled vials, which are then transferred to a labeled syringe on a medication cart in the operating room. But in some cases, someone selects the wrong vial, or the syringe is labeled incorrectly, and the patient is injected with the wrong drug. In one particularly notorious vial swap error, a 75-year-old woman being treated at Vanderbilt University Medical Center in Tennessee was injected with a fatal dose of the paralyzing drug vecuronium instead of the sedative Versed, resulting in her death and a subsequent high-profile criminal trial. Michaelsen thought such tragedies could be prevented through 'smart eyewear' — adding an AI-powered wearable camera to the protective eyeglasses worn by all staff during operations. Working with her colleagues in the University of Washington computer science department, she designed a system that can scan the immediate environment for syringe and vial labels, read them and detect whether they match up. 'It zooms in on the label and detects, say, propofol inside the syringe, but ondansetron inside the vial, and so it produces a warning,' she said. 'Or the two labels are the same, so that's all good, move on with your day.' Building the device took Michaelsen and her team more than three years, half of which was spent getting approval to use prerecorded video streams of anesthesiologists correctly preparing medications inside the operating room. Once given the green light, she was able to train the AI on this data, along with additional footage — this time in a lab setting — of mistakes being made.
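The label-matching step Michaelsen describes boils down to a simple comparison once the labels have been read. The sketch below is a toy illustration of that final check only; the function name and the upstream label-reading (OCR) step are assumptions for illustration, not the UW team's actual code.

```python
# Illustrative sketch (not the UW system): compare the drug name read from
# a syringe label with the name read from the vial, and warn on a mismatch.
# Reading the labels from video frames is assumed to happen upstream.

def check_vial_swap(syringe_label: str, vial_label: str) -> str:
    """Return a warning if the syringe and vial labels disagree."""
    syringe = syringe_label.strip().lower()
    vial = vial_label.strip().lower()
    if syringe != vial:
        return f"WARNING: syringe reads '{syringe}' but vial reads '{vial}'"
    return "OK: labels match"

print(check_vial_swap("Propofol", "Ondansetron"))  # mismatch -> warning
print(check_vial_swap("Propofol", "propofol"))     # match -> OK
```

The hard engineering in the real system is everything before this comparison: locating and reading labels reliably from a head-mounted camera while keeping false alarms near zero.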
'There's lots of issues with alarm fatigue in the operating room, so we had to make sure it works very well, it can do a near perfect job of detecting errors, and so [if used for real] it wouldn't be giving false alarms,' she said. 'For obvious ethical reasons, we couldn't be making mistakes on purpose with patients involved, so we did that in a simulated operating room.' In a study published late last year, Michaelsen reported that the device detected vial swap errors with 99.6% accuracy. All that's left is to decide the best way for warning messages to be relayed and it could be ready for real-world use, pending Food and Drug Administration clearance. The study was not funded by AI tech companies. 'I'm leaning towards auditory feedback because a lot of the headsets like GoPro or Google Glasses have built-in microphones,' she said. 'Just a little warning message which makes sure people stop for a second and make sure they're doing what they think they're doing.' Wiederspan has tested the device and said he's optimistic about its potential for improving patient safety, although he described the current GoPro headset as being a little bulky. 'Once it gets a bit smaller, I think you're going to get more buy-in from anesthesia providers to use it,' Wiederspan said. 'But I think it's going to be great. Anything that's going to make our job a little bit easier, spot any potential mistakes and help bring our focus back to the patient is a good thing.'

It isn't a fail-safe

Patient safety advocates have been calling for the implementation of error-preventing AI tools for some time. Dr. Dan Cole, vice chair of the anesthesiology department at UCLA Health and president of the Anesthesia Patient Safety Foundation, likened their potential for reducing risk to that of self-driving cars and improving road safety.
But while Cole is encouraged by the UW study and other AI-based research projects to prevent prescribing and dispensing errors in pharmacies, he said there are still questions surrounding the most effective ways to integrate these technologies into clinical care. 'The UW trial idea was indeed a breakthrough,' he said. 'As with driverless taxis, I'm a bit reluctant to use the technology at this point, but based on the potential for improved safety, I am quite sure I will use it in the future.' Melissa Sheldrick, a patient safety advocate from Ontario who lost her 8-year-old son Andrew to a medication error in 2016, echoed those thoughts. Sheldrick said that while technology can make a difference, the root cause of many medical errors is often a series of contributing factors, from lack of communication to vital data being compartmentalized within separate hospital departments or systems. 'Technology is an important layer in safety, but it's just one layer and cannot be relied upon as a fail-safe,' she said. Others feel that AI can play a key role in preventing mistakes, particularly in demanding environments such as the operating room and emergency room, where creating more checklists and asking for extra vigilance has proved ineffective at stopping errors. 'These interventions either add friction or demand perfect attention from already overburdened providers in a sometimes chaotic reality with numerous distractions and competing priorities,' said Dr. Nicholas Cordella, an assistant professor of medicine at Boston University's Chobanian & Avedisian School of Medicine. 'AI-enabled cameras allow for passive monitoring without adding cognitive burden to clinicians and staff.'

AI is only going to be used more

AI tools are likely to be deployed to prevent errors in an even broader range of situations.
At UW Medicine, Michaelsen is considering expanding her device to also detect the volume of the drug present in a syringe, as a way of preventing underdosing and overdosing errors. 'This is another area where harm can occur, especially in pediatrics, because you've got patients [in the same department] where there can be a hundredfold difference in size, from a brand-new premature baby to an overweight 18-year-old,' she said. 'Sometimes we have to dilute medications, but as you do dilutions there's chances for errors. It isn't happening to every single patient, but we do this enough times a day and to enough people that there is a possibility for people to get injured.' Wiederspan said he can also see AI-powered wearable cameras being used in the emergency room and on the hospital floor to help prevent errors when dispensing oral medications. 'I know Kelly's currently working on using the system with intravenous drugs, but if it can be tailored to oral medications, I think that's going to help too,' Wiederspan said. 'I used to work in a cardiac unit, and sometimes these patients are on a plethora of drugs, a little cup full of all these pills. So maybe the AI can catch errors there as well.' Of course, broader uses of AI throughout a hospital also come with data protection and privacy concerns, especially if the technology happens to be scanning patient faces and screens or documents containing their medical information. In UW Medicine's case, Michaelsen said this is not an issue as the tool is only trained to look for labels on syringes, and does not actively store any data. 'Privacy concerns represent a significant challenge with passive, always-on camera technology,' Cordella said. 'There needs to be clear standards with monitoring for breaches, and the technology should be introduced with full transparency to both patients and health care staff.' 
He also noted the possibility of more insidious issues such as clinicians starting to excessively rely on AI, reducing their own vigilance and neglecting traditional safety practices. 'There's also a potential slippery slope here,' Cordella said. 'If this technology proves successful for medication error detection, there could be pressure to expand it to monitor other aspects of clinician behavior, raising ethical questions about the boundary between a supportive safety tool and intrusive workplace monitoring.' But while the prospect of AI entering hospitals on a wider basis certainly presents the need for stringent oversight, many who work in the operating room feel it has enormous potential to do good by keeping patients safe and buying medical professionals valuable time in critical situations. 'Time is of the essence in an emergency situation where you're trying to give blood, lifesaving medications, checking vital signs, and you're trying to rush through these processes,' Wiederspan said. 'I think that's where this kind of wearable technology can really come into play, helping us shave off vital seconds and create more time where we can really focus on the patient.'

David Cox is a freelance journalist focusing on all aspects of health, from fitness and nutrition to infectious diseases and future medicines. Prior to becoming a full-time journalist, he was a neuroscientist attempting to understand how and why the brain goes wrong.
