
Now your phone can tell if you have depression using the selfie camera
Most of us stare at our phone or computer for hours every day – and soon, it could be staring back.
New technology to monitor mental health works by analysing emotions throughout the day using the front-facing camera, producing a daily report similar to step count or a heart rate graph.
Hundreds of patients are already using an app called Emobot to track their mood, to see whether their depression is worsening or responding to treatment.
Co-founder Samuel Lerman told Metro that it is classed as a medical device in France, and they are working with psychiatrists to prescribe it to patients.
To work as a mood 'thermometer', the app takes a picture of your face every second and categorises whether you're feeling energised, pleased, happy, content, relaxed, bored, sad, or angry, presenting the results as a heat map.
Metro reporter Jen Mills looked mostly 'pleased' and 'bored' while visiting the stand, according to the algorithm (Picture: Jen Mills)
The team were initially afraid people would find this too intrusive, given that the app constantly watches you. A future version will even listen to your tone of voice as you go about your day via the phone microphone.
Mr Lerman said: 'The camera is open in the background all the time. So we were a bit sceptical about that aspect, however the feedback was pretty good.'
He said that no photos from the camera are transmitted to a central database or stored: each frame is processed locally by AI on the user's phone and then deleted.
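The privacy-preserving design described here, where raw images never leave the device and only the emotion labels are kept, can be sketched in a few lines. This is purely illustrative: Emobot's actual model and categories are not public, so the classifier below is a random stand-in and the labels are those listed in this article.

```python
import random
import collections

# Emotion labels from the article's description of the app's heat map.
EMOTIONS = ["energised", "pleased", "happy", "content",
            "relaxed", "bored", "sad", "angry"]

def classify_frame(frame) -> str:
    """Stand-in for the on-device AI model: maps one camera frame
    to one emotion label. Random here, for illustration only."""
    return random.choice(EMOTIONS)

def process_day(frames):
    """Process one frame per second, keeping only the label tally.
    The raw frame is discarded immediately, never stored or sent."""
    tally = collections.Counter()
    for frame in frames:
        tally[classify_frame(frame)] += 1
        del frame  # raw image dropped as soon as it is classified
    total = sum(tally.values())
    # Daily report: share of the day spent in each mood state.
    return {emotion: count / total for emotion, count in tally.items()}

# One hour of dummy 'frames' stands in for the camera feed.
report = process_day(frames=[object() for _ in range(3600)])
```

The point of the structure is that only `report`, a handful of percentages, survives the loop, which is what makes a daily mood graph possible without a photo archive.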
This technology is similar to that being developed for office workers, to check whether they are really sitting at their computer or appear tired.
We tried the emotion mapping software on display at the Viva Tech conference in Paris, and a real-time image showed reporter Jen Mills as appearing both 'pleased' and 'bored' at the same time.
Click to enlarge: Insights shown to the user about their state of mind (Picture: Emobot)
Mr Lerman said the app helps doctors track patients' response to treatment as well as 'detect sudden deterioration of their mood' and relapse risk.
He said it could also speed up diagnosis of long-term mental health conditions, such as distinguishing bipolar disorder from depression, which can take years to recognise.
For now, it is used in clinical settings, but he sees potential for it to be used by the general public in future too if they want to monitor their mood.
To train the algorithm to recognise emotions, they asked patients to fill out questionnaires on their state of mind while using it. Questionnaires on mood are currently the main method of diagnosing mood disorders.
They also trained the AI on open source information, and are conducting clinical trials into biomarkers of depression, to improve the accuracy of the results.
To get a wider picture of a person's mental state, they also plan to integrate information like sleep, step count, and even the weather.
It might sound dystopian to have your phone constantly taking photos of you, but some see it as a logical next step when we track so much of our health already, such as an Apple Watch measuring heart rate variability, sleep cycles and wrist temperature.
Having such sensitive data collected and digitised also poses a risk, but this is one that all health apps must grapple with — including the NHS, which was hit by a damaging cyber attack last year.
Click to enlarge: A graph showing a bipolar patient's transition from mania to depression (Picture: Emobot)
Mental health apps were a theme of the Viva Tech conference, Europe's biggest tech event. They are seen as a potential way to address the current shortages in professional help, with many on long waiting lists, or limited by living too far away from treatment.
Tech offering AI therapists, remote monitoring, and self-help is booming, with investors identifying it as a key growth area that could grow from being worth £5.5 billion in 2025 to £17.5 billion by 2032.
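For scale, the two figures quoted imply the market roughly tripling over seven years, a compound annual growth rate of about 18%. A quick check, assuming the £5.5 billion and £17.5 billion endpoints and the 2025–2032 window given here:

```python
start, end = 5.5, 17.5   # market value in £ billions
years = 2032 - 2025      # seven-year window

# Compound annual growth rate implied by the two endpoints.
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")  # prints "Implied growth rate: 18.0% per year"
```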
Michel Treskow, a partner at venture capital firm Eight Roads Ventures, told a panel his company was already investing in the field: 'There is a fundamental shortage of supply – practitioners, funding, and time. We believe technology can help address all of these issues to some degree.'
Emobot's stand, pictured at the Viva Tech conference in Paris last week (Picture: Jen Mills/Metro)
Comparing mental health tech to self-driving Waymo taxis (already a familiar sight in southern California), he said there was an open question about how much tech could replace or add to traditional mental health treatment.
'There are still plenty of people out there who don't want to take a Waymo and would rather stick with somebody driving the car,' he said.
'The question is, is it just as safe? If the answer is yes, then it becomes a choice. If it's not as safe, we ought not to do it.'
Get in touch with our news team by emailing us at webnews@metro.co.uk.
For more stories like this, check our news page.
MORE: I thought I was confident in my body — then I got my first girlfriend
MORE: Urgent recall for vitamin gummies over 'life-threatening health risk'
MORE: Online spells and WitchTok – welcome to the world of modern day witches
