
How AI note-taking can ease healthcare's documentation fatigue
'Our goal is simple,' says James Gordon, co-founder and COO of Nora AI.
'We want to give doctors back their time. Nora AI reduces friction in the clinical workflow, integrates with existing electronic health record systems, and ensures that documentation is complete, well-structured, and consistent.'
Pivoting to health tech
Originally developed as an educational tool to auto-generate study materials from various resources, Nora AI pivoted into the health tech space when Healthbridge identified its potential to solve a growing clinician pain point: documentation fatigue.
Healthbridge integrated Nora AI into its Healthbridge Clinical platform.
'This isn't theoretical innovation,' says Luis da Silva, CEO of Healthbridge.
'Nora AI is already reducing burnout, improving documentation quality, and helping clinicians focus on what matters most: patient care.'
Improving healthcare delivery
Cross-sector partnerships are crucial for scaling health technology solutions and improving healthcare delivery.
'By pairing health tech infrastructure with AI applications, the ecosystem becomes more agile, efficient, and scalable, benefitting medical practitioners and patients alike,' says Da Silva.
The approach is also invaluable in emerging markets, where legacy systems and resource constraints call for more flexible and intelligent design.
'Across the African continent, where many healthcare systems still rely on paper-based processes and fragmented supply chains, AI-powered solutions offer a leapfrog opportunity.
'Given clinician shortages, inconsistent data capture, and limited access to care, the case for ambient AI is even more compelling,' Gordon says.
Growing health tech market
Investors are taking notice: with Africa's health tech market projected to grow at a compound annual growth rate (CAGR) of 23.4% from 2024 to 2030, a new generation of African startups is using technology to reimagine healthcare access, equity, and efficiency.
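For context, a quick back-of-the-envelope sketch (our own illustration, assuming the 23.4% rate compounds over the six years from 2024 to 2030) shows what that projection implies for overall market size:

```python
# Illustrative only: implied overall growth from a 23.4% CAGR,
# assuming six compounding periods between 2024 and 2030.
cagr = 0.234
years = 2030 - 2024  # 6 periods
multiple = (1 + cagr) ** years
print(f"Implied growth multiple: {multiple:.2f}x")  # ~3.53x
```

In other words, if the projection holds, the market would be roughly 3.5 times its 2024 size by 2030.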
'This is what we mean by AI with purpose,' concludes Da Silva.
'It's not about hype or disruption for its own sake. It's about building tools that work in the background to make people's lives better, starting with the clinicians who keep healthcare running.'