Emirates Health Services to Unveil AI-Powered Tool to Detect Breast Cancer at Arab Health


Hi Dubai · 27-01-2025

Emirates Health Services (EHS) is set to unveil an innovative programme focused on the early detection of breast cancer using Artificial Intelligence (AI) at Arab Health 2025.
Developed in collaboration with the Ministry of Education and the Ministry of Health and Prevention, this groundbreaking AI tool enhances traditional mammography, significantly improving early detection rates and survival outcomes for patients.
The AI-powered tool will be one of 19 innovative projects EHS will showcase at the event, with 13 of them making their debut either regionally or globally. In addition to the AI programme, EHS will present several other pioneering projects, including the "Usrati Bundle" and the "In-Utero Gene Editing" project, developed in collaboration with the Children's Hospital of Philadelphia.
Arab Health 2025, one of the largest healthcare exhibitions in the world, kicks off at Dubai's World Trade Centre on January 29 and runs until January 30. This year's event, under the theme "Where the World of Healthcare Meets," will feature over 3,800 exhibitors and more than 60,000 visitors from across the globe. Attendees will have access to a diverse array of innovations spanning medical equipment, diagnostics, wellness, and prevention.
The UAE will be represented by over 200 companies, showcasing the latest advancements across nine product sectors. Arab Health 2025 also promises a relaxed networking space with the return of the Arab Health Village, where food and beverages will be available throughout the event.
Supported by key government entities, including the UAE Ministry of Health and Prevention and the Dubai Health Authority, Arab Health 2025 serves as a platform for innovation, collaboration, and education in the healthcare sector.
News Source: Khaleej Times


Related Articles

Virtual companions, chatbot-therapists: Is AI replacing human connections?

Khaleej Times · a day ago

I was listening to the radio on my way to work when a well-known RJ on one of the channels here in the UAE mentioned how she couldn't sleep the previous night, so she chatted with ChatGPT until she finally drifted off. The point of mentioning this isn't to debate insomnia remedies (as the show did), but to highlight something deeper: our growing emotional and cognitive reliance on digital solutions.

Social media platforms offer 'free' services that quietly shape our behaviours, the market constantly convinces us we need the latest smartphone, and algorithms trap us in filter bubbles, feeding us only what aligns with our existing views. Now, artificial intelligence (AI) chatbots and virtual companions are being designed to fulfil human desires for companionship, validation, and even therapy. Yet as a society, we are experiencing a disconnect like never before. The question is no longer just whether technology is replacing human connection, but how much of our emotional lives we are willing to outsource.

As loneliness surges, so do our efforts to fight it. It's instinct. With digital platforms around us so much of the time, we naturally turn to them. A 46-year-old woman living in a family-friendly neighbourhood shared her struggle with loneliness on social media. People had plenty of solutions, including making an AI friend.

A growing number of AI-powered virtual companions are now available online, offering round-the-clock support without fear of judgment. These digital assistants have become a valuable resource for individuals hesitant to share their emotions and concerns with others due to social anxiety or fear of criticism. These services aren't just for early adopters anymore; they are going mainstream fast. Among them, Replika has gained significant attention and popularity. Marketed as 'the AI Companion who cares, always here to listen and talk, always on your side', the app's reassuring message has resonated with users, and several reputable news outlets have featured the program.

Social media apps promote connectivity, but studies show that regular users frequently experience loneliness, indicating that the sense of connection may be superficial. The benefits of AI characters also come with significant risks, particularly for adolescents. Emotional dependency on AI can erode real-world social interactions and coping skills, potentially isolating users from familial and communal networks.

AI remembers every detail of the conversation and gives the illusion of being a sincere friend, says Dr Jihene Mrabet, a psychologist with academic expertise in AI. She elaborates that these mental health applications are capable of diagnosing psychological issues and even providing coaching advice. 'However, the concern is always about to what extent one can rely on these chatbots, since we don't know who is behind the technology, what their understanding of human psychology is, or how confidentiality is maintained in such interactions. We do not even know if the designers have proper guardrails,' Dr Jihene explains.

A Florida mother, Megan Garcia, is holding AI accountable for her 14-year-old son's death. In a lawsuit against the company, Garcia alleges that deeply personal AI exchanges contributed to her son's suicide. She is demanding accountability to shield other families from similar devastation.

As we increasingly turn to AI for companionship, we must ask: are we creating a world where technology replaces human connection? And if so, at what cost?
Developers, policymakers, and mental health experts must collaborate to enforce ethical safeguards, especially for vulnerable users.

Red Cross booth at AI defence conference offers stark reminder of technology's potential threat to civilians

The National · 2 days ago

The exhibition booth for the International Committee of the Red Cross at the AI+ military defence conference in Washington definitely stands out – that's the whole point.

'When technology makes it into a battlefield, it's going to have consequences,' Jonathan Horowitz, legal adviser to the Red Cross, said at the ICRC's exhibit, which focused on the potential problems in the use of artificial intelligence in conflicts.

The ICRC's booth at the Walter E Washington Convention Centre was surrounded by exhibits from some of the world's most influential companies with US military defence contracts, along with other entities from around the world. Palantir, Lockheed Martin, Booz Allen Hamilton, Google and Microsoft all had a large presence at the AI+ event. The three-day conference was organised by the Special Competitive Studies Project, which describes itself as a group that 'seeks to recapture the competitive mindset and unifying national mission from past eras, and then adapt them to the age of AI and 21st-century strategic rivalry'.

Proponents of AI on the battlefield say that it can help minimise casualties and enhance capabilities, but critics say the technology is far from perfect, missing nuances that get lost in the fog of war and often reflecting developers' biases. Critics also point out that fast-developing military implementations of AI have the potential to disregard international standards.

"Just because technology is new doesn't mean you can use it in unconstrained ways," Mr Horowitz said. "We want to remind people here what those rules are, and you can find the rules in the Geneva Conventions, in international humanitarian law."

He said that while the ICRC is concerned about the use of AI in war, it also sees its potential for improving the lives of civilians amid conflict. "It could give militaries better awareness of where hospitals or critical infrastructure are located, and with that knowledge militaries should know not to attack those locations," he explained.

Yet the concern raised in recent months is that once AI platforms are handed over to various militaries, there is little accountability for how they are used. Microsoft recently carried out an internal review in response to accusations that its AI technology was being used to harm civilians in the war in Gaza. While it found "no evidence" its products were being used in such a way by the Israeli military, it noted that the software and platforms could run on militaries' highly secure, independent networks, which limited the company's investigation. The company is not alone in facing criticism over how its AI technology is used; Alphabet-owned Google, Scale AI and Palantir have faced similar accusations. At the AI+ conference, demonstrators echoed concerns over the potential harm the technology poses to civilians, particularly in Gaza, with protesters interrupting various speeches and panel discussions.

Mr Horowitz added that in the months ahead the ICRC will continue working to "solve the puzzle" of AI and militaries on behalf of civilians, "who often are the ones that suffer the most in armed conflict". "We include a new set of recommendations on AI decision support systems, and at the top of our priorities is the need to retain human control and judgment within those systems," Mr Horowitz said of the ICRC's updated guidance on the technology. The organisation recently submitted its AI military recommendations to the UN Secretary General.

In that document, the ICRC expresses concern about the use of AI in autonomous weapon systems, along with the use of AI to expedite military decision-making. It also seeks to raise awareness about the potential for AI to increase the speed at which misinformation and disinformation spread, potentially contributing to and even encouraging violence. "This submission is intended to support states in ensuring that military applications of AI comply with existing legal frameworks and, where necessary, in identifying areas where additional legal, policy or operational measures may be required," the document concludes.

MoHAP launches 'Hajj Safely' to support pilgrims at UAE's airports

Al Etihad · 3 days ago

3 June 2025 22:04

ABU DHABI (ALETIHAD) - The Ministry of Health and Prevention (MoHAP) has launched the 'Hajj Safely' campaign to support pilgrims travelling to perform Hajj from the UAE's airports.

In a post on its official X account, the ministry said: "For a safe and healthy Hajj, we have provided support at the country's airports to pilgrims before the start of their journey, offering health and safety guidelines and conducting medical examinations. We wish all pilgrims an easy and safe journey, beginning with prevention and ending with health and wellbeing."

The ministry offered health and safety guidelines to educate the pilgrims before travelling and administered the necessary medical examinations.

Source: Aletihad - Abu Dhabi
