
People with ME have key genetic differences from other people, study finds
The DecodeME study, said to be the largest of its kind in the world, uncovered eight regions of genetic code that differ markedly between people with ME/CFS (myalgic encephalomyelitis/chronic fatigue syndrome) and people without the condition.
Researchers hope the findings will boost 'validity and credibility' for patients, and help counter some of the stigma and disbelief that surround the condition.
There is currently no diagnostic test or cure for ME/CFS, which is believed to affect around 67 million people worldwide, and very little is known about what causes it.
A key feature of the condition is a disproportionate worsening of symptoms following even minor physical or mental activity, known as post-exertional malaise (PEM). Other symptoms include pain, brain fog and extreme energy limitations that do not improve with rest.
For the new study, researchers analysed 15,579 DNA samples from the 27,000 people with ME/CFS participating in DecodeME, described as the world's largest dataset of people with the disease.
The eight regions of DNA where scientists found genetic differences involve genes linked to the immune and nervous systems.
At least two of the genetic signals relate to how the body responds to infection, which researchers said aligns with long-standing patient reports that the onset of symptoms often followed an infectious illness.
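Genome-wide association studies of this kind work by comparing how often genetic variants occur in people with a condition against people without it. As a rough illustration of the underlying statistics (not the DecodeME team's actual pipeline, and using entirely hypothetical counts), a case-control association test at a single variant can be sketched as follows:

```python
# Minimal sketch of the case-control association test that underpins a
# genome-wide association study (GWAS). All counts below are hypothetical
# and for illustration only -- they are not data from DecodeME.
from scipy.stats import chi2_contingency

# Allele counts at one variant: each participant carries two copies,
# so 5,000 cases would contribute 10,000 alleles.
#            risk allele  other allele
cases    = [3200, 6800]   # participants with ME/CFS
controls = [2800, 7200]   # participants without the condition

chi2, p_value, dof, _ = chi2_contingency([cases, controls])

# A GWAS repeats this test at millions of variants across the genome, so a
# much stricter threshold than 0.05 (conventionally 5e-8) is used to call
# a signal genome-wide significant.
GENOME_WIDE_SIGNIFICANCE = 5e-8
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
print("genome-wide significant:", p_value < GENOME_WIDE_SIGNIFICANCE)
```

In a real analysis, each variant passing this threshold marks a region of the genome, like the eight reported here, whose nearby genes then become candidates for follow-up study.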
Professor Chris Ponting, DecodeME investigator from the University of Edinburgh, said: 'This is a wake-up call. These extraordinary DNA results speak the language of ME/CFS, often recounting people's ME/CFS symptoms.
'DecodeME's eight genetic signals reveal much about why infection triggers ME/CFS and why pain is a common symptom.
'ME/CFS is a serious illness and we now know that someone's genetics can tip the balance on whether they are diagnosed with it.'
Because a person's DNA does not change over time, experts say the genetic signals identified cannot have arisen as a consequence of ME/CFS and are therefore likely to reflect causes of the disease.
The initial analysis was limited to participants of European ancestry; DecodeME research studying DNA data from all ancestries is ongoing.
ME/CFS, thought to affect around 404,000 people in the UK, is more common in females than males, although researchers found nothing to explain why this is the case.
The DecodeME team is now calling on researchers from around the world to access its 'rich' dataset and help drive forward targeted studies into ME/CFS.
Sonya Chowdhury, chief executive of Action for ME and a DecodeME co-investigator, said: 'These results are groundbreaking.
'With DecodeME, we have gone from knowing next to nothing about the causes of ME/CFS, to giving researchers clear targets.'
She also hopes the discoveries will help change the way the condition is viewed.
Ms Chowdhury said: 'This really adds validity and credibility for people with ME.
'We know that many people have experienced comments like ME is not real, or they've been to doctors and been disbelieved or told that it's not a real illness.
'Whilst things have changed and continue to change, that is still the case for some people and we hear that repeatedly as a charity.
'Being able to take this study into the treatment room and say there are genetic causes that play a part in ME is going to be really significant for individuals.
'It will rebuff that lack of belief and the stigma that exists.'
The findings have been reported in a preprint, a study that has not yet been peer-reviewed or published in a journal.
During a media briefing about the study, researchers were asked about similarities between the symptoms of long Covid and ME/CFS.
Prof Ponting said: 'It's very clear that the symptomology between long Covid and ME is highly similar.
'Not for everyone, but there are substantial similarities. But as a geneticist the key question for me is: are there overlapping genetic factors? And we haven't found that in DecodeME with the methods that we've employed.
'One of the key things that we're doing is enabling others to use their different approaches to ask and answer the same question.'
DecodeME is a collaboration between the University of Edinburgh, the charity Action for ME, the Forward ME alliance of charities, and people with ME/CFS.
It is funded by the Medical Research Council and National Institute for Health and Care Research.