Latest news with #clinicalresearch


Forbes
a day ago
- Health
- Forbes
AI Can Revolutionize Women's Healthcare – Here's How
People who look closely at clinical data and medical research are recognizing that in the past, women have been left out. One aspect of this is a narrow view of what women's health encompasses – and a historic lack of attention to the unique ways that women's bodies differ from men's. A New York Academy of Sciences article lays out some of these disparities: women were not generally included in medical research until the mid-1990s, a woman's diagnosis tends to happen later than a man's in many cases, and some conditions more common in women take a long time to be diagnosed, or receive less research. Brooke Grindlinger writes about a panel at SXSW in 2024 that discussed how much of this clinical research on women has been relegated to specific and narrow applications in reproductive health. 'There's so much more to women's health than that,' said Christina Jenkins, MD, a panelist and a general partner at Convergent Ventures. 'We consider 'women's health' as a specific practice… focused on things that are unique to women, which are those reproductive organs and [associated conditions].' Of course, AI has also brought better diagnosis and new approaches to mammography: Maureen Salamon writes at Harvard Health Publishing about applications to breast cancer, while acknowledging that this is far from the only aspect of women's health being explored with the new technologies. There's also a focus on the vast trove of data that comes from wearables, and thoughts on how it can contribute. 'In coming years, A.I. may be able to produce personalized breast cancer risk assessment scores, which offer a more detailed picture of an individual woman's risk for the disease,' Salamon writes.
'Currently, women's breast cancer risks are calculated using questionnaires that ask about factors such as age, ethnicity, family history of breast cancer, breast biopsy history, breast density, when they first had a menstrual period, and — for those who have children — how old they were when they first gave birth. All of these issues can influence breast cancer risk.' In a recent TED talk, Lily Janjigian went over some of the ways that we are addressing women's health right here at MIT. We've heard a lot about this in recent conferences and events where people are excited about clinical improvements with AI. But Janjigian's story is unique – as she points out in her TED talk, she was on a sports team at MIT and ended up developing stress fractures. Then she looked around and asked why rates of injury were so much higher among female athletes. The result was a focus on whether women's health is getting the billing that it deserves. 'We don't know a lot about women's health,' she said, referencing a McKinsey statistic that less than one percent of medical research looks into women's health beyond work on female cancers. 'This ends up (leading to) really unfair outcomes for women.' Then she found MIT's Female Medicine through Machine Learning office, where the focus is on exploring women's health with the new tech and, as Janjigian pointed out, looking at patterns in large data sets for things like endometriosis. She showed how scientists can group three elements of data: genetic data, biological data, and symptoms – and bring those together for diagnosis and patient care. 'AI can finally let us ask questions about women's health that we haven't been able to answer in the past. So why are we not going and deploying it everywhere we can?' The answer, Janjigian suggested, is that in some ways, AI is not a perfect solution. It can extend bias, she noted, with the wrong approach.
'AI systems are a reflection of what and who we choose to value, so let's make sure that we're all part of that from the start,' she said.
More on AI's Power
Good research calls for a deliberative approach, but there's another reason I think AI will be helpful here. It has to do with the attention mechanism, and how traditional research has worked. Stephen Wolfram, for one, is fond of talking about how AI's attention differs from that of humans, and what that means for our use of AI tools. In light of that, the above experts talk about the disparities – how human research has focused on men's issues and men's health. Well, when you're using AI, you're working on the data sets that come in. So there's an opportunity to reduce the bias and bring a broader lens to healthcare in general, and to women's health in particular. In other words, the same capabilities that let AI do great work in radiology diagnosis could help focus in on how certain conditions affect women, with thorough surveys of clinical trials involving women and the resulting solutions. This is something a lot of people are excited about, and we should keep watching as we continue to integrate these solutions into our lives.


Medscape
5 days ago
- Health
- Medscape
Scientists Invent a Literal Thinking Cap
This transcript has been edited for clarity. Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I'm Dr F. Perry Wilson from the Yale School of Medicine. My job (my real job) as a clinical researcher is complex. It's cognitively challenging; there are multiple studies to keep track of, grants and papers to write, a large group of mentees and trainees and staff in the lab to manage. It's emotionally stressful too — recently more than ever, in fact. But if I'm tired, or I ate a bad burrito for lunch, or I get some bad news on a personal level, it's not a crisis. I'm not making life-or-death decisions in a split second. I can take a break, gather myself, prioritize, and come back when I'm feeling better. Not every job has that luxury. A surgeon doesn't get to take a break in the middle of an operation if they feel like they are not at 100%. An air traffic controller can't walk away from ensuring that planes land safely because their kid woke them up in the middle of the night. These jobs and others like them have a unique challenge: a constant cognitive workload in a high-stakes environment. And the problem with constant cognitive work is that your brain can't do it all the time. If you force it to, you start to make mistakes. You can literally get tired of thinking. Think of how the world might change if we knew exactly how overloaded our cognitive processes were. I'm not talking about a subjective rating scale; I'm talking about a way to measure the brain's cognitive output, and to warn us when our ability to keep thinking hard is waning before we make those critical mistakes. We're closer than you think. The standard metric for assessing cognitive workload is the NASA Task Load Index. Yes, that NASA. The Task Load Index is a survey designed to assess how hard a task is. It was originally designed to be used in human-machine interactions, like piloting a spaceship. It's subjective.
It asks you to rate how mentally demanding a task is, how frustrating, how much effort it takes, and so on. Cognitive researchers have used this scale to demonstrate how successive mentally stressful tasks degrade task performance. Science has demonstrated that taking breaks is a good thing. I know — news at 11. The problem with subjective scales, though, is that people have a tough time being objective with them. Astronauts might tell you a task was easier than it really was because they want to be chosen to ride on the rocket. Or a doctor might evaluate a complex surgery as less mentally taxing so they can continue to operate that day. Bringing objectivity to the brain is hard. Sure, you can do an fMRI scan, but sitting inside a metal tube is not conducive to real-world scenarios. You can measure brain fatigue in the real world with an EEG, though. The problem is that an EEG involves wires everywhere. You're tethered. And the goo, the sticky stuff that they use to put the electrodes on your head, is very sensitive to motion. In anywhere but a dedicated neuroscience lab, this isn't going to work. I thought the day of real-time monitoring of cognitive load would be pretty far off because of these limitations, and then I saw this study, appearing this week in the journal Device, from Cell Press. It reimagines the EEG in a way that could honestly be transformational. There's a not-too-distant future when you'll be able to recognize people with highly cognitively intense jobs because they will look something like this. What you're looking at is a completely wireless EEG system. The central tech here is what the researchers call an 'e-tattoo' — but think of it like those temporary tattoos your kids wear. Conductive wires are printed on a thin transparent backing which conforms to the forehead. Electrodes make contact with the skin via a new type of conductive adhesive. The squiggles in the wires allow you to flex and move without breaking connections.
That whole printed setup is made to be disposable; apparently the material cost is something like $20. The blue square is the ghost in the machine, a processor that receives the signals from the electrodes and transmits them, via low-energy Bluetooth, to whatever device you want. It's got a tiny battery inside and lasts for around 28 hours. In other words, even in this prototype phase, you could wear this thing at your cognitively intense job all day. And yeah, you might get a few looks, but the joke will be on them when the algorithm says your brain is full and you need to take a 15-minute rest. Of course, cool tech like this is only cool if it actually works, so let's take a look at those metrics. The first thing to test was whether the device could perform as well as an EEG on a simple task. Six adults were recruited and wore the tattoo at the same time as a conventional EEG. They were then asked to open and close their eyes. There's a standard finding here that with eyes closed, alpha frequencies, mid-range brain oscillations, dominate. You can see the patterns recorded by the standard EEG and the new tattoo system here. They are basically indistinguishable. But the tattoo system, with its flexible design, offers some particular advantages. One of the problems with conventional EEGs is how sensitive they are to motion. You turn your head, you get a bunch of noise. Walk around, and the signal becomes useless. Not so with the tattoo. These graphs show the electronic noise levels when the participant was doing various motions. Broadly speaking, you can see that the tattoo continues providing solid, reliable recordings even when walking or running, while the EEG goes all over the place with noise. The only exception to this was with eyebrow raising — maybe not surprising because the tattoo goes on the forehead. But I didn't start off telling you we have a new flexible EEG tech. I told you we had tech that could quantify our cognitive load. 
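The eyes-closed alpha dominance used to validate the device is easy to illustrate numerically. Below is a minimal sketch — my illustration with synthetic signals, not the study's pipeline — that estimates alpha-band (8–12 Hz) power with an FFT; the 250 Hz sampling rate and noise levels are assumptions.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Estimate signal power in the [lo, hi] Hz band via the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

fs = 250  # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "eyes closed" trace: a strong 10 Hz alpha rhythm plus noise
eyes_closed = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
# Synthetic "eyes open" trace: noise only
eyes_open = rng.normal(0, 0.5, t.size)

alpha_closed = band_power(eyes_closed, fs, 8, 12)
alpha_open = band_power(eyes_open, fs, 8, 12)
print(alpha_closed > alpha_open)  # alpha power dominates with eyes closed
```

In the study, both the conventional EEG and the e-tattoo should show this same alpha-band jump when the eyes close; the point of the comparison was that the two recordings were essentially indistinguishable.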
Here's how they tested this. In the lab, they had their volunteers do a cognitive task called the N-back test. It starts at level 0. Basically, they ask you to click a button whenever you see the letter Q or something. Easy. Level 1 is a bit harder. You have to click the button when the image on the screen matches, in either location or content, the image from one screen ago — one image back. Get it? Level 2 is even harder. You click when the current image matches, in content or location, the image from two screens ago. Level 3 gets really stressful. You have to click when you see something that matches three screens ago. And, of course, this keeps going, so you have to keep this information in your memory as the test continues. It's hard. It taxes the brain. Here are the results on the NASA survey scale. This is what the participants reported as to how mentally taxed they were. As the N gets higher, the cognitive stress gets higher. So the system works. The participants, you won't be surprised to hear, performed worse as the N increased. At higher N, the detection rate — the rate at which matches were appropriately clicked — declined. The reaction time increased. False alarms went up. All hallmarks of cognitive stress. And the e-tattoo could tell. Feeding its wireless output into a machine learning model, the researchers could predict the level of cognitive stress the participant was under. They show the results for the participant where the system worked the best — a bit of cherry-picking, certainly, but it will illustrate the point. The blue line indicates what level of the N-back test the participant was actually taking. The red line is what the machine learning model thought the participant was doing, just from reading their brain waves. They match pretty well. Again, that was just the time the experiment worked best. The overall results aren't quite as good, with a weighted accuracy metric ranging from 65% to 74% depending on the subject. 
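The escalating N-back logic described above can be sketched in a few lines. This is a toy, content-only version (the actual task also matches on screen location, and level 0 is a fixed-target variant) that flags which presentations count as matches.

```python
from collections import deque

def n_back_targets(stimuli, n):
    """Return one boolean per stimulus: True where it matches the item n steps back."""
    history = deque(maxlen=n)  # sliding window of the last n stimuli
    targets = []
    for item in stimuli:
        # A hit requires a full window and a match with the item n screens ago
        targets.append(len(history) == n and history[0] == item)
        history.append(item)
    return targets

letters = ["Q", "K", "Q", "K", "Q", "A", "Q"]
# 2-back: True wherever the letter equals the one shown two screens earlier
print(n_back_targets(letters, 2))
```

Running the same stimulus stream at higher n shows why the task gets brutal: the participant must hold an ever-longer window in working memory while the matches themselves become sparser.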
Clearly better than chance, but not perfect. Still, these are early days. It seems to me that the researchers here have solved a major problem with monitoring people doing cognitively intense tasks — a way to read brain waves that does not completely interfere with the task itself. That's a big hurdle. As for the accuracy, even an imperfect system may be better than what we have now, since what we have now is nothing. But I have no doubt that with more data and refinement, accuracy will increase here. When it does, the next step will be to test whether using these systems on the job — in air traffic control towers, in operating rooms, in spaceships — will lead to more awareness of cognitive strain, more rest when it is needed, and better decision-making in the heat of the moment.
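The article doesn't define the "weighted accuracy" metric behind the 65%–74% figures, but one common choice for multi-class problems like predicting the N-back level is balanced accuracy: the mean of per-class recall, so no single class dominates the score. A minimal sketch of that metric — my illustration, not necessarily the paper's exact definition:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally,
    so a majority-class guesser can't inflate the score."""
    classes = set(y_true)
    recalls = []
    for c in classes:
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        recalls.append(hits / total)
    return sum(recalls) / len(recalls)

# Toy check: always guessing N-back level 0 looks 50% "accurate" on this
# imbalanced run, but balanced accuracy exposes the failure.
y_true = [0, 0, 0, 1, 2, 3]
y_pred = [0, 0, 0, 0, 0, 0]
print(balanced_accuracy(y_true, y_pred))  # 0.25: perfect on class 0, zero on the rest
```

Whatever the exact weighting used, the reported 65%–74% sits well above the 25% chance level of a four-level task, which is the comparison that matters at this stage.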


Medscape
20-05-2025
- Health
- Medscape
Biologics, Targeted Therapies Best for Acute PsA Dactylitis
Biologic or targeted synthetic disease-modifying antirheumatic drugs (DMARDs) seemed to be the most effective treatments for patients with acute psoriatic arthritis (PsA) dactylitis. METHODOLOGY: In a prospective observational cohort, the median age of 1735 patients with PsA was 44.9 years, and 55% were men. During a median follow-up of 12.1 years, dactylitis occurred in 753 (43.4%) patients and the remaining 982 (56.6%) never had it. Among the patients with dactylitis, 295 (39.2%) were treated with nonsteroidal anti-inflammatory drugs (NSAIDs), 273 (36.3%) with conventional synthetic DMARDs, 100 (13.3%) with biologic or targeted synthetic DMARDs, 64 (8.5%) with glucocorticoid injection and a change in DMARD therapy, and 21 (2.8%) with glucocorticoid injection. TAKEAWAY: Univariable and multivariable analyses showed that the most significant clinical factors associated with acute PsA dactylitis included younger age, male sex, the presence of nail disease, a higher Clinical Disease Activity Index for Psoriatic Arthritis score and a higher modified Steinbrocker score for radiographic damage. The median time to resolution of acute symptoms was 0.8 months. Treatment with biologic or targeted synthetic DMARDs was associated with faster resolution of acute PsA dactylitis symptoms and longer time to recurrence than other therapies, including conventional synthetic DMARDs and NSAIDs. One third of patients experienced recurrence of acute PsA dactylitis following treatment, with a median time to recurrence of 2 years. IN PRACTICE: 'Acute dactylitis is common in about 43% of patients with psoriatic arthritis and considered a marker of disease severity, but we saw that treatment with biological or targeted DMARDS was associated with the best outcomes in terms of faster resolution of symptoms and prevention of recurrence,' Fadi Kharouf, MD, clinical research fellow at the University of Toronto/University Health Network, Toronto, Ontario, Canada, said in an interview. 
SOURCE: Kharouf presented the study at the Spondyloarthritis Research and Treatment Network (SPARTAN) 2025 Annual Meeting in Toronto, Ontario, Canada. LIMITATIONS: No limitations were reported. DISCLOSURES: The study was conducted as part of the Gladman Krembil Psoriatic Arthritis Program, which is supported by the Krembil Foundation and the Schroeder Arthritis Institute. Kharouf disclosed being supported by a fellowship from the Krembil Foundation.
Yahoo
19-05-2025
- Business
- Yahoo
Responsible Conduct of Research (RCR): Good Writing Practice (GWP) Training Course - Ethical Standards, Data Integrity, and Plagiarism Prevention (ONLINE EVENT: June 2, 2025)
Dublin, May 19, 2025 (GLOBE NEWSWIRE) -- The "Responsible Conduct of Research (RCR): Good Writing Practice (GWP) Training Course" training has been added to offering. This course covers key principles of medical and scientific writing, including ethical standards, data integrity, and plagiarism prevention. Participants will learn how to tailor content for various audiences, structure documents (e.g., manuscripts, protocols), and adhere to writing standards such as clarity, consistency, and correct citation. The course also introduces tools for quality control and regulatory compliance.
Benefits of attending:
- Understand Good Writing Practice (GWP) standards
- Create clear, accurate, and well-structured scientific documents
- Be compliant with regulatory and industry requirements
Who Should Attend:
- Clinical research professionals, including clinical research associates, investigators, and study coordinators
- Medical writers, regulatory writers, and publication professionals
- Quality assurance specialists and compliance officers in pharmaceutical and clinical research organisations
- Anyone involved in clinical trial documentation, regulatory submissions, or scientific publications seeking to improve their understanding of good practices and ethical guidelines
For more information about this training visit
About
is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
CONTACT: Laura Wood, Senior Press Manager, press@
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900


Forbes
16-05-2025
- Business
- Forbes
AI Can't Fix Clinical Trials Without The Right People, New Parexel Report Finds
Parexel's research found 6% of surveyed executives say their organizations are "exceptionally prepared" to handle AI-driven drug development, while more than 50% named AI training as a high priority. The potential of artificial intelligence in pharmaceutical research is intriguing. AI software is making its way into clinical research workflows from predictive analytics through to pharmacovigilance. However, based on new research from global clinical research organization Parexel — and the company's Chief Business Officer, Keri Mattox — there is a much more important variable determining trial success: humans. Published Thursday, Parexel's staffing report provides one of the fullest pictures to date of how the biopharma sector is rising to the challenge of technological disruption. Based on a global survey of 501 professionals, the report looks at where AI is being utilized, what is successful and where talent deficits could hinder advancement. 'There's an explosion of data across the clinical landscape,' Mattox said during a Thursday fireside chat at the Financial Times U.S. Pharma and Biotech Summit. 'But that data only becomes useful if you have a workforce that knows how to harness it — critically, thoughtfully and in service of the patient.' AI use is a bit patchy within the CRO space right now, Parexel's study finds. Fewer than 40% of respondents reported regular use of AI tools across their clinical workflows. Enthusiasm is growing, nonetheless, particularly for narrow but high-value use cases such as automated site reports, regulatory monitoring bots and early-stage data synthesis. Mattox gave one unexpectedly simple example: 'There are regulatory websites that won't push updates to us. Someone had to sit and manually refresh those pages. Now, we've got AI bots scanning them in real time. That small automation translates into measurable acceleration when you multiply it across thousands of trials.' This push is not surface-level.
With clinical trial complexity and costs on the rise, AI offers a chance to shift from reactive to proactive R&D — if businesses invest in the talent to make it happen. Mattox's sentiments mirrored the implicit message of Parexel's report: the biopharmaceutical industry is not yet ready to leverage AI to its fullest capabilities. Just 6% of surveyed executives feel their organizations are "exceptionally prepared" to handle AI-driven drug development. Over half indicated that training in AI capabilities is now a high priority. 'More than half of [biopharma leaders] say that AI experts are going to be the most important role to fill in the next three to five years,' Mattox noted. 'And that training them to actually use AI is a top priority. It's not just about teaching someone how to use a tool. It's training people to know when that tool adds value, when it doesn't and how to interpret what comes out of it.' According to the Parexel study, AI is proving to be a useful tool for clinical research; however, there is a near-term need for more and better "human-in-the-loop" AI training in the sector. Contrary to headline fears that robots will displace scientists, the near-term reality is more collaborative. Parexel's report substantiates the notion that AI will thrive in "human-in-the-loop" systems where humans remain responsible for verifying conclusions, identifying biases and guiding outcomes. Mattox echoed the same sentiment. 'AI isn't a substitute for clinical expertise. It's a new arrow in the quiver — a powerful one — but its value depends entirely on how well humans can engage with it. That's why we're focused on upskilling at every level of the organization.' Surprisingly, the report found that frontline workers are more proficient in entering data than in understanding AI-generated outputs. This asymmetry in capability comes with some risk — most particularly as regulatory processes become increasingly data-intensive and dispersed.
AI is also reinventing classic job roles. Parexel's survey says "multi-fluent" roles — roles that combine clinical, technical and data fluency — will be the future norm. In addition to bringing in AI specialists, organizations need to develop staff who can manage ambiguity, work across functions and identify patterns that machines can't. 'We're seeing AI added to nearly every role, not just isolated to new job titles,' Mattox explained. 'Junior team members are using AI to sift through thousands of pages of data, while senior leaders are being trained to make final decisions based on AI-assisted insights.' Yet that uptake is not unqualified. Sponsors are being careful about introducing AI into data sets that are already under regulatory scrutiny, Mattox indicated. 'There's not a fear of AI per se, but there's real sensitivity around when and where it gets used — especially if it could influence submission outcomes.' In addition to tech savviness, the report identifies a second, less apparent revolution – culture. Top-performing organizations aren't only investing in AI technologies; they're building cultures in which continuous learning, mentorship and meaning flourish. Parexel's research demonstrates that face-to-face, hands-on training leads to greater retention and more successful outcomes than web-based modules by themselves. 'We're hearing from experts across the field that the best way to train people is in person. It's hands-on mentoring. And the companies doing that are retaining those workers better,' Mattox explained. That clarity of purpose is more than an inspirational mantra. It's a retention strategy in an open-talent market where AI-enabled workers are in increasingly high demand across industries.
'We tie everything — AI adoption included — back to our core mission of getting therapies to patients faster,' she said. 'If we can show that you can close a trial 35% faster using AI at three touchpoints, even the skeptics become supporters.' As sponsors demand more efficient, adaptive and patient-centric trials, CROs such as Parexel are evolving from service providers into talent engines. Regardless of whether full-service outsourcing or hybrid approaches are used, the capability to quickly mobilize trained, AI-enabled talent is emerging as a strategic differentiator. Finally, the report and Mattox's comments converge on one point: AI can drive the future of clinical development, yet human capacity will dictate how quickly — and how far — we get there.