
Latest news with #Patterns

Kelsea Ballerini Delivers an Acoustic Set to Celebrate ELLE's Women in Music Issue

Elle

4 days ago

  • Entertainment
  • Elle

To celebrate ELLE's Women in Music issue, some of music's brightest stars came together for an intimate dinner at Chez Margaux in New York City. The evening was co-hosted by ELLE editor-in-chief Nina Garcia and Fendi Chief Communication Officer Cristiana Monfardini, and attendees included Ryan Destiny, Dora Jar, Charlotte Lawrence, Muni Long, Grace VanderWaal, Maria Zardoya of The Marías, Blu DeTiger, Frawley, and more. '[Our Women in Music issue] is one of my favorites to put together,' Garcia said in her toast at the beginning of the night. 'It's really about identifying and celebrating the women whose music and voices are changing the industry and influencing our culture.'

Guests were then invited to enjoy a family-style meal featuring an assortment of dishes from chef Jean-Georges, including fresh salads, gourmet pizzas, prime tenderloin, crispy French fries, and classic rigatoni pomodoro. For the perfect nightcap, ELLE's digital cover star Kelsea Ballerini performed a cover of 'Dreams' by Fleetwood Mac. Garcia praised the country singer, who just finished a tour and wrapped up a season of The Voice. 'She's not only talented and beautiful, but also an incredibly gifted musician and writer,' she said. 'We are so happy to have you here.'

A known advocate for women's and LGBTQ+ rights, Ballerini was a natural fit for ELLE's Women in Music issue. On her album Patterns, she worked with an all-female songwriting team, and she even brought drag queens onstage for her performance at the CMT Music Awards. In her cover story, the country star said, 'I never really was loud about anything for a really long time, because I just had to get my footing. And then I was like, "At the end of the day, I want the people who listen to my music to know what I stand for and hopefully align with it."'

Dessert brought the evening to a sweet close, featuring rich chocolate cake, fresh fruit, and coffee.

Dreamers Investment Guild Commemorates 15 Years With Founding Speech by Sterling Preston

Business Upturn

5 days ago

  • Business
  • Business Upturn

New York, NY, May 28, 2025 (GLOBE NEWSWIRE) — Dreamers Investment Guild today hosted a commemorative event marking its 15th year as a leader in cognitive investment education. The focal point of the ceremony was a keynote speech delivered by founder Sterling Preston, offering a rare behind-the-scenes narrative on the platform's original purpose, strategic challenges, and long-term mission to redefine how investment knowledge is understood and taught.

Preston's address, delivered before an audience of Guild members, global educators, and institutional partners, traced the journey from a single curriculum prototype in 2010 to today's multi-continent education infrastructure spanning 42 countries and thousands of learners. He emphasized that the Guild was never intended to be a financial product company, but rather an 'engine for thinking clarity under market pressure.'

'When Dreamers Investment Guild was founded, most people were chasing hot tips and headlines,' said Preston. 'What was missing, and still is in many places, is the ability to structure information, judge relevance, and hold conviction under uncertainty. That's what we set out to build: a place where investment is taught as a cognitive responsibility.'

During his remarks, Preston outlined three defining stages in the Guild's history:

  • Disruption by Discipline (2010–2014): launch of the first structured investment logic curriculum, focusing on market cycles, risk asymmetry, and probability judgment.
  • Systematization and Global Access (2015–2019): expansion into a modular learning architecture, translation into six languages, and onboarding of institutional learners.
  • Strategic Deepening and Identity (2020–2025): emphasis on layered thinking, real-time feedback models, and the development of decision calibration tools rooted in behavioral data.

He also used the occasion to announce the release of a founder's essay collection titled 'Patterns, Pressure, and Patience: Notes from the Cognitive Edge', which includes unpublished reflections, early drafts of Guild frameworks, and insight into the platform's long-term educational design logic.

As part of the commemorative event, members were invited to participate in live forums reflecting on their own investment learning journeys. The event concluded with the symbolic unveiling of the Guild's updated mission statement: 'To make clarity a skill and strategy a structure, in every investment mind that seeks to understand.' The speech was broadcast live across the Guild's global hubs and will be archived on its institutional portal along with subtitled translations and a downloadable transcript.

About Dreamers Investment Guild: Founded in 2010, Dreamers Investment Guild is a global cognitive investment education platform that emphasizes structured learning, behavioral analysis, and long-term reasoning. Under the direction of Sterling Preston, the Guild continues to redefine investment education by helping individuals build clarity, strategy, and cognitive resilience in complex market environments.

Disclaimer: The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended you practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency and securities.

Mucem's Marseille exhibition showcases Morocco's Amazigh heritage

Ya Biladi

19-05-2025

  • Entertainment
  • Ya Biladi

The exhibition 'Amazighes. Cycles, Adornments, Patterns', organized by the Museum of European and Mediterranean Civilizations (Mucem) in Marseille in collaboration with the Jardin Majorelle Foundation in Marrakech, is open to the public until November 2. It offers a unique exploration of Moroccan Amazigh culture through 150 objects and artworks spanning from the 19th century to the present day. The exhibition aims to provide a rich and multifaceted perspective on the Amazigh world, a culture that dates back to the Neolithic era and extended across a vast territory in North Africa, from Egypt to Morocco and as far as the Canary Islands.

Curated by Moroccan architect and anthropologist Salima Naji and Alexis Sornin, director of the Jardin Majorelle museums, the exhibition highlights the richness and diversity of Amazigh symbolism and emphasizes the importance of cultural transmission. It showcases contemporary initiatives that support this mission, such as the work of Myriem Naji, who documents and shares traditional artisanal techniques, and Amina Agueznay, who collaborates with weavers to incorporate Amazigh symbols into her creations.

On display are pieces including jewelry, ceramics, textiles, basketry, sculptures, tools, photographs, videos, and installations. The majority of the works come from the collections of the Pierre Bergé Museum of Berber Arts at the Jardin Majorelle Foundation in Marrakech and Mucem, along with contributions from public and private collections in the Canary Islands, Morocco, and France.

AI health warning as researchers say algorithms could discriminate against patients

The Independent

12-04-2025

  • Health
  • The Independent

Artificial intelligence in healthcare has left experts urging caution that a focus on predictive accuracy over treatment efficacy could lead to patient harm. Researchers in the Netherlands warn that while AI-driven outcome prediction models (OPMs) are promising, they risk creating 'self-fulfilling prophecies' due to biases in historical data.

OPMs use patient-specific information, including health history and lifestyle factors, to help doctors evaluate treatment options, and AI's ability to process this data in real time offers significant advantages for clinical decision-making. However, the researchers' mathematical models demonstrate a potential downside: if trained on data reflecting historical disparities in treatment or demographics, AI could perpetuate these inequalities, leading to suboptimal patient outcomes.

The study highlights the crucial role of human oversight in AI-driven healthcare. The researchers emphasise the 'inherent importance' of applying 'human reasoning' to AI's decisions, ensuring that algorithmic predictions are critically evaluated and do not inadvertently reinforce existing biases. The team created mathematical scenarios to test how AI may harm patient health and concluded that these models 'can lead to harm'.

'Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,' the researchers said. 'We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment. These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.'

The article, published in the data-science journal Patterns, also suggests that AI model development 'needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome'.

Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire's department of computer science, said: 'This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics. These models will accurately predict poor outcomes for patients in these demographics.

'This creates a "self-fulfilling prophecy" if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them. Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes.

'Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background. This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgments.'

AI is currently used across the NHS in England to help clinicians read X-rays and CT scans, freeing up staff time, and to speed up the diagnosis of strokes. In January, Prime Minister Sir Keir Starmer pledged that the UK will be an 'AI superpower' and said the technology could be used to tackle NHS waiting lists.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, noted that AI OPMs 'are not that widely used at the moment in the NHS'. 'Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,' he said.

Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: 'While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.

'Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness. Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy. However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes. Patients labelled by the algorithm as having a "poor predicted recovery" receive less attention, fewer physiotherapy sessions and less encouragement overall.'

He added that this leads to a slower recovery, more pain and reduced mobility in some patients. 'These are real issues affecting AI development in the UK,' Prof Harrison said.
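The mechanism the researchers describe is statistical, and a short simulation makes it concrete. The sketch below is a toy illustration with invented numbers, not code or data from the Patterns study: a model that faithfully reproduces historically biased risk estimates is used to withhold treatment, the group it flags then genuinely fares worse, and the model's post-deployment ranking of outcomes still looks respectable.

```python
# Toy simulation (illustrative only; not code from the Patterns study).
# A risk model that reproduces historically biased outcome data is used to
# withhold treatment, which then causes the disparity it predicted, while
# the model's post-deployment ranking of patients still looks discriminative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Group 1 was historically under-treated, so the training data records
# worse outcomes for it for reasons unrelated to biology.
group = rng.integers(0, 2, n)
severity = rng.random(n)                              # true illness severity
predicted_risk = 0.1 + 0.3 * severity + 0.3 * group   # model fits the biased data

# Deployment policy: withhold intensive treatment from "high-risk" patients.
treated = predicted_risk < 0.4                        # group 1 is denied treatment

# True mechanism: treatment helps both groups equally.
true_risk = 0.1 + 0.3 * severity - 0.1 * treated
outcome_bad = rng.random(n) < true_risk

for g in (0, 1):
    rate = outcome_bad[group == g].mean()
    print(f"group {g}: bad-outcome rate {rate:.2f}")  # group 1 now really does worse

# Concordance after deployment: a randomly chosen harmed patient still tends
# to carry a higher predicted risk than an unharmed one, so the prediction
# "looks" accurate even though acting on it created the harm.
pos = rng.choice(predicted_risk[outcome_bad], 2000)
neg = rng.choice(predicted_risk[~outcome_bad], 2000)
print(f"post-deployment concordance: {(pos[:, None] > neg[None, :]).mean():.2f}")
```

In this toy setup the printed concordance stays around 0.65, comfortably above chance, even though the deployment policy itself is what produced group 1's worse outcomes, which mirrors the study's claim that good discrimination after deployment does not rule out harm.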

AI could lead to patient harm, researchers suggest

Yahoo

11-04-2025

  • Health
  • Yahoo

Artificial intelligence (AI) could lead to patient harm if the development of models is focused more on accurately predicting outcomes than on treatment, researchers have suggested. Experts warned the technology could create 'self-fulfilling prophecies' when trained on historic data that does not account for demographics or the under-treatment of certain medical conditions. They added that the findings highlight the 'inherent importance' of applying 'human reasoning' to AI decisions.

Academics in the Netherlands looked at outcome prediction models (OPMs), which use a patient's individual features, such as health history and lifestyle information, to help medics weigh up the benefits and risks of treatment. AI can perform these tasks in real time to further support clinical decision-making. The team then created mathematical scenarios to test how AI may harm patient health and concluded that these models 'can lead to harm'.

'Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,' researchers said. 'We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment. These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.'

The article, published in the data-science journal Patterns, also suggests that AI model development 'needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome'.

Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire's department of computer science, said: 'This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics. These models will accurately predict poor outcomes for patients in these demographics.

'This creates a "self-fulfilling prophecy" if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them. Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes.

'Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background. This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgments.'

AI is currently used across the NHS in England to help clinicians read X-rays and CT scans, freeing up staff time, and to speed up the diagnosis of strokes. In January, Prime Minister Sir Keir Starmer pledged that the UK will be an 'AI superpower' and said the technology could be used to tackle NHS waiting lists.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, noted that AI OPMs 'are not that widely used at the moment in the NHS'. 'Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,' he said.

Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: 'While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.

'Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness. Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy. However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes. Patients labelled by the algorithm as having a "poor predicted recovery" receive less attention, fewer physiotherapy sessions and less encouragement overall.'

He added that this leads to a slower recovery, more pain and reduced mobility in some patients. 'These are real issues affecting AI development in the UK,' Prof Harrison said.
