logo
#

Latest news with #CBS60Minutes

Virginia Giuffre's lawyer never thought Epstein accuser would take her own life
Virginia Giuffre's lawyer never thought Epstein accuser would take her own life

Metro

time12-05-2025

  • Metro

Virginia Giuffre's lawyer never thought Epstein accuser would take her own life

To view this video please enable JavaScript, and consider upgrading to a web browser that supports HTML5 video Virginia Giuffre's lawyer has said she never feared the sex abuse survivor would take her own life. Sigrid McCawley fought back tears in a 60 Minutes interview on Sunday night as she struggled to come to terms with her client's suicide. She is the second of Giuffre's lawyers to say there were no signs the Epstein accuser wanted to end her life in the weeks leading up to her death. Her lawyer, Karrie Louden, claimed there were 'big question marks' over her death. Giuffre, who settled a sexual assault lawsuit against Prince Andrew, was found dead at her home in Western Australia on April 25. Her family said she died after being a 'lifelong victim of sexual abuse and sex trafficking', with the 'toll of abuse' becoming Giuffre's attorney McCawley, has now revealed her shock and devastation at the sudden suicide, saying she reacted with 'overwhelming surprise and disbelief. True disbelief.' She told CBS 60 Minutes: 'It took me several hours to even come to terms with the fact that that was real. She was just a dear person in my life. And I think that the world will not be the same without her. It just won't be.' When asked if she was concerned Virginia would take her own life, she responded, 'I did not fear that'. The lawyer added: 'Virginia is always someone to rally. So every time I talk to her, she could find the sunny side of something. 'That's why I think that the disbelief has been so strong. 'I think that while Virginia could face many demons in her life and many villains, that moment of deprivation, I think, was something more than she could handle.' The pair were close friends as well as working in a working relationship. McCawley emotionally recalled when she realised Giuffre valued her as more than her lawyer. 'I used to say that we had broken through the lawyer-client line because she would sign her emails, 'I love you Siggy',' she said. 
Then less than a month before her death, Giuffre posted on Instagram that she had been in a road accident and had four days to live due to 'kidney renal failure. This led McCawley to be concerned about her physical health. Western Australia Police said they received a report of a 'minor crash' and that there were 'no reported injuries as a result'. McCawley's interview came after Giuffre's dad Sky Roberts, said there was 'no way' Ms Giuffre died by suicide. He alleged to Piers Morgan Uncensored at the beginning of this mouth that there was foul play involved, adding: 'For them to say that she committed suicide, there's no way she did that. Somebody got to her. 'I don't think she committed suicide.' Giuffre was one of the most outspoken accusers of convicted sex offenders Jeffrey Epstein and his former girlfriend, Ghislaine Maxwell. She alleged they trafficked her to Prince Andrew when she was 17, a claim which he has denied. Giuffre sued him for allegedly sexually assaulting her when she was 17 after she was trafficked by paedophile financier Jeffrey Epstein. The Duke paid millions to settle the civil sexual assault case, which he said he had never met. McCawley referred to these legal battles when praising Giuffre's legacy. She said: 'She was just remarkable. More Trending 'She has left us with a feeling that irrespective of whether you're a president, a politician, a billionaire or a prince, you can be held accountable. You are not above the law.' 'She put Epstein in prison. She put Maxwell in prison. 'She had Prince Andrew stripped of his titles. 'Her words, her actions were incredible, and they started a movement of change.' Get in touch with our news team by emailing us at webnews@ For more stories like this, check our news page. 
MORE: Virginia Giuffre's heartbreaking final letter released by her family MORE: Australian activist suggests Prince Andrew accuser Virginia Giuffre was killed MORE: Virginia Giuffre's death is 'tragic chance for Andrew to reveal all on Epstein'

John Oliver on Trump deportations: ‘usually blatantly racist and always cruel'
John Oliver on Trump deportations: ‘usually blatantly racist and always cruel'

The Guardian

time05-05-2025

  • Politics
  • The Guardian

John Oliver on Trump deportations: ‘usually blatantly racist and always cruel'

John Oliver took a deep dive through the Trump administration's brutal and bewildering campaign of deportations on Sunday evening, starting with the White House's 'nauseating social media posts'. Posts to the official White House Instagram account include a video of shackled people led on to a plane soundtracked to the song Closing Time by Semisonic, along with the caption 'you don't have to go home but you can't stay here.' The track 'obviously isn't the right song choice', the Last Week Tonight host said. 'The right song choice would be no song at all, because deportation Instagram reel is a combination of words that should never exist, like 'Oscar winner Mr Beast' or 'Stephen Miller nudes' or 'Bill Belichick speaks about his relationship with 24-year-old girlfriend.'' (Semisonic has denounced the choice of the song.) The video underscored one of Oliver's key points: 'For all this administration's talk of prioritizing hardened criminals, in practice it seems to value speed, volume and spectacle over all else.' Though Trump's administration has claimed to focus on 'violent criminals', CBS 60 Minutes was unable to find criminal records for over 75% of 238 migrants sent to a Salvadorian prison, and the government even conceded that one man, Kilmar Ábrego García, was sent there due to an 'administrative error'. 'For weeks now, it has been scrambling to come up with reasons why it was OK to send that man to a foreign prison,' said Oliver, 'which has been hard for them to do, given that it had a court order protecting him from deportation to El Salvador and no criminal record.' So Trump posted an image on social media of a photo of Ábrego García's hand with markups attempting to show that his tattoos indicated that he was a member of the gang MS-13. And in an interview with the ABC News correspondent Terry Moran pegged to his first 100 days in office, Trump tried to argue that the clearly superimposed text of 'MS-13' were actually tattooed on Ábrego García's hand. 
Oliver played the 'absolutely incredible' 90-second clip in full before responding himself: 'Terry, Terry, Terry, you're in hell, Terry. Terry, this is hell right now. I'm genuinely shocked Trump doesn't drink alcohol because that is the most 'drunk at an Ihop' conversation I think I've ever heard. 'And no disrespect to Terry, but maybe don't move on from that,' he continued. 'I know you've got other questions to get to, but if the president of the United States is trying to tell you that this amateur-hour Photoshop is real, let him go get the picture and make him say it again. Point to that Helvetica-looking 'M', and make the president say, 'Yes, I believe that artless M that's weirdly clearer and darker than all the other tattoos is real.' Make him say I believe that man went to a tattoo parlor and said, 'The skull's pretty spooky, but what I'd really like is a neatly aligned '3' directly on the bone of my knuckle, and can you please make it so that it doesn't stretch or bend with the natural curves of the human hand and also make it look like a typewriter did it?' 'Because, Terry, sometimes when Trump's doing his normal racist blue sky, you do need to cut him off to slow the flow of hatred into the world,' he added. 'But if he wants to tell America that this laughably doctored picture is evidence of a major threat to American safety, you have an obligation to let the man cook. 'And for what it's worth, if Trump's going to hash out those claims, he probably should be doing that in court, not on TV, and after he's already shipped someone off to a foreign prison,' he continued. 'But Ábrego García is just one of many horrifying stories surrounding immigration right now,' as the administration has embarked on a fear-based crackdown with blatant disregard for the rule of law. In the first 100 days of his term, Trump's administration undertook 181 immigration-specific executive actions – a sixfold increase over that same period in his first term. 
To do so, it has bent arcane laws and scoured databases to absurd ends. Oliver pointed to the case of Suguru Onda, a PhD student at Brigham Young University in Utah, who had his legal status revoked after appearing on a criminal records database by Immigration and Customs Enforcement (Ice). Onda, who is from Japan, had no criminal charges, just two speeding tickets and a citation for catching one too many fish. 'That is ridiculous,' Oliver fumed. 'If you can be flagged for deportation for catching one too many fish, then I truly fear for Henry Winkler. We could be just days away from seeing him in an El Salvador prison, which I'm sure the White House will then justify by badly Photoshopping an MS-13 tattoo on to his neck.' Ice later reversed the decision on Onda's legal status, 'but this all feels like the inevitable result of a campaign that fearmongered about an epidemic of so-called migrant crime which, as we've discussed before, was wildly overblown', Oliver explained. 'But having promised mass deportations and even printed signs for people to wave around demanding them, they're now scrambling to deliver.' According to multiple reports, the administration has instructed Ice officials to ramp up arrests to 1,200-1,500 people a day, and no longer target the supposed 'worst offenders' first. 'What the administration is doing is sometimes targeted, sometimes arbitrary, usually blatantly racist and always cruel,' said Oliver, such as deporting a child back to Honduras without his medication for stage four cancer. The cruelty is 'the heart of all of this', Oliver detailed, 'which is Trump loudly selling his supporters the lie that he'll protect them from existential threats, only to further government overreach and state violence while deporting makeup artists, unlucky soccer fans and four-year-olds with cancer'. The host called for pressure on elected officials to try to stop Trump's illegal overreach. 
'To their credit, a number of prominent Democrats have gone to El Salvador to call attention to this,' he said. 'Which is definitely preferable to the approach others have taken.' He cited anonymous House Democrats quoted as asking, 'Should it be the big issue for Democrats? Probably not,' and 'complaining that rather than talking about the tariff policy and the economy, we're going to go take the bait for one hairdresser? 'Which is absolutely enraging,' he continued, 'especially as many voters do seem to get the clear problem with deporting people without due process to a prison for life, even in red states.' Oliver urged viewers to call their representatives and make them aware of public opinion. 'It can make a difference,' he said, pointing to the former supreme court chief justice William Rehnquist's assertion that, 'no honorable judge would ever cast his vote because he thought the majority of the public wanted him to vote that way but that in certain cases, judges are undeniably influenced by the great tides of public opinion.' 'I would argue the moment we're in right now isn't just worthy of a great tide,' Oliver concluded. 'It is worthy of a fucking tsunami because this is an absolute outrage and it is one where it is important to remind our elected leaders that all people are worthy of safety, protection and due process.'

Pope Francis did a lot of good, but not for women
Pope Francis did a lot of good, but not for women

Yahoo

time26-04-2025

  • Politics
  • Yahoo

Pope Francis did a lot of good, but not for women

Going to Catholic school, we all knew our patron saints. Mine happened to be St. Francis of Assisi. Even at a young age, I thought it was a good fit due to my Italian heritage and my love and appreciation of nature. During my youth I spent a lot of time at "The Woods," an undeveloped piece of land on the east side of Detroit where we would play, explore and see all kinds of critters. So when Jorge Mario Bergoglio (another Italian) was selected as pope in 2013 and chose Francis as his papal name in honor of St. Francis of Assisi, I had that same spiritual excitement as an adult. And seeing Pope Francis embody the values of St Francis of Assisi in the following years just made me like him even more and more. His concern for all people and the environment. His humble ways and opting for a life of simplicity instead of embracing the normal trappings of the papacy. His openness on complex issues. His connection with everyday people. Even those who weren't people of faith. He was a man who walked the talk of St. Francis. All this inspired me to become a supporter of his namesake charity in Detroit that helps the homeless, the Pope Francis Center, run by Father Tim McCabe and his dedicated staff. Dio benedica Papa Francesco! We will miss you! Frank Michael Seleno Rochester Hills More opinion: Will papal conclave veer from Pope Francis' legacy of inclusion? Others will offer justifiably high praise for Pope Francis' words and deeds aligned with and promoting Gospel values based on the words and deeds of Jesus. And I will agree with most of it. However, what Pope Francis meant to me is disappointment. In Catholic doctrine, there is literally a required physical and gender criterion to qualify for the ordained priesthood in the image of Christ. 
("Image" meaning solely the maleness of Christ, not Jesus' ethnicity or race or economic status …) Catholic doctrine, in my opinion, deems femaleness as "less than," and thereby females as lacking the necessary inherent value apparently needed for interpreting Holy Scriptures or guiding Catholic theology in any capacity influential in church development or dogma. Women, by gender, are deemed unworthy to represent Christ as priests and are therefore incapable of receiving or fully understanding divine revelation. Pope Francis firmly reinforced the "women are less than" doctrine during a May 2024 CBS 60 Minutes interview. When asked whether females will ever have the opportunity to be a deacon and participate as a clergy member in the Church, Francis, with unequivocal clarity, responded, "No." He added that any role, deacon or priest, that involves the sacrament of Holy Orders is not open to women. Francis' "no" to women's ordination is not just the exclusion of women from a Church role. It is an insupportable and blatant rejection of women's full and equal dignity as acquired through God's gift and grace. The pope's "no" to women not only keeps the ordination door firmly slammed shut (and thereby excludes women from many other Church roles, including any kind of decision-making leadership, selecting a new pope, closing churches, etc.), it also reinforced the locks, strengthened the no-women brackets, moved a massive boulder behind the door and sat an elephant on top of it, for another couple millennia. As I may happen to catch news coverage of Pope Francis' upcoming funeral, I will be reminded of the many ways that he genuinely demonstrated Gospel goodness. 
But as I watch a sea of men preside over the pope's funeral liturgy, as was the case for Mother Teresa's funeral, for example, where the less-than women are not only barely seen and only in very limited numbers and capacities, but by doctrine can have no role in preaching or consecrating or blessing, I will be strongly repelled by the church's females-are-less doctrine, and will feel even more strongly the depth of disappointment in the inconsistent and irreconcilable words of Pope Francis. Disappointment is a selectively chosen, tame term used respectfully at this mournful time in recognition of the passing of a good man, Pope Francis. Truly, may he rest in God's great heavenly peace. Susan Scannell Allen Park More: As a Jesuit, losing Pope Francis felt a lot like losing my dad As we reflect on the tenure of Pope Francis, there is one aspect of his legacy that should not be overlooked: his commitment to environmental justice. Shortly after his appointment in March 2013, climate change became a central issue. 'If we destroy creation, creation will destroy us,' he said. This was a year before the Paris Climate Agreement was adopted. In 2019, he addressed leaders of global fossil fuel companies, imploring them to make a rapid transition to renewable energy sources, going so far as to declare the crisis a 'climate emergency.' In the words of Mark Watts, director of C40 Cities, a global network of mayors committed to climate action, 'he established for a worldwide audience that the climate crisis is not just an environmental challenge but a profound social and ethical issue, exacerbated by greed and short-term profit seeking, disproportionately affecting the world's most marginalized communities.' Luke Daniels Rochester I would like to point out the hypocrisy of President Donald Trump's supposed antisemitic crusade. He is responsible for deporting people because they have expressed disapproval with how Palestinians are treated in the ongoing war with Hamas. 
He has denied funding to universities based upon the perception that these institutions don't do enough to curb what he would describe as antisemitic behavior. However, as no one seems to ever point out, Trump has pardoned Capitol insurrectionists for attempting to facilitate the disruption of certification of a legitimate election. Among those people were those who displayed QAnon flags, wore a Camp Auschwitz (the notorious concentration camp) t-shirt and other antisemitic words and symbols. As is typical of Trump, he selectively punishes those whom he claims are violating some principle when he does so himself George Dziamniski Monongahela, Pennsylvania Submit a letter to the editor at and we may publish it online and in print. If you have a differing view from a letter writer, please feel free to submit a letter of your own in response. Like what you're reading? Please consider supporting local journalism and getting unlimited digital access with a Detroit Free Press subscription. We depend on readers like you. This article originally appeared on Detroit Free Press: Pope Francis did a lot of good, but not for women | Letters

Building AI Foundational Models And Generative AI That Expertly Performs Mental Health Therapy
Building AI Foundational Models And Generative AI That Expertly Performs Mental Health Therapy

Forbes

time22-04-2025

  • Health
  • Forbes

Building AI Foundational Models And Generative AI That Expertly Performs Mental Health Therapy

In today's column, I examine a fast-moving trend involving the development of AI foundational models and generative AI that are specifically tailored to perform mental health therapy. Building such models is no easy feat. Startups are embarking fervently down this rocky pathway, earnestly attracting topnotch VC investments. Academic researchers are trying to pull a rabbit out of a hat to figure out how this might be best accomplished and whether it is truly feasible. It turns out that some of these efforts are genuine and laudable, while others are rather shallow and consist mainly of proverbial smoke-and-mirrors. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). As a quick background, I've been extensively covering and analyzing a myriad of facets about the advent of modern-era AI that ostensibly performs mental health advice and undertakes AI-driven therapy. This rising use of AI has principally been spurred by evolving advances and the widespread adoption of generative AI and large language models (LLMs). There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS 60 Minutes. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis that also recounts a new stellar initiative at the Stanford University School of Psychiatry and Behavioral Sciences called AI4MH, see the link here. Now to the matter at hand. 
One of the Holy Grail's underlying the entirety of AI for mental health consists of designing, building, testing, and fielding AI that is especially tailored for performing mental health therapy. I will take you behind the scenes so that you'll have a semblance of how this is being undertaken. Please know that we are still in the early days on devising robust and full-on AI for mental health advisement. The good news is that there is a lot we already know, which I'll been sharing with you and hopefully might inspire more interested parties to join these efforts. The bad news is that since these are still rough-edged efforts, we don't yet have a clearcut ironclad path ahead. A whole lot of wide-eyed speculation abounds. I'd say that's more of an opportunity and exciting challenge, as opposed to being a mood dampener or showstopper. We need to keep our nose to the grind, use some elbow grease, and get this suitably figured out. Doing so will benefit the mental wherewithal of society all told. Yes, it's indeed that significant. Let's begin by making sure we are all on the same page about the nature of AI foundational models. For any trolls out there, I am going to keep things short and simple by necessity, so please don't get out-of-sorts (I've also included links throughout to my in-depth coverage for those readers that want to learn more). When you use ChatGPT, Claude, Gemini, or any of the various generative AI apps, you are making use of an underlying large language model (LLM). This consists of a large-scale internal structure, typically an extensive artificial neural network (ANN), see my detailed explanation at the link here. Think of this as a massive data structure that houses mathematical patterns associated with human words. How did the LLM land on the patterns associated with human words? 
The usual method is that the AI maker scans data across the Internet to find essays, poems, narratives, and just about any human written materials that are available for perusal. The text is then examined by the AI algorithms to ascertain crucial mathematical and computational patterns found among the words that we employ. With some additional tuning by the AI maker, the LLM becomes generative AI, namely that you enter prompts and the AI responds by generating a hopefully suitable response. I'm assuming you've used at least one or more generative AI apps. If you've done so, I think you would generally agree that responses appear to be amazingly fluent. That's because the pattern matching has done a good job of mathematically and computationally figuring out the associations between words. The underlying base model that serves as a core LLM is commonly referred to as an AI foundational model. It is the foundation upon which the rest of the AI resides. AI makers will construct a foundational model based on their preferences and then produce variations of that model. As an example of the variation's notion, an AI maker might use their foundational model and create a slightly adjusted version that is faster at processing prompts, though perhaps this sacrifices accuracy. For that new version, they market it as their speedy option. Now then, they might have another variant that does in-depth logical reasoning, which is handy. But that version might be slower to respond due to the added computational efforts involved. You get the idea. Each of the generative AI apps that you've used are based on a core foundational model, essentially the LLM that is running the show. Each of the AI makers decides how they are going to design and build their foundational model. There isn't one standard per se that everyone uses. It is up to the AI maker to devise their own, or possibly license someone else's, or opt to use an open-source option, etc. 
An interesting twist is that some believe we've got too much groupthink going on, whereby nearly all the prevailing generative AI apps are pretty much designed and built the same way. Surprisingly, perhaps, most of the AI foundational models are roughly the same. A worry is that we might all be heading toward a dead-end. Maybe the prevailing dominant approach is not going to scale up. Everyone is stridently marching to the same tune. A totally different tune might be needed to reach greater heights of AI. For my coverage of novel ways of crafting AI foundational models that work differently than the conventional means, see the link here and the link here. By and large, there is an AI insider saying about most LLMs and generative AI, which is that the AI is typically a mile long and an inch deep. The gist is that a general-purpose generative AI or LLM is good across the board (said to be a mile long), but not an expert in anything (i.e., it's just an inch deep). The AI has only been broadly data-trained. That's good if you want AI that can engage in everyday conversation. The exquisiteness of this is that you can ask very wide-ranging questions and almost certainly get a cogent answer (well, much of the time, but not all the time). The unfortunate aspect is that you probably won't be able to get suitable answers to deep questions. By deep, I mean asking questions that are based on particular expertise and rooted in a specific domain. Remember that the AI is a mile long but only an inch deep. Once you've probed below that inch of depth, you never know what kind of an answer you will get. Sometimes, the AI will tell you straight out that it can't answer your question, while other times the AI tries to fake an answer and pull the wool over your eyes, see my coverage on this AI trickery at the link here. 
I tend to classify AI foundational models into two major categories: The usual generative AI is in the first category and consists of using a general-purpose AI foundational model. That's what you typically are using. Mile long, inch deep. Suppose that you want to use AI that is more than just an inch deep. You might be seeking to use AI that has in-depth expertise in financial matters, or perhaps contains expertise in the law (see my coverage domain-specific legal AI at the link here), or might have expertise in medicine, and so on. That would be a domain-specific AI foundational model. A domain-specific AI foundational model is purposely designed, built, tested, and fielded to be specific to a chosen domain. Be cautious in assuming that any domain-specific AI foundational model is on par with human experts. Nope. We aren't there yet. That being said, a domain-specific AI foundational model can at times be as good as human experts, possibly even exceed human experts, under some conditions and circumstances. There is a useful survey paper that was posted last year that sought to briefly go over some of the most popular domain-specific AI foundational models in terms of the domains chosen, such as for autonomous driving, mathematical reasoning, finance, law, medicine, and other realms (the paper is entitled 'An overview of domain-specific foundation model: Key technologies, applications and challenges' by Haolong Chen, Hanzhi Chen, Zijian Zhao, Kaifeng Han, Guangxu Zhu, Yichen Zhao, Ying Du, Wei Xu, and Qingjiang Shi, arXiv, September 6, 2024). Keep in mind that daily the domain-specific arena is changing, so you'll need to keep your eyes open on what the latest status is. AI foundational models have four cornerstone ingredients: Those four elements are used in both general-purpose AI foundational models and domain-specific AI foundational models. That is a commonality of their overall design and architectural precepts. 
Currently, since general-purpose models by far outnumber domain-specific ones, the tendency is to simply replicate the guts of a general-purpose model when starting to devise a domain-specific model. It is a lazy belief that there is no sense in reinventing the wheel. You might as well leverage what we already know works well. I have predicted that we will gradually witness a broadening or veering of domain-specific models from the predominant general-purpose ones. This makes abundant sense since a given domain is bound to necessitate significant alterations of the four cornerstone elements that differ from what placates a general-purpose model. Furthermore, we will ultimately have families of domain-specific models. For example, one such family would consist of domain-specific models for mental health therapy. These would be mental health domain-specific AI foundational models that are substantially similar in their domain particulars. Envision a library of those base models. This would allow an AI developer to choose which base they want to use when instantiating a new mental health therapy model. Crafting domain-specific AI foundational models is a bit of a gold rush these days. You can certainly imagine why that would be the case. The aim is to leverage the capabilities of generative AI and LLMs to go beyond answering generic across-the-board questions and be able to delve into the depths of domain-specific questions. People are perturbed when they discover that general-purpose AI foundational models cannot suitably answer their domain-specific questions. The reaction is one of dismay. How come this seemingly smart AI cannot answer questions about how to cure my bladder ailment, or do my taxes, or a slew of other deep-oriented aspects? Users will eventually awaken to the idea that they want and need AI that has in-depth expertise. 
People will even be willing to switch back-and-forth between the topmost general-purpose AI and a particular best-in-class domain-specific AI that befits their needs. There is gold in them thar hills for domain-specific AI, including and especially in the mental health domain. I've got a handy rule for you. The rule is that domain-specific AI foundational models are not all the same. Here's what I mean. The nature of a given domain dictates what should be done regarding the four cornerstone elements of structure, algorithms, data, and interactivity. A finance model is going to be shaped differently than a mental health therapy model. And so on. You would be unwise to merely copy one domain-specific model and assume it will immediately be usable in some other domain. I will in a moment be discussing the unique or standout conditions of a mental health therapy model versus other particular domains. Using ChatGPT, Claude, Gemini, or any of the generative AI that are built on a general-purpose AI foundational model will only get you so far when it comes to seeking mental health advice. Again, the mile long, inch deep problem is afoot. You ask the AI for mental health advice, and it will readily do so. The problem though is that you are getting shallow mental health advice. People don't realize this. They assume that the advice is of the topmost quality. The AI likely leads you down that primrose path. The AI will provide answers that are composed in a manner to appear to be fully professional and wholly competent. A user thinks they just got the best mental health advice that anything or anyone could possibly provide. AI makers have shaped their AI to have that appearance. It is sneaky and untowardly practice. 
Meanwhile, to try to protect themselves, the AI makers' licensing agreements usually indicate in small print that users are not to use the AI for mental health advice and instead are to consult a human therapist; see my discussion on this caginess at the link here and the link here. Worse still, perhaps, at times general-purpose generative AI will produce a heaping of psychobabble, seemingly impressive-sounding mental health advice that is nothing more than babbling psychological words of nothingness; see my analysis at the link here.

Let's consider the big picture on these matters. There are three key avenues by which generative AI is devised for mental health advisement:

- Generic generative AI built atop a general-purpose AI foundational model.
- Customized generative AI, such as GPTs that are coded to provide mental health therapy.
- Generative AI built atop a domain-specific AI foundational model.

Some quick thoughts on those three avenues. I've already mentioned the qualms about generic generative AI dispensing mental health advice. The use of customized generative AI, such as GPTs that are coded to provide mental health therapy, is a slight step up, but since just about anyone can make those GPTs, it is a dicey affair, and you should proceed with the utmost caution; see my comments and assessment at the link here and the link here. Thus, if we really want generative AI that leans suitably into providing mental health advice, the domain-specific AI foundational model is the right way to go.

Subtypes Of Domain-Specific Models

Domain-specific models in general are subdivided into two major subtypes:

- Domain-only models, devised nearly exclusively from domain content.
- Hybrid models, combining domain content with general-purpose capability.

A domain-only type assumes that the model can be devised nearly exclusively based on the domain at hand. This is generally a rare occurrence but does make sense in certain circumstances. The hybrid subtype recognizes that sometimes (a lot of the time) the chosen domain itself is going to inextricably require aspects of the general-purpose models. You see, some domains lean heavily into general-purpose facets. They cannot realistically be carved out of the general-purpose capability. You would end up with an oddly limited and altogether non-functional domain-specific model.
Let's see how this works.

Suppose I want to make an AI foundational model that is a domain expert at generating mathematical proofs. That's all it needs to do. It is a domain-specific model. When someone uses this mathematical proof model, they enter a mathematical proposition in customary predicate logic notation. Programmers would think of this interactivity as on par with a programming language such as Prolog.

What doesn't this domain-only model need? Two traits aren't especially needed in this instance:

- Wide-ranging fluency in natural language.
- A broad worldview spanning everyday knowledge.

The mathematical proofs model doesn't need to be able to quote Shakespeare or be a jolly conversationalist. All it does is take in prompts consisting of mathematical propositions and then generate pristine mathematical proofs. Nothing else is needed, so we might as well keep things barebones and keenly focused on the domain task at hand.

Let's starkly contrast this mathematical proof model with a mental health therapy model. As I will point out shortly, the domain of mental health therapy requires strong fluency and a robust worldview capability. Why? Because the requisite domain-specific characteristics of the mental health care domain mandate that therapy be conducted in fluent natural language and with an unstinting semblance of worldviews.

To illustrate the fluency and worldview considerations, go along with me on a scenario entailing a junior-level AI developer deciding to build an AI foundational model for mental health therapy. First, the AI developer grabs up a generic general-purpose AI foundational model that is completely empty. No data training has been done with it. It essentially is a blank slate, an empty shell. This scenario is in keeping with my earlier point about the tendency to reuse general-purpose models when devising domain-specific models. The AI developer then gathers whatever data and text they can find on the topic of mental health therapy.
This includes books, guidebooks like the DSM-5 (see my analysis of using the DSM-5 for generative AI at the link here), research papers, psychology articles, and the like. In addition, transcripts of actual real-life client-therapist therapy sessions can be quite useful, though finding them and cleaning them tends to be problematic, plus they aren't readily available as yet on a large enough scale (see my discussion at the link here).

The AI developer proceeds to use the collected data to train the AI on mental health therapy. This immerses the AI in the domain of mental health therapy. The AI will mathematically and computationally find patterns in the words associated with mental health therapy. With a bit of fine-tuning, voila, a seemingly ready-to-go domain-specific model for mental health care hits the streets. Boom, drop the mic.

Things aren't likely to work out for this AI developer. I'll explain why via an example.

Imagine that you are a therapist (maybe you are!). A patient is interacting with you and says this: 'I was driving my car the other day and saw a lonely, distressed barking dog at the side of the road, and this made me depressed.'

I'm sure that you instantly pictured in your mind that this patient was in their car, they were at the wheel, they were driving along, and they perchance saw a dog outside the window of the vehicle. The dog was all by itself. It must have looked upset. It was barking, doing so possibly for attention or due to anguish. Your patient observed the dog and reacted by saying that they became depressed. Easy-peasy in terms of interpreting what the patient said.

If we give this same line as a prompt to the domain-specific AI that we crafted above, there is a crucial issue that we need to consider. Remember that the AI wasn't data trained across the board. We only focused on content about mental health. Would this domain-specific AI be able to interpret what it means for a person to be driving a car?
Well, we don't know if that was covered in the content solely about mental health. Would the AI make sense of the aspect that there was a dog? What's a dog? This is due to the AI not having been data trained in a wide way about the world at large.

We have an AI that is possibly a mile deep, but an inch wide. That won't do. You cannot reasonably undertake mental health therapy if the broadness of the world is unknown. Envision a mental health therapist who grew up on a remote isolated island and never discovered the rest of the world. They would have a difficult if not impossible time comprehending what their worldly patient from a bustling cosmopolitan city is telling them.

You might get lucky, and maybe the domain-specific AI that is data trained solely on mental health content will be wide enough to be usable on a worldview basis, but I would not hold my breath on that assumption. The odds are that you'll need to have the AI versed in an overarching worldview and have fullness of fluency.

So, you can take one of three actions:

- Wideness first, then depth: start with a broadly data-trained general-purpose model and further steep it in mental health content.
- Depth first, then wideness: devise a small language model steeped solely in mental health and use it to data train a suitable large language model.
- Both at once: data train on broad Internet content and mental health content simultaneously.

Each of those options has tradeoffs.

The first action is the most common, namely wideness followed by depth. You find a general-purpose AI foundational model that has been data trained across the board. Assuming you can get full access to it, you then further steep the AI in mental health. A frequent approach entails using a retrieval-augmented generation (RAG) method, see my explanation at the link here. Essentially, you make all the gathered mental health content available for the model to retrieve and draw upon when responding. As an aside, this used to be quite a limited approach because AI models had narrow restrictions on how much added data they could peruse and assimilate, but those limits are getting larger by the day.

The second listed action is less common but a growing option. It takes a somewhat different route. You devise a small language model (SLM) that is steeped solely in mental health.
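To make the first action more concrete, here is a bare-bones Python sketch of the RAG-style step: retrieve the most relevant mental-health passages for a query and prepend them to the prompt so a general-purpose model answers with domain grounding. The word-overlap scorer and every name here are illustrative assumptions on my part; a real pipeline would use vector embeddings and an actual LLM call.

```python
def relevance(query, passage):
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, corpus, k=1):
    """Return the k corpus passages with the highest toy score."""
    return sorted(corpus, key=lambda p: relevance(query, p), reverse=True)[:k]

def build_prompt(query, corpus):
    """Augment the user query with retrieved domain context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# A two-passage stand-in for the gathered mental health content.
corpus = [
    "Cognitive behavioral therapy targets unhelpful thought patterns.",
    "Exposure therapy gradually reduces avoidance behaviors.",
]
prompt = build_prompt("What does cognitive behavioral therapy target?", corpus)
```

The point is merely that the general-purpose model answers with domain passages in hand, rather than being retrained from scratch.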
Then, you find a large language model (LLM) that you believe is suitable and has the worldview that you are seeking. You then use the SLM to data train the LLM on the domain of mental health. For more details on the use of SLMs to train LLMs, a process known as knowledge distillation, see my discussion at the link here.

The third action is that you do the initial data training by dovetailing both breadth and depth at the same time. You not only scan widely across the Internet, but you also simultaneously feed in the mental health content. From the perspective of the AI, it is all merely data being fed in, and nothing came first or last.

Let's mull over some bumps in the road associated with all three of the possibilities of how to proceed. A harrowing difficulty that arises in any of those methods is that you need to consider the potential inadvertent adverse consequences that can occur.

In short, the conundrum is this. The depth of the mental health therapy aspects might get messed up by intermixing with the worldview aspects. It could be that while the intertwining is taking place, a generic precept about therapy overrides an in-depth element. Oops, we just demoted the depth-oriented content. All kinds of headaches can ensue.

For example, suppose that the in-depth content has a guidance rule about therapy that says you are never to tell a patient they are 'cured' when it comes to their mental health (it is a hotly debated topic, the assertion being that no one is mentally 'cured' per se in the same sense as overcoming, say, cancer, and it is misleading to suggest otherwise). Meanwhile, suppose the worldview elements had picked up content that says it is perfectly acceptable and even preferred to always tell a patient that they are 'cured'. These are two starkly conflicting pieces of therapeutic advice.

The AI might retain both of those advisory nuggets. Which one is going to come to the fore? You don't know. One could be utilized by random chance. Not good.
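The yes-cure/no-cure collision can be made concrete in a toy Python sketch. Tagging each retained nugget with its source tier, and letting the deeper domain tier win, is one simple and admittedly blunt mitigation; the rules, names, and priority scheme are all illustrative assumptions, not anything a real model exposes.

```python
# Two conflicting guidance nuggets that both survived training.
guidance = [
    {"rule": "Always tell a patient they are cured.", "source": "worldview"},
    {"rule": "Never tell a patient they are cured.", "source": "domain"},
]

def candidates(guidance):
    """Without precedence, both conflicting rules stay applicable."""
    return [g["rule"] for g in guidance]

def resolve(guidance, priority={"domain": 2, "worldview": 1}):
    """Blunt mitigation: let the deeper domain-specific rule win."""
    return max(guidance, key=lambda g: priority[g["source"]])["rule"]

winner = resolve(guidance)
```

Even with such a precedence scheme, deciding what counts as "domain" versus "worldview" content is itself fuzzy, which is part of why the blending problem is hard.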
Or it could be that while carrying on a therapy interaction, something suddenly triggers the AI to dive into the yes-cure over the no-cure, or it could happen the other way around. The crux is that since we are trying to combine breadth with depth, some astute ways of doing so are sorely needed. You can't just mindlessly jumble them together. Right now, dealing with this smorgasbord problem is both art and science. In case you were tempted to believe that all you need to do is tell the AI that the depth content about mental health always supersedes anything else the AI has, even that fingers-crossed, spick-and-span solution has hiccups.

Shift gears to another integral element of constructing generative AI and how it comes up when devising a domain-specific AI model such as one for mental health therapy.

Let's start with a bit of cherished AI history. A notable make-or-break reason that ChatGPT became so popular when first released was that OpenAI had made use of a technique known as reinforcement learning from human feedback (RLHF). Nearly all AI makers now employ RLHF as part of their development and refinement process before they release their generative AI.

The process is simple but immensely consequential, a genuine game changer. An AI maker will hire humans to play with the budding generative AI, doing so before releasing the AI to the public. Those humans are instructed to carefully review the AI and provide guidance about what the AI is doing wrong and what it is doing right.

Think of this as the classic case of reinforcement learning that we experience in our daily lives. You are cooking eggs. When you leave the eggs in the frying pan a bit longer than usual, let's assume they come out better cooked. So, the next time you cook eggs, you leave them in even longer. Ugh, they got burned. You realize that you need to back down and not cook them for so long. The humans hired to guide AI do approximately the same thing.
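The egg-cooking loop can be written as a tiny update rule: a scalar preference for each action is nudged up on good outcomes and down on bad ones. This is purely illustrative; real RLHF optimizes a neural policy against a learned reward model, not a dictionary of scores, and all names here are my own assumptions.

```python
# Running preference score for each candidate action.
preference = {"cook_longer": 0.0, "cook_shorter": 0.0}

def reinforce(action, outcome_good, step=1.0):
    """Reward good outcomes, penalize bad ones."""
    preference[action] += step if outcome_good else -step

reinforce("cook_longer", outcome_good=True)    # eggs came out better
reinforce("cook_longer", outcome_good=False)   # left too long: burned
reinforce("cook_longer", outcome_good=False)   # burned again
reinforce("cook_shorter", outcome_good=True)   # backing down helped

best = max(preference, key=preference.get)
```

After the feedback, the "policy" prefers the shorter cooking time, which is the same trial-and-error adjustment the hired raters induce in the AI.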
A common aspect involves giving guidance about the wording generated by the AI and especially about the tone of the AI. The raters tell the AI what is good to do, which the AI then mathematically and computationally internalizes as a reward. They also tell the AI what not to do, a kind of mathematical and computational penalty.

Imagine that a person hired for this task enters a prompt into a nascent general-purpose AI asking why the sky is blue. The AI generates a response telling the person that they are stupid for having asked such an idiotic question. Well, we don't want AI to tell users they are stupid. Not a serviceable way to garner loyalty to the AI. The hired person tells the AI that it should not call users stupid, nor should the AI label any user question as idiotic.

If we have a whole bunch of these hired people hammering away at the AI for a few days or weeks, gradually the AI is going to pattern-match on what is the proper way to say things and what is the improper way to say things. The AI is getting human feedback on a reinforcement learning basis.

The reason this was huge for the release of ChatGPT was that until that time, many of the released generative AI apps were outlandishly insulting users and often spewing foul curse words. AI makers had to quickly pull down their ill-disciplined AI. Reputations of some AI makers took a big hit. It was ugly. With ChatGPT, partially due to the RLHF, the AI was less prone to those kinds of unsavory actions. For more details on how RLHF works, see my discussion at the link here.

Assume that we have devised a domain-specific generative AI that is versed in mental health therapy. It has all kinds of in-depth expertise. We are feeling pretty good about what we've built. Should we summarily hand it over to the public? You would be foolhardy not to first do extensive RLHF with the AI. Allow me to explain why.
I have coined a modified catchphrase for RLHF, which I refer to as RLDHF: reinforcement learning from domain human feedback. It is akin to RLHF, but RLHF is usually about generic facets. That's still needed, but in the case of a domain-specific model, you must also undertake RLDHF.

An example will illustrate the RLDHF approach. The usual RLHF is done to tone down the AI and make it docile, rarely ever going toe-to-toe with users. AI makers want users to like the AI and have good feelings about it. Therapists are not customarily docile in that same way. Sure, at times a human therapist will be very accommodating, but other times they need to share some tough news or get the patient to see things in a manner that might be upsetting to them. RLHF is usually done to explicitly avoid any kind of confrontation or difficult moment.

In the case of RLDHF, a maker of a domain-specific AI hires experts in the domain to provide reinforcement learning feedback to the AI during the final training stages. Envision that we hire mental health therapists to provide feedback to our budding domain-specific AI. They log into the AI and give thumbs up and thumbs down to how the AI is wording its interactions. The same applies to the tone of the conversations.

The RLHF approach usually involves hiring just about any person who can converse with the AI and provide broad guidance. The RLDHF approach involves hiring experts, such as therapists, and having them provide domain-specific guidance, covering not just factual aspects, but also the nature and tone of how therapy is best provided.

Skipping the RLDHF is a recipe for disaster. The odds are that the therapy provided by the AI is going to fall in line with whatever RLHF has been undertaken. I've demonstrated what a wishy-washy AI-based therapist looks like, see the link here, and showcased how the proper use of RLDHF with mental health therapist guidance during training is indubitably essential.
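In toy form, pooling generic RLHF ratings with expert RLDHF ratings might look like the following sketch. The candidate replies, rater roles, and scoring scheme are all illustrative assumptions of mine, not how any production system actually stores feedback.

```python
from collections import defaultdict

# Accumulated reward per candidate response style.
reward = defaultdict(float)

def rate(response, delta):
    """Thumbs-up (positive) or thumbs-down (negative) from a rater."""
    reward[response] += delta

candidates = [
    "That is an idiotic question.",
    "Good news: you are fully cured!",
    "Let's explore what has been weighing on you lately.",
]

# RLHF: a generic rater flags the insulting tone.
rate(candidates[0], -1.0)
# RLDHF: a hired therapist flags the 'cured' claim and upvotes
# the open-ended therapeutic phrasing.
rate(candidates[1], -1.0)
rate(candidates[2], +1.0)

preferred = max(candidates, key=lambda r: reward[r])
```

Notice that the second reply is perfectly polite; only a domain expert would know to penalize it, which is exactly the gap RLDHF fills.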
There is an additional form of feedback that can work wonders for an AI mental health therapy model. It is known as RLAIF: reinforcement learning from AI feedback. I realize you might not have heard of this somewhat newer approach, but it is readily gaining steam. Usage of RLAIF in domain-specific AI models pays off handsomely if done proficiently.

First, please keep in mind everything I said about RLHF and RLDHF. We will make a tweak to those approaches. Instead of using humans to provide feedback to the AI, we will use AI to provide feedback to the AI. Say what?

Things are straightforward and not at all off-putting. We set up some other external generative AI that we want to use to provide feedback to the generative AI that we are data training on mental health therapy. The two AIs will interact directly with each other. These are purely AI-to-AI conversations that are taking place. For our mental health therapy AI, we will do two activities:

- Have the external AI adopt a patient persona and converse with our AI as though seeking therapy.
- Have the external AI adopt a therapist persona and provide expert-style feedback on how our AI conducts therapy.

You have the external AI pretend to be a patient or a therapist. This is natively accomplished via the persona functionality of LLMs; see my detailed coverage of personas at the link here and the link here. The AI-simulated patient can be relatively straightforward to devise. The persona of a therapist must be thoughtfully devised and utilized, or else you'll send the AI being trained into a scattered morass. Rule of thumb: Don't use RLAIF unless you know what you are doing. It can be devilishly complicated.

On a related consideration, the use of generative AI personas is a handy tool for human therapists too. For example, I've described how a therapist in training can use personas to test and refine their therapeutic acumen by interacting with AI that pretends to be a wide variety of patients, see the link here. Each such AI-driven patient can differ in terms of what they say, how they act, and so on. Likewise, therapists can have the AI pretend to be a therapist.
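A miniature of that AI-to-AI loop, with both personas reduced to canned Python functions: one external model plays a patient, another plays a therapist-critic, and the critic's verdict becomes the training reward with no human in the loop. In a real RLAIF pipeline each function would be a live LLM call carrying a persona prompt, so everything here is an illustrative stand-in.

```python
def patient_persona():
    """External AI role-playing a patient (canned reply here)."""
    return "I feel hopeless lately and nothing seems to help."

def model_under_training(message, after_feedback):
    """The mental health model being trained, pre and post feedback."""
    if after_feedback:
        return "That sounds heavy. When did you start feeling this way?"
    return "Cheer up! You're basically cured."

def critic_persona(reply):
    """External AI as therapist-critic: penalize dismissive 'cured' talk."""
    return -1.0 if "cured" in reply.lower() else 1.0

msg = patient_persona()
reward_before = critic_persona(model_under_training(msg, after_feedback=False))
reward_after = critic_persona(model_under_training(msg, after_feedback=True))
```

In practice the critic's verdicts would be logged as reward signals for fine-tuning, and the therapist persona would be driven by the same persona mechanism just described.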
Having the AI play the therapist allows a budding therapist to see what it is like to be a patient. Another useful angle is that the human therapist can learn about other styles of therapy. Setting up generative AI personas that simulate therapy styles and serve mental health role-playing has its ins and outs, pluses and minuses; see my discussion at the link here.

Congrats, we've covered some of the bedrock fundamentals of devising AI foundational models for AI mental health therapy. You are up to speed on a baseline basis. There's a lot more to cover. Here's a sneak peek at what I'll be covering in the second part of this series.

Devising an AI foundational model for mental health therapy is not a walk in the park. Be cautious and skeptical if you hear or see that some startup claims they can stand up a full-on AI foundational model for mental health within hours or days. Something doesn't jibe in such a brazen claim. They might be unaware of what mental health therapy consists of. They might be techies who are versed in AI but only faintly understand the complexities of mental health care. Perhaps they have an extremely narrow view of how therapy is undertaken. Maybe they figure that all they need to do is punch up generic general-purpose AI with a few spiffy system prompts and call it a day. Make sure to find out the specifics of what they have in mind. No hand waving allowed.

If you are thinking about putting together such a model, I urge you to do so and applaud you for your spirit and willingness to take on a fascinating and enthralling challenge. I sincerely request that you take on the formidable task with the proper perspective at hand. Be systematic. Be mindful.

The final word goes to Sigmund Freud, who made this pointed remark: 'Being entirely honest with oneself is a good exercise.' Yes, indeed, make sure to employ that sage wisdom as you embark on the journey and adventure of building an AI foundational model for mental health therapy.
Be unambiguously honest with yourself and make sure you have the right stuff and the right mindset to proceed. Then go for it.

"Keeps Asking for More and More" — Trump Launches Fresh Criticism of Zelensky - Jordan News
"Keeps Asking for More and More" — Trump Launches Fresh Criticism of Zelensky - Jordan News

Jordan News

time15-04-2025

  • Politics
  • Jordan News

"Keeps Asking for More and More" — Trump Launches Fresh Criticism of Zelensky - Jordan News

U.S. President Donald Trump launched sharp new criticism against Ukrainian President Volodymyr Zelensky over his handling of the Russia-Ukraine war. These remarks add to a series of pointed statements Trump has made regarding both Zelensky and former President Joe Biden.

While speaking to reporters in the Oval Office, Trump strongly criticized the way both Zelensky and Biden managed the conflict, stating: "This war should never have started. Biden could have stopped it. Zelensky could have stopped it. And Putin shouldn't have started it. Everyone bears responsibility."

Trump also took a swipe at Zelensky personally, questioning his competence: 'It was a tough session with that guy — he kept asking for more and more.'

These latest comments come just over a month after a televised clash between Trump and Zelensky at the White House, during which Trump accused Zelensky of taking advantage of the crisis. Earlier on Monday, Trump also launched a scathing attack on social media against both Biden and Zelensky, saying they had done a "terrible job" by allowing this 'farce' to unfold.

His remarks also followed a deadly Russian missile strike on Sunday in Sumy, Ukraine, which killed at least 35 people and injured over 100. Commenting on the attack from aboard Air Force One, Trump said: 'I was told it was a mistake. I think it's terrible. The whole war is terrible.'

Other members of Trump's team also weighed in. Keith Kellogg, Trump's special envoy to Ukraine, said on X (formerly Twitter): 'This goes beyond any sense of decency. As a former military commander, I understand targeting — but this was wrong. That's why President Trump is working so hard to end this war.'

On Sunday, Zelensky accused Vice President J.D. Vance of "justifying" Russia's invasion during a CBS 60 Minutes interview. He also invited Trump to visit Ukraine to witness the devastation firsthand.
In response, Vance said Zelensky's comments reflected the deteriorating relationship between Kyiv and the White House. His press secretary, Taylor Van Kirk, said: 'President Zelensky should focus on ending this conflict peacefully instead of misrepresenting Vice President Vance's views.' Trump also criticized the CBS interview with Zelensky — without naming him directly — in a post on Truth Social, calling the network 'dishonest political operatives' and suggesting their broadcast license should be revoked. Trump is suing CBS for $20 billion over a past interview with his 2024 rival, then–Vice President Kamala Harris.
