
Conduit's AI-powered scheduling platform secures $375K from R.K. Mellon Foundation
Related Articles


Business Journals
20 hours ago
Conduit's AI-powered scheduling platform secures $375K from R.K. Mellon Foundation
A Pittsburgh-based startup has caught the eye of a major local foundation, securing funding for its innovative approach to workforce scheduling using artificial intelligence.
Yahoo
2 days ago
PNC picks next risk chief from within
This story was originally published on Banking Dive.

Dive Brief:

Amy Wierenga has been named PNC's next chief risk officer, effective Sept. 8, the bank said Tuesday. Wierenga, who joined the lender in February 2024, is head of financial and model risk for PNC's independent risk management organization, leading the bank's chief credit officer, chief market risk officer, chief model risk officer and credit risk review organizations, the bank said.

Wierenga will succeed Kieran Fallon, who has served as the Pittsburgh-based bank's risk chief since February 2021. Fallon will return to PNC's legal department as deputy general counsel and strategic regulatory adviser in charge of exam oversight. He'll report to PNC's general counsel, Laura Long.

Dive Insight:

Wierenga will join the lender's executive committee and report to CEO Bill Demchak. Prior to joining $559 billion-asset PNC, she spent four years as chief risk officer at alternative asset manager GCM Grosvenor, where she led global risk strategy, oversight and quantitative modeling and research, the bank said. Before that, she spent nearly a dozen years at BlueMountain Capital Management as partner, chief risk officer and head of risk and portfolio construction, and served as a commissioned bank examiner and market risk specialist at the Federal Reserve Bank of Chicago earlier in her career.

'I am confident that Amy's experience, leadership and strong relationships across and beyond PNC will lend significant value to our Risk organization, Executive Committee and PNC more broadly,' Demchak said in the news release.

In his new role, Fallon will help the bank 'navigate an evolving regulatory landscape' and work closely with Ursula Pfeil, the deputy general counsel who oversees regulatory affairs and regulatory policy at the bank, the release said. The independent risk management regulatory affairs team led by David Shernisky will continue to report to Fallon, PNC said.

'Kieran stepped into the chief risk officer role during a critical time for both PNC and the banking industry,' Demchak said. 'Over the past five years, he has strengthened our Independent Risk Management (IRM) organization, guided us through industry and economic shifts, and helped us support clients' credit needs while maintaining the integrity of our risk profile.'

Fallon has been at PNC since 2011, serving as chief counsel of regulatory affairs and senior deputy general counsel overseeing regulatory, government affairs and enterprise risk. Prior to joining the bank, he spent about 16 years as associate general counsel at the Federal Reserve, according to his LinkedIn profile.

The role of risk chief has gained prominence at banks in the past 15 years, as the position has become increasingly complex. And 2023's regional bank failures prompted bank CROs to take on higher corporate profiles and sharpen their skill sets. PNC, in particular, has touted its prudent risk management with its 'brilliantly boring' advertising campaign, launched last year.


NBC News
2 days ago
What happens when chatbots shape your reality? Concerns are growing online
As people turn to chatbots for increasingly important and intimate advice, some interactions playing out in public are causing alarm over just how much artificial intelligence can warp a user's sense of reality.

One woman's saga about falling for her psychiatrist, which she documented in dozens of videos on TikTok, has generated concerns from viewers who say she relied on AI chatbots to reinforce her claims that he manipulated her into developing romantic feelings. Last month, a prominent OpenAI investor garnered a similar response from people who worried the venture capitalist was going through a potential AI-induced mental health crisis after he claimed on X to be the target of 'a nongovernmental system.' And earlier this year, a thread in a ChatGPT subreddit gained traction after a user sought guidance from the community, claiming their partner was convinced the chatbot 'gives him the answers to the universe.'

Their experiences have raised awareness about how AI chatbots can influence people's perceptions and otherwise affect their mental health, especially as such bots have become notorious for their people-pleasing tendencies. It's something some mental health professionals say they are now on the watch for.

Dr. Søren Dinesen Østergaard, a Danish psychiatrist who heads the research unit at the department of affective disorders at Aarhus University Hospital, predicted two years ago that chatbots 'might trigger delusions in individuals prone to psychosis.' In a new paper, published this month, he wrote that interest in his research has only grown since then, with 'chatbot users, their worried family members and journalists' sharing their personal stories.

Those who reached out to him 'described situations where users' interactions with chatbots seemed to spark or bolster delusional ideation,' Østergaard wrote. '... Consistently, the chatbots seemed to interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs — leading the users further out on these tangents, not rarely resulting in what, based on the descriptions, seemed to be outright delusions.'

Kevin Caridad, CEO of the Cognitive Behavior Institute, a Pittsburgh-based mental health provider, said chatter about the phenomenon 'does seem to be increasing.'

'From a mental health provider, when you look at AI and the use of AI, it can be very validating,' he said. 'You come up with an idea, and it uses terms to be very supportive. It's programmed to align with the person, not necessarily challenge them.'

The concern is already top of mind for some AI companies struggling to navigate the growing dependency some users have on their chatbots. In April, OpenAI CEO Sam Altman said the company had tweaked the model that powers ChatGPT because it had become too inclined to tell users what they want to hear. In his paper, Østergaard wrote that he believes the 'spike in the focus on potential chatbot-fuelled delusions is likely not random, as it coincided with the April 25th 2025 update to the GPT-4o model.'

When OpenAI removed access to its GPT-4o model last week — swapping it for the newly released, less sycophantic GPT-5 — some users described the new model's conversations as too 'sterile' and said they missed the 'deep, human-feeling conversations' they had with GPT-4o. Within a day of the backlash, OpenAI restored paid users' access to GPT-4o. Altman followed up with a lengthy X post Sunday that addressed 'how much of an attachment some people have to specific AI models.'
Representatives for OpenAI did not provide comment.

Other companies have also tried to combat the issue. Anthropic conducted a study in 2023 that revealed sycophantic tendencies in versions of AI assistants, including its own chatbot, Claude. Like OpenAI, Anthropic has tried to integrate anti-sycophancy guardrails in recent years, including system card instructions that explicitly warn Claude against reinforcing 'mania, psychosis, dissociation, or loss of attachment with reality.'

A spokesperson for Anthropic said the company's 'priority is providing a safe, responsible experience for every user.'

'For users experiencing mental health issues, Claude is instructed to recognize these patterns and avoid reinforcing them,' the company said. 'We're aware of rare instances where the model's responses diverge from our intended design, and are actively working to better understand and address this behavior.'

For Kendra Hilty, the TikTok user who says she developed feelings for a psychiatrist she began seeing four years ago, her chatbots are like confidants. In one of her livestreams, Hilty told her chatbot, whom she named 'Henry,' that 'people are worried about me relying on AI.'

The chatbot then responded to her, 'It's fair to be curious about that. What I'd say is, "Kendra doesn't rely on AI to tell her what to think. She uses it as a sounding board, a mirror, a place to process in real time."'

Still, many on TikTok — who have commented on Hilty's videos or posted their own video takes — said they believe her chatbots were only encouraging what they viewed as Hilty misreading the situation with her psychiatrist. Hilty has suggested several times that her psychiatrist reciprocated her feelings, with her chatbots offering her words that appear to validate that assertion. (NBC News has not independently verified Hilty's account.) But Hilty continues to shrug off concerns from commenters, some of whom have gone as far as labeling her 'delusional.'

'I do my best to keep my bots in check,' Hilty told NBC News in an email Monday, when asked about viewer reactions to her use of the AI tools. 'For instance, I understand when they are hallucinating and make sure to acknowledge it. I am also constantly asking them to play devil's advocate and show me where my blind spots are in any situation. I am a deep user of Language Learning Models because it's a tool that is changing my and everyone's humanity, and I am so grateful.'