
Here's what Maine health providers think of AI
MaineHealth recently completed — and plans to expand — pilot programs with AI platform Abridge to provide ambient documentation during patient visits.
Providers get consent from all patients prior to using the ambient documentation tool, said Dr. Daniel Nigrin, MaineHealth chief information officer.
Nigrin said AI helps providers by reducing rote tasks and distilling large amounts of information into summaries so that they can focus on patients instead of documentation or data entry. MaineHealth has a policy that prohibits use of publicly available AI tools when dealing with protected patient health information and has an AI advisory committee that reviews all new uses of AI technology, he said.
Moran added that MaineHealth is exploring how AI and machine learning can help reduce insurance denials.
'AI doesn't make the final decision, but it pulls together the information we need as clinicians to make better-informed decisions on behalf of our patients,' Moran said.
Nigrin said it's important to 'move cautiously' with AI to ensure technological advancements still serve patients, but said he expects the technology to serve a 'critical role' in health care in the years to come. It won't replace doctors, said Nigrin, but providers who use it 'will far outperform those who do not.'
The diagnostic capabilities of AI are improving rapidly. In a small study, a chatbot outperformed doctors at reaching diagnoses.
'The study showed more than just the chatbot's superior performance,' The New York Times reported. Part of the problem was that doctors didn't know how to use the chatbot to its fullest extent, according to the Times. That could change as more health care systems incorporate the technology and providers become more comfortable using it.
The providers The Monitor spoke with said they are currently using AI to streamline administrative tasks, rather than for diagnostics. Jayne Van Bramer, president & CEO of mental health provider Sweetser, said her organization is using technology from Eleos that takes notes throughout a clinical session. She called the tool a 'game changer' in allowing providers to focus on client engagement and support.
'Seeing this application in action — seeing AI used like a medical scribe that follows a provider around to help with documentation demands — has reassured staff and affiliates that this is there to enhance their work,' Van Bramer said.
The application doesn't record the session and there are no recorded clips of client engagements, according to Justin Chenette, senior director of public relations and advancement. Chenette said the tool is HIPAA compliant and providers tell clients about the device and how it will be used, turning it off if they object. AI-assisted notes are not finalized until a clinician reviews, edits and approves them, said Chenette. 'AI is just helping clinicians be more present.'
Northern Light Health is using Clinical AI Agent from Oracle Health and DAX Copilot for patient visit documentation and is working on a policy for systemwide guidelines and procedures 'for the ethical and responsible use of AI technologies,' said Hugh Jones, senior vice president and chief strategy and business development officer. Consent for these programs is incorporated into general patient consent, and then providers ask patients before they start using the AI scribe.
While AI can be beneficial to health care, Jones said, it will take time for the industry to fully adapt.
'One of the key barriers to adoption is the sensitivity around privacy and information security for highly sensitive (and regulated) personal health information,' he said. 'These are surmountable in the long-term.'
In response to a survey on the use of AI in health care, some medical and health care providers told The Monitor they had concerns about the growing use of AI across different settings. Many are in private practice or work at smaller organizations, in settings ranging from hospitals and in-home nursing to behavioral health and oncology.
Very few said they were currently using AI, although a couple said the technology is useful for quickly gathering studies and other research. Multiple respondents said they worried about people using AI to make medical decisions without consulting health care providers.
Sarah Kelley, an oncology social worker in Portland, said she uses AI technology on her personal computer to quickly find community groups and resources for patients across large geographic distances, but worries about AI replacing aspects of the patient-provider relationship.
'I would be very concerned if AI tried to use mental health screeners to replace sound clinical face-to-face judgement (that) I glean through relationship building and patient interaction,' said Kelley.
Elaina George, a mental health counselor, said she isn't currently using AI and she'd prefer it wasn't in health care at all.
'(AI) doesn't belong in health care,' she said. 'It takes the human aspect out of something very personal and unique.'