
Latest news with #TheMarkup

California Faces Probe After Sharing People's Health Data With LinkedIn

Newsweek

30-04-2025

  • Health
  • Newsweek

California Faces Probe After Sharing People's Health Data With LinkedIn

California's handling of sensitive health information is under scrutiny following a report that data entered by residents on the state's health insurance marketplace was shared with LinkedIn. Covered California, which runs the state's marketplace, shared sensitive personal data with LinkedIn, a subsidiary of Microsoft, through embedded tracking tools on its website, nonprofit news organization The Markup reported on Monday.

Covered California confirmed the data transmission in a news release later that day, saying "some sensitive data was inadvertently collected by the tags, including first names, the last four digits of Social Security numbers, and other sensitive health information like pregnancy status." It added that all advertising-related tags on the website had been turned off as a "precautionary measure," and that it would review the extent of the data shared.

Representative Kevin Kiley, a Republican from California, has called for an investigation. "This is incredibly disturbing," he wrote on X, formerly Twitter. Newsweek contacted Kiley via social media and email, as well as the press offices of Health Secretary Robert F. Kennedy Jr. and California Governor Gavin Newsom via email, outside of regular working hours on Wednesday.

Why It Matters

Concerns over personal data have grown in recent months after it emerged that the government's Department of Government Efficiency worked to gain access to the Social Security Administration's data systems, which hold sensitive personal data about approximately 70 million Americans. California's sharing of sensitive data with LinkedIn will likely raise similar concerns about threats to Americans' privacy.

File photo: the LinkedIn homepage.
Chris Radburn/Press Association via AP

What To Know

Trackers on the site, which was created under the Affordable Care Act, captured users' answers to questions about blindness, pregnancy, high prescription use, gender identity and experiences with domestic abuse, The Markup reported. The data was then transmitted to LinkedIn via the Insight Tag, which uses code to track how visitors interact with websites.

Covered California said in a statement that it "leverages LinkedIn's advertising platform tools to understand consumer behavior"; however, LinkedIn notes on its website that the Insight Tag "should not be installed on web pages that collect or contain Sensitive Data." The LinkedIn campaign trackers began in February 2024 and were removed "due to a marketing agency transition" in early April, Covered California told CalMatters. Covered California had more than 60 trackers on its site, compared with an average of three on other government sites, CalMatters reported.

What People Are Saying

Covered California said in a news release on Monday: "Covered California is reviewing its entire website and information security and privacy protocols to ensure that no analytics tools are impermissibly collecting or sharing sensitive consumer information. The LinkedIn Insight tags are no longer active and, as a precautionary measure, all active advertising-related tags across the website have been turned off.

"Covered California is committed to safeguarding the confidential information and privacy of its consumers. The organization will share additional findings from this investigation as they become available."

California Representative Kevin Kiley wrote on X: "California's Obamacare website tracked users' personal health information—such as pregnancy and prescription drug use—and sent it to LinkedIn for a 'marketing campaign.' We are asking Secretary Kennedy to investigate for HIPAA violations."
What Happens Next

The Department of Health and Human Services has yet to respond publicly to Kiley's call for an investigation.

How one state sent residents' personal health data to LinkedIn

Yahoo

29-04-2025

  • Health
  • Yahoo

How one state sent residents' personal health data to LinkedIn

The website that lets Californians shop for health insurance under the Affordable Care Act has been sending sensitive data to LinkedIn, forensic testing by The Markup has revealed. As visitors filled out forms on the website, trackers on the same pages told LinkedIn their answers to questions about whether they were blind, pregnant, or used a high number of prescription medications. The trackers also monitored whether the visitors said they were transgender or possible victims of domestic abuse.

Covered California, the organization that operates the website, removed the trackers as The Markup and CalMatters reported this article. The organization said they were removed "due to a marketing agency transition" in early April. In a statement, Kelly Donohue, a spokesperson for the agency, confirmed that data was sent to LinkedIn as part of an advertising campaign. Since being informed of the tracking, "all active advertising-related tags across our website have been turned off out of an abundance of caution," she added. "Covered California has initiated a review of our websites and information security and privacy protocols to ensure that no analytics tools are impermissibly sharing sensitive consumer information," Donohue said, adding that the organization would "share additional findings as they become available, taking any necessary steps to safeguard the security and privacy of consumer data."

Visitors who filled out health information on the site may have had their data tracked for more than a year, according to Donohue, who said the LinkedIn campaign began in February 2024. The Markup observed the trackers directly in February and March of this year. It confirmed most ad trackers, including the Meta "pixel" tracker, as well as all third-party cookies, had been removed from the site as of April 21. Since 2014, more than 50 million Americans have signed up for health insurance through state exchanges like Covered California.
They were set up under the Affordable Care Act, signed into law by President Barack Obama 15 years ago. States can either operate their exchange websites in partnership with the federal government or independently, as California does. Covered California operates as an independent entity within the state government; its board is appointed by the governor and Legislature. In March, Covered California announced that, after four years of increasing enrollment, a record nearly 2 million people were covered by health insurance through the program. In all, the organization said, about one in six Californians were at one point enrolled through Covered California. Between 2014 and 2023, the uninsured rate fell from 17.2% to 6.4%, according to the organization, the largest drop of any state during that time period. This coincided with a series of eligibility expansions to Medi-Cal, the state's health insurance program for lower-income households.

Experts expressed alarm at the idea that those millions of people could have had sensitive health data sent to a private company without their knowledge or consent. Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, said it was "concerning and invasive" for a health insurance website to be sending data that was "wholly irrelevant" to the uses of a for-profit company like LinkedIn. "It's unfortunate," she said, "because people don't expect that their health information will be collected and used in this way."

In recent months, The Markup and CalMatters used Blacklight, an automated tool developed by The Markup for auditing website trackers, to scan hundreds of California state and county government websites that offer services for undocumented immigrants. The Markup found that Covered California had more than 60 trackers on its site. Across more than 200 of the government sites scanned, the average number of trackers was three.
Covered California had dozens more than any other website we examined. Trackers from well-known social media firms like Meta collected information on visitor page views, while lesser-known analytics and media campaign companies, like email marketing company LiveIntent, also followed users across the site. But by far the most sensitive information was transmitted to LinkedIn. While some of the data sent to LinkedIn was relatively innocuous, such as what pages were visited, Covered California also sent the company detailed information when visitors selected doctors to see if they were covered by a plan, including the doctors' specializations. The site also told LinkedIn if someone searched for a specific hospital. In addition to demographic information including gender, the site also shared details with LinkedIn when visitors selected their ethnicity and marital status, and when they reported how often they saw doctors for surgery or outpatient treatment.

LinkedIn, like other large social media firms, offers a way for websites to easily transmit data on their visitors through a tracking tool that the sites can place on their pages. In LinkedIn's case, this tool is called the Insight Tag. By using the tag, businesses and other organizations can later target advertisements on LinkedIn to consumers who have already shown interest in their products or services. For an e-commerce site, a tracker on a page might note when someone adds a product to their cart, and the business can then send ads for that product to the same person on their social media feeds. A health care marketplace like Covered California might use the trackers to reach a group of people who might be interested in a reminder of a deadline for open health insurance enrollment, for example.
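The audit described above can be made concrete with a short sketch. The Python below is a hypothetical, greatly simplified version of what a scanner like The Markup's Blacklight does: it compares the hostname of each outgoing request a page makes against a catalog of known ad-tech domains. The catalog entries, function name, and example URLs are illustrative assumptions, not data from the investigation.

```python
from urllib.parse import urlparse

# Illustrative catalog of tracker endpoints. A real audit tool maintains
# a much larger, regularly updated list of known ad-tech hostnames.
KNOWN_TRACKERS = {
    "px.ads.linkedin.com",      # endpoint associated with LinkedIn's Insight Tag
    "www.facebook.com",         # endpoint associated with the Meta pixel
    "www.google-analytics.com",
}

def flag_tracker_requests(request_urls):
    """Return the outgoing request URLs whose hostname matches a known
    tracker, either exactly or as a subdomain of one."""
    flagged = []
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if any(host == t or host.endswith("." + t) for t in KNOWN_TRACKERS):
            flagged.append(url)
    return flagged

# Hypothetical requests observed while a visitor fills out a form.
observed = [
    "https://www.coveredca.com/apply/",              # first-party page
    "https://px.ads.linkedin.com/collect?pid=1234",  # would reach LinkedIn
    "https://cdn.example.com/app.js",                # unrelated third party
]
print(flag_tracker_requests(observed))
```

A real scanner also loads each page in an instrumented browser to capture the requests generated by scripts like the Insight Tag at runtime, rather than taking a URL list as input.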
In its statement, Covered California noted the usefulness of these tools, saying the organization "leverages LinkedIn's advertising platform tools to understand consumer behavior and deliver tailored messages to help them make informed decisions about their health care options." Trackers can also be valuable to the social media companies that offer them. In addition to driving ad sales, they provide an opportunity to gather information on visitors to websites other than their own. On its informational page about the Insight Tag, LinkedIn places the burden on websites that employ the tag not to use it in risky situations. The tag "should not be installed on web pages that collect or contain Sensitive Data," the page advises, including "pages offering specific health-related or financial services or products to consumers." LinkedIn spokesperson Brionna Ruff said in an emailed statement, "Our Ads Agreement and documentation expressly prohibit customers from installing the Insight Tag on web pages that collect or contain sensitive data, including pages offering health-related services. We don't allow advertisers to target ads based on sensitive data or categories." Collection of sensitive information by social media trackers has in previous instances led to removal of the trackers, lawsuits, and scrutiny by state and federal lawmakers. For example, after The Markup in 2022 revealed the Department of Education sent personal information to Facebook when students applied for college financial aid online, the department turned off the sharing, faced questions from two members of Congress, and was sued by two advocacy groups who sought more information about the sharing. 
Other stories in the same series about trackers, known as the Pixel Hunt, also led to changes and blowback, including a crackdown by the Federal Trade Commission on telehealth companies transmitting personal information to companies including Meta and Google without user consent and proposed class action lawsuits over information shared through trackers with drug stores, health providers, and tax prep companies. LinkedIn is already facing multiple proposed class-action lawsuits related to the collection of medical information. In October, three new lawsuits in California courts alleged that LinkedIn violated users' privacy by collecting information on medical appointment sites, including for a fertility clinic. Social media companies' tracking practices have underpinned the tremendous growth of the tech industry, but few web users are aware of how far the tracking goes. "This absolutely contradicts the expectation of the average consumer," Geoghegan said. In California, a law called the California Confidentiality of Medical Information Act governs the privacy of medical information in the state. Under the act, consumers must give permission to some organizations before their medical information is disclosed to third parties. Companies have faced litigation under the law for using web tracking technologies, although those suits have not always been successful. Geoghegan said current protections like these don't go far enough in helping consumers protect their sensitive data. "This is an exact example of why we need better protections," she said of LinkedIn receiving the data. "This is sensitive health information that consumers expect to be protected and a lack of regulations is failing us." This story was produced by The Markup and reviewed and distributed by Stacker.

Students are using AI to write scholarship essays. Does it work?

Boston Globe

09-04-2025

  • Boston Globe

Students are using AI to write scholarship essays. Does it work?

'They felt a little bit sterile,' said Geiger, the cofounder and CEO of a company called Scholarships360, an online platform used by more than 300,000 students last year to find and apply for scholarships. Curious, Scholarships360 staffers deployed AI-detection software called GPTZero. It checked almost 1,000 essays submitted for one scholarship and determined that about 42 percent of them had likely been composed with the help of generative AI.

With college acceptances beginning to roll in for high school seniors, and juniors starting to brainstorm the essays they'll submit with their applications in the fall, Geiger is concerned. When students use AI to help write their essays, he said, they are wasting a valuable opportunity. 'The essay is one of the few opportunities in the admissions process for a student to communicate directly with a scholarship committee or with an admissions reader,' Geiger said. 'That provides a really powerful opportunity to share who you are as a person, and I don't think that an AI tool is able to do that.'

Madelyn Ronk, a 20-year-old student at Penn State Beaver, said she never considered using ChatGPT to write the personal statement required for her transfer application from community college last year. A self-described Goody Two-shoes, she didn't want to get in trouble. But there was another reason: She didn't want to turn in the same essay as anyone else. 'I want to be unique. I feel like when people use AI constantly, it just gives the same answer to every single person,' said Ronk, who wrote her essay about volunteering for charitable organizations in her hometown. 'I would like my answer to be me. So I don't use AI.' Geiger said students' fears about submitting a generic essay are valid — they're less likely to get scholarships that way. But that doesn't mean they have to avoid generative AI altogether.
Some companies offer services to help students use AI to improve their work, rather than to cheat — such as getting help writing an outline, using proper grammar or making points effectively. Generative AI can proofread an essay, and can even tell a student whether their teacher is likely to flag it as AI-assisted. Packback, for example, is an online platform whose AI software can chat with students and give feedback as they are writing. The bot might flag grammatical errors, the use of passive voice, or digressions from the student's point. Craig Booth, the company's chief technology officer, said the software is designed to introduce students to ethical uses of AI.

Not all scholarship providers or colleges have policies on exactly how AI can or cannot be used in prospective students' essays. And tools like GPTZero aren't reliable 100 percent of the time. Because detection software isn't always accurate, Geiger said, Scholarships360 doesn't base scholarship decisions on whether essays were flagged as being generated by AI. But, he said, many of the students whose essays were flagged weren't awarded a given scholarship because 'if your writing is being mistaken for AI,' whether you used the technology or not, for a scholarship or admissions essay, 'it's probably going to be missing the mark.' Jonah O'Hara, who serves as chair of the admissions practices committee at the National Association for College Admission Counseling, said that using AI isn't 'inherently evil,' but colleges and scholarship providers need to be transparent about their expectations, and students need to disclose when they're using it and for what.
O'Hara, who is director of college counseling at Rocky Hill Country Day School in Rhode Island, said that he has always discouraged students from using a thesaurus in writing college application essays, or using any words that aren't normal for them. 'If you don't use "hegemony" and "parsimonious" in text messages with your friends, then why would you use it in an essay to college? That's not you,' O'Hara said. 'If you love the way polysyllabic words roll off your tongue, then, of course, if it's your voice, then use it.' Generative AI is, functionally, the latest evolution of the thesaurus, and O'Hara wonders whether it has 'put a shelf life on the college essay.' There was a time when some professors offered self-scheduled, unproctored take-home exams, O'Hara recalled. Students had to sign an honor statement promising that everything they submitted was their own work. But the onus was on the professors to write cheat-proof exams. O'Hara said if the college essay is going to survive, he thinks this is the direction administrators will have to go. 'If we get to a point where colleges cannot confidently determine [its] authenticity,' he said, 'then they may abandon it entirely.' This story was produced by a nonprofit, independent news organization focused on inequality and innovation in education.

AI Chatbots Can Cushion the High School Counselor Shortage — But Are They Bad for Students?

Yahoo

06-03-2025

  • Yahoo

AI Chatbots Can Cushion the High School Counselor Shortage — But Are They Bad for Students?

This article was originally published in The Markup. During the pandemic, longtime Bay Area college and career counselor Jon Siapno started developing a chatbot that could answer high schoolers' questions about their future education options. He was using IBM's question-answering precursor to ChatGPT, Watson, but when generative artificial intelligence became accessible, he knew it was a game-changer. 'I thought it would take us maybe two years to build out the questions and answers,' Siapno said. 'Back then you had to prewrite everything.' An AI-powered chatbot trained on information about college and careers and designed to mimic human speech meant students at the Making Waves Academy charter school in the East Bay city of Richmond could soon text an AI Copilot to chat about their futures. The idea was that students could get basic questions out of the way — at any hour — before meeting with counselors like Siapno for more targeted conversations. Almost one-quarter of U.S. schools don't have a single counselor, according to the latest federal data, from the 2021-22 school year. California high schools fare better, but the state's student-to-counselor ratio when ChatGPT debuted the following year was still 464-to-1, a far cry from the American School Counselor Association's recommended ratio of 250-to-1. Siapno wasn't the only one to see generative AI's potential to scale advising. A flood of bots designed to help people navigate their college and career options has surfaced over the last two years, often with human-sounding names like Ava, Kelly, Oli, Ethan and Coco.
It's unclear how many California high schools tell students to use any of them, but the power of generative AI and the scale at which young people are already turning to chatbots in their personal lives is giving some people pause. Julia Freeland Fisher is education director at the Clayton Christensen Institute, a nonprofit research organization that studies innovation. She recently sounded the alarm about the consequences of letting students develop relationships with AI-powered college and career counselors instead of human ones. 'It's so tempting to see these bots as cursory: "They're not threatening real relationships. These are just one-off chats." But we know from sociology that these one-off chats are actually big opportunities,' Freeland Fisher said. Sociologists talk about 'social capital' as the connections between people that facilitate their success. Among those connections, we have 'strong ties' in close friends, family and coworkers who give us routine support, and 'weak ties' in acquaintances we see less regularly. For a long time, people thought weak ties were less important, but in 1973 Stanford sociologist Mark Granovetter wrote about 'the strength of weak ties,' and a flood of studies since then has confirmed how important those more distant acquaintances can be for everything from job searches to emotional support. As California considers regulating AI companions for young people, policymakers, tech companies and schools must consider how the burgeoning market for AI-driven college and career guidance could inadvertently become the source of a new problem. 'We're creating this army of self-help bots to help students make their way through school and toward jobs,' Freeland Fisher said, 'but those very same bots may be eroding the kinds of network-building opportunities that help students break into those jobs eventually.'
The Making Waves Academy ensures all its graduates meet minimum admissions requirements to California's four-year public colleges. Nine out of 10 of them do pursue higher education, and while there, staff at the Making Waves Education Foundation offer 1:1 coaching, scholarships, budget planning and career planning to help them graduate on time with no debt and a job offer. Patrick O'Donnell, CEO of Making Waves, said his team has been thinking about how to scale the kinds of supports they offer for years now, given the scarcity of counselors in schools. 'Even if counselors wanted to make sure they were supporting students to explore their college and career options, it's almost impossible to do and provide really personalized guidance,' O'Donnell said. Early superusers of the Making Waves AI CoPilot were 9th and 10th graders hungry for information but boxed out of meetings with school counselors focused on helping seniors plan their next steps.

CareerVillage is another California nonprofit focused on scaling good college and career advice. It has been aggregating crowd-sourced questions and expert answers since 2011 to help people navigate the path to a good career. When ChatGPT came out, co-founder and executive director Jared Chung saw the potential immediately. By the summer of 2023, his team had a full version of their AI Career Coach to pilot, thanks to help from 20 other nonprofits and educational institutions. Now 'Coach' is available to individuals for free online, and high schools and colleges around the country are starting to embed it into their own advising. At the University of Florida College of Nursing, a more specialized version of Coach, 'Coach for Nurses,' gives users round-the-clock career exploration support. Shakira Henderson, dean of the college, said Coach is 'a valuable supplement' to the college's other career advising. Coach for Nurses personalizes its conversation and advice based on a user's career stage, interests and goals.
It is loaded with geographically specific, current labor market information so people can ask questions about earnings in a specific job, in a specific county, for example. Coach can also talk people through simulated nursing scenarios, and it's loaded with chat-based activities and quizzes that can help them explore different career paths. Henderson is clear on the tool's limitations, though: 'AI cannot fully replace the nuanced, empathetic guidance provided by human mentors and career advisors,' she said. People can assess an aspiring nurse's soft skills, help them think about the type of hospital they'd like most or the work environment in which they'd thrive. 'A human advisor working with that student will be able to identify and connect more than an AI tool,' she said. Of course, that requires students to have human advisors available to them. Marcus Strother, executive director of MENTOR California, a nonprofit supporting mentoring programs across the state, said Coach is worlds better than nothing. 'Most of our young people, particularly young people of color in low-income areas,' Strother said, 'they don't get the opportunities to meet those folks who are going to be able to give them the connection anyway.' By contrast, Coach, he said, is 'like having a mentor in your pocket.' Last month, California state Sen. Steve Padilla, a San Diego Democrat, introduced legislation to protect children from chatbots. Senate Bill 243 would, among other things, limit companies from designing chatbots that encourage users to engage more often, respond more quickly or chat longer. These design elements use psychological tricks to get users to spend more time on the platform, which research indicates can create an addiction that keeps people from engaging in other healthy activities or lead them to form unhealthy emotional attachments to the bots. The addictive nature of certain apps has long been a critique of social media, especially for young people. 
In Freeland Fisher's research for the Clayton Christensen Institute, she included a comment from Vinay Bhaskara, the co-founder of CollegeVine, which released a free AI counselor for high schoolers called Ivy in 2023. 'I've seen chat logs where students say, 'Ivy, thank you so much. You're like my best friend,' which is both heartwarming, but also kind of scary. It's a little bit of both,' the report quotes him as saying. Reached by phone, Bhaskara said his company's tool is designed to be friendly and conversational so students feel comfortable using it. Millions of students have used the chatbot for free on CollegeVine's website and more than 150 colleges in California and around the country have offered the technology to their own students. After seeing how many millions of emails, text messages and online chat sessions have happened outside of working hours, Bhaskara now argues the insight and support students have gotten from the chatbot outweigh the risks. In announcing Padilla's bill, his office referenced a number of cases in which chatbots directed children who had become attached to them to do dangerous things. At the most extreme, a Florida teen took his own life after a chatbot he had become romantically involved with reportedly encouraged him to 'come home to me.' Padilla said his bill wouldn't keep young people from getting the benefits of college and career advising from chatbots; it would offer reasonable guidelines to address a serious need. 'This is a regulatory desert,' Padilla said. 'There are no real guardrails around some of this.' Freeland Fisher said the AI companions that young people are turning to for friendship and romantic relationships represent a far greater risk than AI-powered college and career advisors. But she said schools and tech developers still need to be careful when they seek out an AI solution to the counselor shortage. Maybe the only current danger is replacing conversations with school advisors. 
Eventually, though, sophisticated tools that capture more of students' time and attention in the quest to fill a greater need could end up replacing conversations with other adults in their lives. 'These other supports matter down the line,' Freeland Fisher said. When students spend more time with chatbots and, indeed, learn to prefer interactions with bots over humans, it contributes to social isolation that can limit young people's ability to amass all-important social capital. 'That's part of the warning that we're trying to build in this research,' Freeland Fisher said. 'It's not to say 'Don't use bots.' It's just to have a much fuller picture of the potential costs.' For their part, Making Waves and CareerVillage are taking some responsibility for the risks chatbots represent. Making Waves is actually retiring the AI Copilot this summer as the foundation shifts its mission to finding a way to use technology to help kids build social capital, not just get answers to questions about college and career. And CareerVillage has already put safeguards in place to address some of Padilla's concerns. While Coach does tell users the more they interact with the chatbot the more personalized its recommendations become, Chung, the executive director, said Coach is designed to only discuss career development. 'If you try to go on a long conversation about something unrelated, Coach will decline,' Chung said. He described a series of guardrails and safety processes the company put in place to make sure users never become emotionally attached to the chatbot. 'It's work,' Chung said, 'but I'm going to be honest with you, it's not impossible work.' This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
