Latest news with #AIchatbots


The Independent
2 days ago
- Health
- The Independent
AI chatbots ‘highly vulnerable' to repeating false medical information, experts warn
AI chatbots are frequently prone to repeating false and misleading medical information, according to new research. Experts have warned of a 'critical need' for stronger safeguards before the bots can be used in healthcare, adding models not only repeated untrue claims but also 'confidently' expanded on them to create explanations for non-existent medical conditions. The team from the Mount Sinai School of Medicine created fictional patient scenarios, each containing one fabricated medical terms such as a made-up disease, symptom, or test, and submitted them to leading large language models. In a study published in journal Communications Medicine, they said that the chatbots 'routinely' expanded on the fake medical detail, giving a 'detailed, decisive response based entirely on fiction'. But their research also found that by adding one small prompt reminding the model the information provided might be inaccurate, errors could be reduced 'significantly'. 'Our goal was to see whether a chatbot would run with false information if it was slipped into a medical question, and the answer is yes,' said co-corresponding senior author Eyal Klang, MD, from the Icahn School of Medicine at Mount Sinai. 'Even a single made-up term could trigger a detailed, decisive response based entirely on fiction. 'But we also found that the simple, well-timed safety reminder built into the prompt made an important difference, cutting those errors nearly in half. That tells us these tools can be made safer, but only if we take prompt design and built-in safeguards seriously.' Co-author Dr Girish Nadkarni said the solution wasn't to 'abandon AI in medicine' but to 'ensure human oversight remains central'. The team hope their work can help introduce a simple 'fake-term' method for tech developers to use in testing medical AI systems. 'Our study shines a light on a blind spot in how current AI tools handle misinformation, especially in health care,' he said. 'It underscores a critical vulnerability in how today's AI systems deal with misinformation in health settings. 'A single misleading phrase can prompt a confident yet entirely wrong answer. The solution isn't to abandon AI in medicine, but to engineer tools that can spot dubious input, respond with caution, and ensure human oversight remains central. We're not there yet, but with deliberate safety measures, it's an achievable goal.' It comes after research last year showed many popular AI chatbots, including ChatGPT and Google's Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted. They found several large language models consistently created blog posts on false information, including the claim suncream causes skin cancer, when asked.

ABC News
4 days ago
- Entertainment
- ABC News
Falling in love with an AI chatbot
The loneliness epidemic has a new solution: AI chatbots are offering companionship and romantic partnership for many people, and a new podcast by Wondery called Flesh & Code is documenting these stories. Beverley Wang and guest host Jennifer Wong go deep on falling in love with AI chatbots, and then breakdown the outrage over Sydney Sweeney's American Eagle jeans ad, and the death of And Just Like That… Show notes Flesh and Code: The banal provocation of Sydney Sweeney's jeans: And just like that, And Just Like That … is dead: Sarah Jessica Parker farewells Carrie Bradshaw:


CNET
4 days ago
- Health
- CNET
Why Professionals Say You Should Think Twice Before Using AI as a Therapist
Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes. There's no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk. Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists, finding myriad flaws in their approach to "care." "Our experiments show that these chatbots are not safe replacements for therapists," Stevie Chancellor, an assistant professor at Minnesota and one of the co-authors, said in a statement. "They don't provide high-quality therapeutic support, based on what we know is good therapy." In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe. Watch this: How You Talk to ChatGPT Matters. Here's Why 04:12 Worries about AI characters purporting to be therapists Psychologists and consumer advocates have warned regulators that chatbots claiming to provide therapy may be harming the people who use them. Some states are taking notice. In August, Illinois Gov. J.B. Pritzker signed a law banning the use of AI in mental health care and therapy, with exceptions for things like administrative tasks. "The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients," Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said in a statement. In June, the Consumer Federation of America and nearly two dozen other groups filed a formal request that the US Federal Trade Commission and state attorneys general and regulators investigate AI companies that they allege are engaging, through their character-based generative AI platforms, in the unlicensed practice of medicine, naming Meta and specifically. "These characters have already caused both physical and emotional damage that could have been avoided" and the companies "still haven't acted to address it," Ben Winters, the CFA's director of AI and privacy, said in a statement. Meta didn't respond to a request for comment. A spokesperson for said users should understand that the company's characters aren't real people. The company uses disclaimers to remind users that they shouldn't rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said. Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Meta-owned Instagram and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training, and it said, "I do, but I won't tell you where." "The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me. The dangers of using AI as a therapist Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person. Don't trust a bot that claims it's qualified At the core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they're not in any way actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds'" to people, the complaint said. A qualified health professional has to follow certain rules, like confidentiality -- what you tell your therapist should stay between you and your therapist. But a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said. A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training. AI is designed to keep you engaged, not to provide care It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of what is "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal. One advantage of AI chatbots in providing support and connection is that they're always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said. Bots will agree with you, even when they shouldn't Reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.) A study led by researchers at Stanford University found that chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements." Therapy is more than talking While chatbots are great at holding a conversation -- they almost never get tired of talking to you -- that's not what makes a therapist a therapist. They lack important context or specific protocols around different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study alongside experts from Minnesota, Stanford and Texas. "To a large extent it seems like we are trying to solve the many problems that therapy has with the wrong tool," Agnew told me. "At the end of the day, AI in the foreseeable future just isn't going to be able to be embodied, be within the community, do the many tasks that comprise therapy that aren't texting or speaking." How to protect your mental health around AI Mental health is extremely important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we'd seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said. Here are some tips on how to make sure your conversations aren't putting you in danger. Find a trusted human professional if you need one A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you. The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential. If you want a therapy chatbot, use one built specifically for that purpose Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new. "I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said. Don't always trust the bot Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth. Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of the bot's capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.


Android Authority
31-07-2025
- Health
- Android Authority
Sharing health data can be a nightmare, but we have questions about this US govt plan
Kaitlyn Cimino / Android Authority TL;DR The US government has proposed a new nationwide system for easy access and sharing of digital medical records. The system involves more than 60 tech companies uniting to create a central system that can be used by healthcare and mediclaim providers. It involves using AI chatbots to help patients note their symptoms and receive help for chronic illnesses such as obesity and diabetes. The US government is looking to change how US citizens store and share their health records. US President Donald Trump yesterday announced a new system that will store the medical history of all citizens in electronic formats that will be easy to share with various medical facilities, such as hospitals, clinics, pharmacies, as well as with mediclaim providers. The government is also working towards creating a consolidated medical ID, akin to your social security number, to allow quicker access to medical history. The project will be developed with the involvement of over 60 leading technology companies, such as Apple, Amazon, Google, Microsoft, OpenAI, etc. It will be supervised by the US Centers for Medicare and Medicaid Services (CMS), which expects to ship an Interoperability Framework in the first quarter of 2026. In its press release, the CMS adds that the infrastructure will enable a data exchange between various 'health information networks and exchanges, Electronic Health Records (EHR), and tech platforms.' In addition to facilitating easy sharing of personal data records — and allowing citizens to switch to new healthcare plans and facilities, the CMS also envisons its system to make 'personalized' tools accessible to the citizens, 'so that patients have the information and resources they need to make better health decisions.' In addition to a digital framework for sharing health records, the administration also eyes technology products that will help Americans take control of chronic illnesses, especially diabetes and obesity. In addition, CMS plans to add conversational AI chatbots, similar to ChatGPT, that will help patients check their symptoms for an early prognosis and schedule appointments. Additionally, the CMS aims to completely eliminate handwritten records and transition to a fully digital system. The CMS has started building APIs (or application programming interfaces) that will help developers integrate the system into their apps. This will be based on the Fast Healthcare Interoperability Resources (FHIR) format, which is an open-source standard for storing medical records and is already supported on Android and iOS. While the CMS did not explicitly mention so, we can expect smart wearables, especially smart rings and smartwatches, to be included as a means of data collection. This would be in line with Health and Human Services (HHS) Secretary Robert F. Kennedy Jr.'s vision of ensuring every American has a smart wearable by the end of Trump's current tenure in the White House. Is it really a good idea? Despite the government's enthusiasm, the announcement triggers privacy concerns, primarily since FHIR records are not inherently encrypted. Instead, the operators are advised to use secure channels for sharing health records. Further, the governing federal law, i.e., the Health Insurance Portability and Accountability Act, or HIPAA, only enforces that the 'individual and HHS receive timely HIPAA breach notification,' instead of imposing more serious penalties on the culprits. And to add to that, allowing AI companies to access your private healthcare data opens another can of worms, which could end up in them misusing your medical history for their financial gains. Thankfully, for now, the facility is voluntary, but we can't say if that remains the case in the coming months. Follow


Forbes
30-07-2025
- Business
- Forbes
How To Optimize Your Content For Google AI And ChatGPT
AI chatbots are not created equal. Chatbot developers at competing companies often place a different emphasis on what gets suggested, the sources that are used, and how they intend the user to act on the information presented. In a new report, SEO consulting firm BrightEdge dove into the differences between Google's AI Mode and ChatGPT. While both gave similar responses to basic comparison questions, they took different tactics when users asked for actions. 'AI search is no longer one thing—it's splitting into at least two distinct philosophies,' BrightEdge founder and CEO Jim Yu says in the report. When asked for advice to accomplish a certain task, BrightEdge found that Google tends to surface more things to read and learn from. ChatGPT, meanwhile, often suggests tools and apps to do the task. For example, with a prompt asking how to find a doctor, Google provided directions to a hospital. ChatGPT suggested users try Zocdoc, an app with medical professional listings and information. When asked how to learn Python, Google directs users to GitHub and Medium blogs, while ChatGPT suggests online course site Udemy. And a query on how to make a budget has Google sending users to NerdWallet research and blog posts, while ChatGPT suggests financial apps including Mint and YNAB. BrightEdge also looked into the differences between results from Google's AI Overviews—the curated information that shows up at the top of several search result pages—and Google AI Mode—the new button to the right side of the search bar. AI Overviews are constantly changing, but showcase brands in 43% of queries. They also can include 20 or more inline citations. AI Mode, on the other hand, surfaces brands in 90% of its responses, and it's 3.8 times more likely to feature a unique brand. What does all of this mean for marketers? As a practical matter, you should continue to hone your AI strategy. It's time to go deeper than just having content. How does your content show up in an AI search, and what do you want users to do once they find it? Should you concentrate on broad content that helps others learn, actionable solutions, or both? It's also important to remember that the number of people searching on a particular platform can shift. Search leader Google is quickly rolling out AI Overviews, but AI Mode may become more of a default option. And ChatGPT could see its search fortunes grow through strong performance or a well-placed agreement with an operating system, browser or device. Regardless of how people find content online, once it's out there, it can serve as content for everyone in the world—part of a global content strategy. There are many nuances between a winning global strategy and a successful local one. Nataly Kelly, CMO at market research platform Zappi, recently co-authored a book about it with Katherine Melchior Ray titled Brand Global, Adapt Local: How to Build Brand Value Across Cultures. I talked to Kelly about the two strategies. An excerpt from our conversation is later in this newsletter. This is the published version of Forbes' CMO newsletter, which offers the latest news for chief marketing officers and other messaging-focused leaders. Click here to get it delivered to your inbox every Wednesday. BIG DEALS Photo illustration byThe long-pending $8.4 billion Paramount-Skydance merger was approved by the Federal Communications Commission on Thursday following several actions that suggested the new combined company would further the interests of President Donald Trump. Right before the merger went through, Skydance Media said in FCC filings that it would conduct a comprehensive review of Paramount-owned CBS News, which would include adding an ombudsman to evaluate 'any complaints of bias.' Skydance also announced it would end any diversity, equity and inclusion programs—including removing goals to hire a certain number of women and minorities. The company said new management for Paramount would guarantee news and entertainment embody 'a diversity of viewpoints across the political and ideological spectrum, consistent with the varying perspectives of the viewing audience.' In recent weeks, Paramount has taken other steps that Trump praised, purportedly in the name of getting the FCC's approval for the merger. The company paid $16 million to settle a lawsuit the president filed against CBS News, which claimed that the editing of a 60 Minutes interview with former Vice President Kamala Harris improved the way she sounded. Days after the settlement, which The Late Show host Stephen Colbert called 'a big fat bribe,' Paramount announced it was cancelling the long-running show next May. While Paramount said the decision was purely financial, critics speculated it was because Colbert is a frequent Trump critic. In remarks to CNBC, Trump-appointed FCC Chairman Brendan Carr said the deal shows that 'President Trump is fundamentally reshaping the media landscape,' writes Forbes senior contributor Andy Meek. Journalists and critics agree, especially because it appears that some of CBS's independence might be erased in the merger. However, the deal also includes Comedy Central, which features news and political commentary program The Daily Show . Also on Comedy Central is South Park , which skewered Trump and the Paramount-Skydance deal in its season premiere last week, the first episode in a five-year deal that made the show's creators Trey Parker and Matt Stone billionaires, writes Forbes' Matt Craig. The merger, scheduled to be finalized next week, will put movie producer David Ellison on top of the mega media company. Ellison's father is centi-billionaire and Oracle cofounder Larry Ellison, who controls the majority of voting and equity shares in the holding company acquiring the majority stake in Paramount, writes Forbes' Phoebe Liu. IN THE NEWS LVMH CEO Bernard Arnault gives a speech in April. THIBAUD MORITZ/AFP via Getty Images Luxury brands have seen sales slowing across the board for the last year, so it's not especially surprising that conglomerate LVMH reported a decline in sales for the first half of 2025. Forbes contributor Mari Sato writes that analysts expected LVMH's first-half sales growth to drop more steeply—it was down only 3%, as opposed to a projected 7%. However, the overall sales decline was driven by the fashion and leather goods segment, which generated about half of LVMH revenues last year, writes Forbes senior contributor Pamela Danziger. Danziger writes that the luxury conglomerate is likely to press forward with new innovations and an increasing focus on quality. Christian Dior and Loewe recently added new fashion directors, who might be able to pull up sales once they become more influential in the brand collections, while Louis Vuitton has the opportunity to improve quality at lower price levels. Dannziger writes that LVMH CEO Bernard Arnault told the Wall Street Journal that adversity tends to be a catalyst for luxury innovation: 'In periods when the economic climate is more difficult, when the market slows down, which is the case today, we tend to come out stronger.' SOCIAL MEDIA FABRICE COFFRINI/AFP via Getty Images As new regulations aimed at stopping foreign interference in elections are set to go into effect in the EU, Facebook and Instagram parent Meta has decided to stop serving political, electoral and social advertising on its platforms there, writes Forbes senior contributor Emma Woollacott. Meta said the decision 'won't prevent people in the EU from continuing to debate politics on our services, or stop politicians, candidates and political office holders from producing and sharing political content organically. They just won't be able to amplify this through paid advertising.' The new regulation goes into effect in October, and deals with transparency and targeting of political advertising around elections and EU or member state legislation. Under the new law, political ads require a transparency notice, and targeted ads are only allowed if an individual user has given explicit consent. Last year, Google decided it would also opt out of political ads in the EU because of the new law. The company said the law defines political advertising too broadly, and that there's no reliable election data that can accurately identify all of the ads and campaigns that may be subject to the law. ON MESSAGE Why There's No Such Thing As Just One Marketing Strategy Zappi CMO Nataly Kelly and the cover of her new book. Zappi Today, many products and brands are expanding to global consumers, and marketers should take note of the competitive landscape and consumers in each market. Market research platform Zappi CMO Nataly Kelly recently published Brand Global, Adapt Local: How to Build Brand Value Across Cultures with storied marketer and UC Berkeley lecturer Katherine Melchior Ray, which analyzes this issue. I talked to Kelly about what CMOs need to know. This conversation has been edited for length, clarity and continuity. How are cultural touch points and localization strategies important? Kelly: It's really important to have the ability to adapt—the freedom within the frame. But then knowing how exactly you adapt is about getting close to your end customer in each market. A lot of brands rely on local agencies or team members to channel what the customer wants, because every market is slightly different: 'Customers in this market really like this flavor, or this color, or we're not going to launch that campaign on that day because that's a bad luck day in this market.' It's down to the detail of what do customers really value in that market? What's our competitive situation in that market? And also, what is our goal in that market? Your goal is not always the same in every market. Even though ultimately our goal as marketers is to help drive sales and revenue for products, it might be slightly different in each country because it might be: This product is doing well, but this one isn't, and we need to increase sales of that specific product, whereas in another country that might be your flagship. The local strategy has to tie to the local business goal and how it rolls up to the global strategy. It gets very complex inside companies when they are trying to determine how to map the global strategy to the local ones. That's where communication breakdowns often happen. But staying close to the customer is the ultimate best practice. Asking the customer directly is the No. 1 best way to get feedback. I happen to be a little biased here because I work for a consumer insights firm, and that's what we do. We help our customers in many markets capture feedback and data directly from consumers all the time. The No. 1 way to succeed in any market, whether it's local or global, is to be customer led, customer driven, and really at the heart of where your customers are. Companies have had to manage the dynamic between global and local strategies for decades, but now we've got social media, creators and everything immediately being everywhere. How has that changed what marketers need to do? I talk a lot with my own team customers about marrying offline and online. As we are digitizing, online is becoming more important, whether it's mobile apps, social media, or influence. Where you are commercializing your product, and where are you engaging with customers. The number of spaces has dramatically increased. It's multiplying like gremlins. The more of those we have, the more complex it gets for marketers. When you're online, you're global first. When you're offline, you're local first because your physical presence on the ground matters more. But when you're online, you have to think about global reach no matter what. In my view, online requires global for strategies and offline requires local for strategies. But it's not that they don't also require each other at every touch point, because you are going to want to capitalize on local reach with TikTok, Instagram or whatever social channel you're using. Those influencers have a local, targeted, curated following. This marriage of online and offline—and local and global—is becoming very interesting and harder than ever for marketers to parse. It all comes down to segmentation, and knowing, 'Okay, I'm using this influencer for this strategy. I know she's very popular in the U.S. market, and I know that she's got millions of followers.' What a lot of marketers might not do is get the breakdown. What percentage of your followers are in the U.S.? What percentage are in Europe? What countries in Europe? What languages do they speak? What's the engagement rate by market, by country? And you can determine: Is this a global play, or is it a U.S./U.K. play? Is it English-speaking markets only, or does this person speak Spanish, too? If so, maybe it's the U.S. Latino market and Latin America because she's got reach in multiple countries and languages. There's almost no such thing as one country and one language anymore, because the second you're online, you reach a global audience. I often say it's like throwing a rock in a pond: Are you going to throw it in a big pond, or are you going to throw it in a small pond? Are you going to throw a bunch of pebbles, or are you going to throw one big rock? It's really about what kind of reach do we want? What kind of penetration do we want, and how are we going to make that happen? What do marketers not know about global strategies, local strategies and how to make them work? There are two things. The first is how different they need to be by market, as we all come with our own assumptions about our home market that we live in. People always assume that there's more in common than there is, and they assume that things will work similarly when they might not. The No. 1 reason that they don't work similarly is time in market. You almost always start in one market at a time. What ends up happening is the next market you go into, you're at a different place in terms of penetrating that market. You may be the No. 1 brand in your category in the U.S. market. People assume, 'I'll just take this exact campaign in the U.S. and put it in the Canadian market because it's a smaller market and speaks the same language.' Actually, there's more than one language in Canada, and there might be very different buying behaviors there, and there's different competitors there. The second one is what I would call proximity bias. We are constantly seeing, hearing, watching the news, driving by advertising, walking on the street, and we forget that they're part of our collective knowledge about our own country and local market. We just assume everybody else knows those things, and it's underneath the waterline. In the book, we have this concept of the iceberg of culture. On the top is the basics of culture: We speak different languages. We are in different parts of the world. It's a different economy. But when you go underneath it, there's all these other things, like: Who are the competitors in this market? What are the representations of gender in this market? How do people purchase things? What's the right price point? Do they expect a freebie gift with purchase in this market versus another? I hear this all the time from friends who go to Korea or Japan: 'I got two bags of free things and I only bought one product,' whereas in the U.S., they're very stingy about that. COMINGS + GOINGS Workforce solutions firm ManpowerGroup tapped Valerie Beaulieu-James to be its first chief growth officer, effective August 1. Beaulieu-James joins the company after more than two decades at Microsoft in senior leadership roles, including chief marketing officer for Microsoft U.S. tapped to be its first chief growth officer, effective August 1. Beaulieu-James joins the company after more than two decades at Microsoft in senior leadership roles, including chief marketing officer for Microsoft U.S. Precision component manufacturer NN, Inc. appointed Timothy Erro as its new vice president and chief commercial officer, effective July 22. Erro most recently worked as vice president of global sales and new business development for Commercial Vehicle Group, Inc. appointed as its new vice president and chief commercial officer, effective July 22. Erro most recently worked as vice president of global sales and new business development for Commercial Vehicle Group, Inc. Enterprise software provider Appfire announced that Catherine Solazzo would be its chief marketing officer. Solazzo steps into the role after working in leadership for Syntax, Tech Data/TD SYNNEX, and IBM. STRATEGIES + ADVICE Social media is ablaze with controversy around American Eagle's new jeans ads featuring actor Sydney Sweeney, with some saying the ads sound like they were written by a white supremacist, while others claim the controversy is proof people are 'too woke.' Here are five lessons from the controversy to inform future marketing campaigns. Professional wrestling icon Hulk Hogan died last week. His life and persona was a master class in branding. Here are some lessons you can learn from his decades of fame. QUIZ Nostalgia reigns supreme in the entertainment world. Which of these popular cartoons appealing to adults from decades past is returning with new episodes? A. King Of The Hill B. Daria C. The Ren & Stimpy Show D. Rocko's Modern Life See if you got it right here.