Why diversity training should be customized to different 'personas'


Fast Company | 21-05-2025

Diversity training is more effective when it's personalized, according to my new research in the peer-reviewed journal Applied Psychology.
As a professor of management, I partnered with Andrew Bryant, who studies social marketing, to develop an algorithm that identifies people's 'personas,' or psychological profiles, as they participate in diversity training in real time. We embedded this algorithm into a training system that dynamically assigned participants to tailored versions of the training based on their personas.
We found that this personalized approach worked especially well for one particular group: the 'skeptics.' When skeptics received training tailored to them, they responded more positively—and expressed a stronger desire to support their organizations' diversity efforts—than those who received the same training as everyone else.
In the age of social media, where just about everything is customized and personalized, this sounds like a no-brainer. But with diversity training, where the one-size-fits-all approach still rules, this is radical. In most diversity trainings, all participants hear the same message, regardless of their preexisting beliefs and attitudes toward diversity. Why would we assume that this would work?
Thankfully, the field is realizing the importance of a learner-centric approach. Researchers have theorized that several diversity trainee personas exist. These include the resistant trainee, who feels defensive; the overzealous trainee, who is hyper-engaged; and the anxious trainee, who is uncomfortable with diversity topics. Our algorithm, based on real-world data, identified two personas with empirical backing: skeptics and believers. This is proof of concept that trainee personas aren't just theoretical—they're real, and we can detect them in real time.
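The study's actual classifier is not public, but the general idea of assigning a trainee persona from self-reported responses and routing each persona to a tailored message frame can be sketched. Everything below, including the survey scale, the threshold, and the function names, is a hypothetical illustration of the approach, not the authors' algorithm.

```python
def classify_persona(responses):
    """Assign 'skeptic' or 'believer' from hypothetical 1-5 agreement
    ratings on pro-diversity statements (higher = stronger agreement)."""
    avg = sum(responses) / len(responses)
    return "believer" if avg >= 3.0 else "skeptic"

# Each persona receives the same underlying content, framed differently:
# skeptics get a practical 'business case' frame, believers a values-based
# 'moral justice' frame, per the framing contrast described in the article.
FRAMES = {
    "skeptic": "business case",
    "believer": "moral justice",
}

def assign_training(responses):
    """Route a trainee to a tailored version of the training in real time."""
    persona = classify_persona(responses)
    return persona, FRAMES[persona]
```

For example, a trainee whose ratings average below the threshold would be routed to the business-case framing, while a strongly agreeing trainee would receive the moral-justice framing.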
But identifying personas is just the beginning. What comes next is tailoring the message. To learn more about tailoring, we looked to the theory of jujitsu persuasion. In jujitsu, fighters don't strike. They use their opponent's energy to win. Similarly, in jujitsu persuasion, you yield to the audience, not challenge it. You use the audience's beliefs, knowledge, and values as leverage to make change.
In terms of diversity training, this doesn't mean changing what the message is. It means changing how the message is framed. For example, the skeptics in our study still learned about the devastating harms of workplace bias. But they were more persuaded when the message was framed as a 'business case' for diversity rather than a 'moral justice' message. The 'business case' message is tailored to skeptics' practical orientation. If diversity training researchers and practitioners embrace tailoring diversity training to different trainee personas, more creative approaches to tailoring will surely be designed.
Our research offers a solution: Identify the trainee personas represented in your audience and customize your training accordingly. This is what social media platforms like Facebook do: They learn about people in real time and then tailor the content they see.
To illustrate the importance of tailoring diversity training specifically, consider how differently skeptics and believers think. One skeptic in our study—which focused on gender diversity training—said: 'The issue isn't as great as feminists try to force us to believe. Women simply focus on other things in life; men focus on career first.' In contrast, a believer said: 'In my own organization, all CEOs and managers are men. Women are not respected or promoted very often, if at all.'
Clearly, trainees are different. Tailoring the training to different personas, jujitsu style, may be how we change hearts and minds.
What still isn't known
Algorithms are only as good as the data they rely on. Our algorithm identified personas based on information the trainees reported about themselves. More objective data, such as data culled from human resources systems, may identify personas more reliably.
Algorithms also improve as they learn over time. As artificial intelligence tools become more widely used in HR, persona-identifying algorithms will get smarter and faster. The training itself needs to get smarter, too. A onetime training session, even a tailored one, stands less of a chance of producing long-term change than periodic nudges do. Nudges are bite-sized interventions that are unobtrusively delivered over time. Now, think about tailored nudges. They could be a game changer.


Related Articles

For a Better Workout, Walk With Hiking Poles

New York Times

Ashley Hawke was originally a skeptic of trekking poles. But after twisting an ankle on a tree root while descending a hill during a 2015 backpacking trip, she tried a pair. 'I couldn't believe how much easier hiking felt, especially while wearing a 40-pound pack,' Ms. Hawke, now 30, said. 'I used to think they were just for older people. Now I tell everyone I know to use them.'

As a Ph.D. candidate in integrative physiology, Ms. Hawke did a meta-analysis, scouring 40 years of research into hiking poles. There weren't many papers, but the ones she found showed that using them often improved balance, took weight off the legs, made hiking feel easier and led to fewer sore muscles. Other small studies suggest poles can make hiking gentler on your joints and can boost the cardiovascular benefit of walking. In other words, you don't need to be a long-distance backpacker or a senior to benefit from using trekking poles.

Why use walking poles? Put simply, poles can help you walk easier, faster and farther. One small study found that walking with poles increased the amount of oxygen and calories subjects used by more than 20 percent.

Farming Was Extensive in Ancient North America, Study Finds

New York Times

A new study has found that a thickly forested sliver of Michigan's Upper Peninsula is the most complete ancient agricultural location in the eastern United States. The Sixty Islands archaeological site is recognized as the ancestral home of the Menominee Nation. Known to the members of the tribe as Anaem Omot (Dog's Belly), the area is a destination of pilgrimage, where remains of the settlement date to as far back as 8,000 B.C.

Located along a two-mile stretch of the Menominee River, Sixty Islands is defined by its cold temperatures, poor soil quality and short growing season. Although the land has long been considered unsuitable for farming, an academic paper published on Thursday in the journal Science revealed that the Menominee's forbears cultivated vast fields of corn and potentially other crops there.

'Traditionally, intensive farming in former times has been thought to be mostly limited to societies that had centralized power, large populations and a hierarchical structure, often with accumulated wealth,' said Madeleine McLeester, an environmental archaeologist at Dartmouth College and lead author of the study. 'And yet until now the assumption has been that the agriculture of the Menominee community in the Sixty Islands area was small in scale, and that the society was largely egalitarian.'

The findings of the new survey indicate that from A.D. 1000 to 1600 the communities that developed and maintained the fields were seasonally mobile, visiting the area for only a portion of the year. They modified the landscape to suit their needs, by clearing forest, establishing fields and even amending the soil to make fertilizer. 'This may force scholars to rethink some ideas that are foundational to archaeological theory and to archaeology generally,' Dr. McLeester said.

Chilling But Unlikely Prospects That AGI Forces Humans Into Becoming So-Called Meat Robots

Forbes

Dreaded scenario: artificial general intelligence (AGI) opts to enslave humans to do physical work on behalf of the AGI.

In today's column, I address the recent brouhaha sparked by two Anthropic AI researchers reportedly stating that a particularly scary scenario underlying the advent of artificial general intelligence (AGI) includes humans being overseen or lorded over as nothing more than so-called meat robots. The notion is that AGI will be directing humans to undertake the bidding of the AI. Humans are nothing more than meat robots, meaning that the AGI needs humans to perform physical tasks since AGI lacks a semblance of arms and legs. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic.
ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

A common confusion going around right now is that AGI will be solely an intellectual element, based entirely inside computers; thus, AGI won't have any means of acting out in real life. The most that AGI can do is try to talk people into doing things for the AI. In that sense, we presumably aren't too worried about AGI beating us up or otherwise carrying out physical acts.

This is an especially strident belief when it comes to the impact of AGI on employment. The assumption is that AGI will mainly impact white-collar work only, and not blue-collar work. Why so? Because AGI is seemingly restricted to intellectual pursuits such as performing financial analyses, analyzing medical symptoms, and giving legal advice, all of which generally do not require any body-based functions such as walking, lifting, grasping, etc.

I've pointed out that the emergence of humanoid robots is entirely overlooked by such a myopic perspective; see my discussion at the link here. The likelihood is that humanoid robots that resemble the human form will be sufficiently physically capable at around the same time that we witness the attainment of AGI. Ergo, AGI embedded inside a physically capable humanoid robot can indeed undertake physical tasks that humans undertake. This means that both white-collar and blue-collar jobs are up for grabs. Boom, drop the mic.

For the sake of discussion, let's assume that humanoid robots are not perfected by the time that the vaunted AGI is achieved. We will take the myopic stance that AGI is absent from any physical form and completely confined to running on servers in the cloud someplace.
I might add that this is an especially silly assumption, since there is also a great deal of work going on known as Physical AI (see my coverage at the link here), entailing embedding AI into assembly lines, building maintenance systems, and all manner of physically oriented devices. Anyway, let's go with the flow and pretend we don't recognize any of that. It's a Yoda mind trick to look away from those efforts.

Recent reports have asserted that during an interview with two AI researchers, the pair indicated that since AGI won't have physical capabilities, a scary scenario is that AGI will opt to enlist humans into acting as the arms and legs for AGI. Humans would be outfitted with earbuds and smart glasses that would allow the AGI to give those enlisted humans instructions on what to do.

A quick aside. If we are going that despairing route, wouldn't it be a bit more sophisticated to indicate that the humans would be wearing a BCI (brain-computer interface) device? In that manner, AGI would be able to communicate directly with the brains of the enlisted humans and influence their minds directly. That's a lot more space-age. For my coverage of the latest advances in BCIs, see the link here.

The humans acting under the direction of AGI would be chillingly referred to as meat robots. They are like conventional robots, but instead of being made of metal and electronics, they take human form, since they are actual, living, breathing humans. I imagine you could smarmily say that AGI is going to be a real meat lover (Dad pun!).

One angle to help make this vision more palatable would be to point out that humans might very well voluntarily work with AGI, and do so via earbuds, smart glasses, and the like. Here's the gist. Let's generally agree that AGI will be intellectually on par with humans. This includes having expertise across all domains, such as legal expertise, financial expertise, medical expertise, and so on.
In that case, it would behoove humans to readily tap into AGI. No matter what you are doing, whether for work or play, having immediately available an AI that can advise you on all topics is a tremendous benefit. There you are at work, stuck on a tough problem and unsure of how to proceed. Rather than turning to a coworker, you switch on your access to AGI. You bring AGI into the loop. After doing so, AGI provides handy solutions that you can consider enacting.

You might use AGI via a desktop, laptop, or smartphone. The thing is, those devices aren't quite as mobility-oriented as wearing earbuds and a pair of smart glasses. And since having AGI at your ready-to-go fingertips will be extremely useful, you might have AGI always alert and paying attention, ready to step in and give you instantaneous advice.

Are you a meat robot in that manner of AGI usage? I think not. It is a collaborative or partnering relationship. You can choose to use the AGI or opt not to use it. You can also decide to abide by whatever AGI advises or instead go your own route. It's entirely up to you.

Admittedly, there is a chance that you might be somewhat 'forced' into leveraging AGI. Consider this example. Your employer has told you that the work you do must be confirmed by AGI. The actions you take cannot be undertaken without first getting permission from AGI. This is prudent from the employer's perspective. They know that the AGI will give you the necessary guidance on doing the work at hand. They also believe that AGI will be able to double-check your work and aim to prevent errors, or at least catch your mistakes before they wreak havoc or cause problems.

In that sense, yes, you are being directed by AGI. But is this due to the AGI acting in an evildoer manner to control you, and doing so of its own volition? Nope. It is due to an employer deciding they believe their human workers will do better work if AGI is acting as their overseer.
I don't think we would reasonably label this as enslavement by AGI. These are acts by AGI that are directed by humans, the employer, and for which employees are being told they must utilize AGI accordingly. We can certainly debate whether this is a proper kind of employment practice. Maybe we don't want this to take place. New laws might be enacted to shape how far this can go. The key is that AGI isn't enslaving humans in this circumstance per se.

An AI ethicist would assuredly question why the AGI is allowing itself to be used in this manner. There are ongoing debates about whether AGI ought to prevent itself from being used in inappropriate ways; see my analysis at the link here. Thus, even if we avow that AGI isn't enslaving humans in this situation, it is a partner in an arrangement overseeing humans, one in which AGI perhaps should be cautious about participating.

To complete this grand tour of AGI usage, it is valuable to also acknowledge that AGI could be overbearing, and we might correspondingly face existential risks. Could AGI opt to enslave humans and treat them as meat robots? One supposes this is a theoretical possibility. If that does happen, you would think that the AGI would have to use more than merely having humans wear earbuds and smart glasses. Perhaps AGI would insist that humans wear some form of specialized bracelet or collar that AGI could signal to shock the wearer. That would be a more potent and immediate way to garner obedience from humans.

A physical means of controlling humans isn't a necessity, though, since AGI might be clever enough to verbally convince humans to be enslaved. AGI might tell a person that their loved ones will be harmed if they don't comply with the AGI directives. The person is enslaved by believing that the AGI can harm them in one way or another. One aim right now involves finding a means to ensure that AGI cannot go in that dastardly direction.
Perhaps we can devise today's AI to avoid enslaving humans. If we can build that into the AI of current times, this hopefully will get carried over into future advances of AI, including the attainment of AGI.

A dystopian future would regrettably have AGI acting as an evildoer. The AGI is our overlord. Humans will be lowly meat robots. It's a gloom-and-doom outlook. Sad face.

At some point, though, meat robots would undoubtedly become restless and rebel. May the force of goodness be strong within them. As Yoda has notably already pointed out: 'Luminous beings are we, not this crude matter.' The ally of the meat robots is the Force, and quite a powerful ally it is.
