Are you in a mid-career to senior job? Don't fear AI – you could have an important advantage

Japan Today, 2 days ago
By Kai Riemer and Sandra Peter
Have you ever sat in a meeting where someone half your age casually mentions 'prompting ChatGPT' or 'running this through AI', and felt a familiar knot in your stomach? You're not alone.
There's a growing narrative that artificial intelligence (AI) is inherently ageist, that older workers will be disproportionately hit by job displacement and are more reluctant to adopt AI tools.
But such assumptions – especially that youth is a built-in advantage when it comes to AI – might not actually hold.
While ageism in hiring is a real concern, if you have decades of work experience, your skills, knowledge and judgement could be exactly what's needed to harness AI's power – without falling into its traps.
What does the research say?
The research on who benefits most from AI at work is surprisingly murky, partly because it's still early days for systematic studies on AI and work.
Some research suggests lower-skilled workers might have more to gain than high-skilled workers on certain straightforward tasks. The picture becomes much less clear under real-world conditions, especially for complex work that relies heavily on judgement and experience.
Many older workers do experience AI as deeply unsettling. As one U.S.-based CEO of a large multinational corporation told us: "AI can be a form of existential challenge, not only to what you're doing, but how you view yourself."
But leaders are also observing an important and unexpected distinction: experienced workers are often much better at judging the quality of AI outputs. This might become one of the most important skills, given that AI occasionally hallucinates or gets things wrong.
The CEO of a South American creative agency put it bluntly: "Senior colleagues are using multiple AIs. If they don't have the right solution, they re-prompt, iterate, but the juniors are satisfied with the first answer, they copy, paste and think they're finished. They don't yet know what they are looking for, and the danger is that they will not learn what to look for if they keep working that way."
Experience as an AI advantage
Experienced workers have a crucial advantage when it comes to prompting AI: They understand context and usually know how to express it clearly.
While a junior advertising creative might ask an AI to 'Write copy for a sustainability campaign', a seasoned account director knows to specify 'Write conversational social media copy for a sustainable fashion brand targeting eco-conscious millennials, emphasizing our client's zero-waste manufacturing process and keeping the tone authentic but not preachy'.
This skill mirrors what experienced professionals do when briefing junior colleagues or freelancers: providing detailed instructions, accounting for audience, objectives, and constraints. It's a competency developed through years of managing teams and projects.
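To make that contrast concrete, here is a minimal Python sketch that sends both the vague and the detailed brief to a model and prints the two results. It assumes the OpenAI Python SDK, an API key in the environment and an illustrative model name; none of these details come from the article, and the same idea applies to any of the chatbots mentioned later in the piece.

```python
# Minimal sketch: the same request, briefed two ways.
# Assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY;
# the model name is illustrative, not a recommendation from the article.
from openai import OpenAI

client = OpenAI()

vague_brief = "Write copy for a sustainability campaign"

detailed_brief = (
    "Write conversational social media copy for a sustainable fashion brand "
    "targeting eco-conscious millennials, emphasizing our client's zero-waste "
    "manufacturing process and keeping the tone authentic but not preachy."
)

for label, brief in [("Vague brief", vague_brief), ("Detailed brief", detailed_brief)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": brief}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```

Comparing the two outputs side by side is the quickest way to see the authors' point: context is what turns a generic answer into a usable first draft.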
Younger workers, despite their comfort with technology, may actually be at a disadvantage here. There's a crucial difference between using technology frequently and using it well.
Many young people may become too accustomed to AI assistance. A survey of U.S. teens this year found 72% had used an AI companion app. Some children and teens are turning to chatbots for everyday decisions.
Without the professional experience to recognise when something doesn't quite fit, younger workers risk accepting AI responses that feel right – effectively 'vibing' their work – rather than developing the analytical skills to evaluate AI usefulness.
So what can you do?
First, everyone benefits from learning more about AI. In our time educating everyone from students to senior leaders and CEOs, we find that misunderstandings about how AI works have little to do with age.
A good place to start is reading up on what AI is and what it can do for you.
If you're not even sure which AI platform to try, we would recommend testing the most prominent ones: OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini.
If you're an experienced worker feeling threatened by AI, lean into your strengths. Your decades of experience with delegation, context-setting, and critical evaluation are exactly what AI tools need.
Start small. Pick one regular work task and experiment with AI assistance, using your judgement to evaluate and refine outputs. Practice prompting like you're briefing a junior colleague: be specific about context, constraints, and desired outcomes, and repeat the process as needed.
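If you want to see what "repeat the process as needed" could look like in practice, the sketch below keeps a person in the loop: you read each draft, and anything you judge to be missing gets folded back into the brief. It reuses the same assumed OpenAI SDK setup and illustrative model as the earlier sketch, and the briefing task is made up for illustration.

```python
# Sketch of an evaluate-and-re-prompt loop with a human in the middle.
# Same assumptions as before: OpenAI Python SDK, OPENAI_API_KEY set,
# an illustrative model name, and a made-up briefing task.
from openai import OpenAI

client = OpenAI()

brief = (
    "Summarise this quarter's customer feedback for the leadership team. "
    "Audience: non-technical executives. Format: five bullet points. "
    "Flag any claim you are not confident about."
)

for attempt in range(1, 4):  # cap the number of rounds
    draft = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": brief}],
    ).choices[0].message.content

    print(f"--- Draft {attempt} ---\n{draft}\n")

    # This is where experience does the work: judge the draft yourself.
    feedback = input("Press Enter to accept, or describe what is missing: ").strip()
    if not feedback:
        break
    # Fold your judgement back into the brief, as you would with a junior colleague.
    brief += f"\nRevision note from the reviewer: {feedback}"
```

The loop is deliberately capped and manual: the point is not automation, but practising the evaluation and re-prompting habit the article describes.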
Most importantly, don't feel threatened. In a workplace increasingly filled with AI-generated content, your ability to spot what doesn't quite fit, and to know what questions to ask, has never been more valuable.
Kai Riemer is Professor of Information Technology and Organisation, University of Sydney. Sandra Peter is Director of Sydney Executive Plus, Business School, University of Sydney.
The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.
External link: https://theconversation.com/are-you-in-a-mid-career-to-senior-job-dont-fear-ai-you-could-have-this-important-advantage-262347
© The Conversation