He's almost 90, but he wants to teach seniors how to use tech

The Star | 09-05-2025
MIAMI: Bertie McCulloch's smartphone has kept him safe in ways he hadn't imagined.
McCulloch has hypertension and began using his phone to monitor his blood pressure. He would then give that information to his doctor. Based on the data collected, McCulloch's doctor was able to determine his medication was about 30 milligrams too much.
It was then that McCulloch, 89, realised that other people his age need to understand the nuances of technology and how it can be helpful to their everyday lives. That is why he'll be teaching other seniors in North Miami about how they can get comfortable using technology.
'Right now, I'm down to 10 milligrams, and I don't have the fluctuations that I experienced before,' he told the Miami Herald. 'I would like to explain to them that they can help themselves by monitoring these certain things.'
The event, NoMi Tech Senior Smart Living Luncheon, is one of many during NoMi Tech Month, North Miami's month-long effort to infuse technological skills and funding into its backyard by offering training to residents and business owners.
McCulloch will sit on a panel with OneUnitedBank president Teri Williams and North Miami IT director Claude Charles to discuss why it's important to understand tools and offer tips to seniors who may be tech-averse. LaShevia Burns, president of Bringing Tech, will also lead a senior tech session.
'One of the things that we are more intentional about is making sure that all generations are included in this process,' North Miami Councilwoman Mary Estimé-Irvin told the Miami Herald. 'North Miami has a very large, diverse community. There's a large Haitian American community, a large Hispanic community, and the seniors are sometimes left behind. So I'm very excited about bringing in awareness and having a luncheon that makes them comfortable to adopt technology and not to be scared that they're not able to handle a big old computer.'
NoMi Tech began in 2023 as a partnership between the city and Lightship Foundation, an organisation that provides resources and support to tech business founders from diverse backgrounds. The goal is to help make North Miami a viable tech destination for businesses.
The event has since grown, and this year included a drone pilot program for adults and an afterschool music program for youth. Estimé-Irvin said having McCulloch teach seniors was a no-brainer. 'Just like anything else in life, when your peers are the ones participating, it's just believable,' she said. 'It's credible, and you can see that it's possible.'
In North Miami, a city with a predominantly Black population, 97% of households have access to a computer and 81% have Internet access, according to recent US Census data. About 14% of the population is 65 or older.
McCulloch's goal is to make his fellow seniors more comfortable using computers and smartphones. For him, it's about more than simply operating the devices; it's about finding online communities, keeping track of appointments and scheduling prescription refills.
Still, he says, he knows many seniors are worried about the dangers they potentially face while using technology. 'We don't like our information out there, and there is that kind of sense that you're giving up too much when you do all of that,' McCulloch said.
Staying connected
McCulloch, who was born and raised in Jamaica, has seen tech come and go, from beepers to BlackBerrys to iPhones. He moved to New York in the 1960s when his wife was recruited for a nursing job. At the time, he worked as a junior chemist for Schenley Industries before moving to Technicon Corporation, where he saw the company transition from a manual system of analysing blood to a more automated process. McCulloch eventually worked at the New York City Health and Hospitals Corporation, where again he witnessed the way tech changed a workplace.
'The evolution of technology and computer systems could not be avoided in the path that I was on,' he said. 'You couldn't avoid it. You embraced it because this was the trend. This was what was going to make you more efficient, and everything will be more accurate.'
McCulloch moved to South Florida in 2012 after he retired and was convinced by his children to get a smartphone. Like many senior citizens, he was slightly hesitant, but he went along with it, having had more experience with technology than most people his age, he said.
The youngest of 10 children, McCulloch lost his only remaining sister last year at 98. He described her as being in great shape, with a sharp mind, and said she frequently used her tablet.
It's what he desires for himself and his peers. 'The awareness of being in the present with everything around you is so important now,' he said.
McCulloch knows it can be a challenge because seniors might not trust the technology. 'When you talk about seniors' struggles with technology, it has to do with the negativity surrounding social media and the cybersecurity aspects of it,' he said, adding he'd stopped using Facebook altogether after he got hacked several times. Still, he encouraged those who do use it to change their passwords frequently.
He also cautioned seniors against clicking on every popup or ad they see on the screen, warning that it may lead them to a website that asks for sensitive information. McCulloch noted that there are resources to help seniors avoid misinformation online, and recommended Snopes as a way to determine whether something they see on the Internet is true.
McCulloch said technology has its uses for seniors, such as building community outside their homes, especially for those who live alone. Having a community online can help ward off loneliness and keep seniors aware of what's going on in the neighbourhood.
'It's amazing the information that comes through that chat,' McCulloch said. 'You get alerted about people who might not be from your neighbourhood, about things that you're concerned about in terms of safety, and you get tips about how you should deal with it.' He added it's a good way to also keep up with community events and meetings.
Ultimately, McCulloch wants seniors to understand the rapidly changing world around them.
'The times have changed so rapidly, almost everything you touch or go to is computerised,' he said. 'You have to make the effort to understand what is happening around you.' – Miami Herald/Tribune News Service