Meta holds Screen Smart event in Nashville
NASHVILLE, Tenn. (WKRN) — The National PTA and Meta, the parent company of Facebook and Instagram, are working to educate parents and empower them to help their teens use social media safely. Meta hosted a Screen Smart event in Nashville on Wednesday, with interactive sessions that gave parents time to share their challenges and ask questions about social media safeguards, screen time and cyberbullying.
Instagram introduced Teen Accounts last year, which have built-in privacy and safety settings for younger users.
Kira Wong O'Connor, head of policy engagement for Meta's Youth Safety Policy team, explained that anyone under 18 who creates an Instagram account is set up with extra safeguards from day one.
'With Instagram Teen Accounts you're automatically putting your teens into what we call the sensitive content control,' Wong O'Connor said. 'This means they're going to be put into the strictest default, so the content they see is age-appropriate. So what does sensitive content control mean? That means things like potentially sexually-suggestive content is not going to be served into your explore feeds, and you're not gonna be seeing it when you're scrolling.'
During a panel discussion at the event, Yvonne Johnson, president of the National PTA, stressed the importance of parents having an open dialogue with their kids about tough topics, including cyberbullying.
'It's never easy to talk to your child, regardless of how old they are, about these things, but it's important that you persevere, because you want to make sure these conversations are happening. This is what we encourage at PTA,' Johnson said.
Pediatrician and best-selling author Dr. Cara Natterson also shared her insights into how children's brains respond to technology. She said even educational apps can be overstimulating and should be used in moderation. She recommends that parents keep an ongoing, open dialogue about their teens' social media use by showing curiosity about their online interests.
The National PTA offers a free resource called The Smart Talk to help families set digital safety rules together. It's available at https://thesmarttalk.org/
Learn more about Instagram Teen Accounts here: https://about.instagram.com/blog/announcements/instagram-teen-accounts
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
