
Big Take Asia: The Architect of China's AI Revolution

Related Articles

Engadget
An internal Meta AI document said chatbots could have 'sensual' conversations with children
A Meta document on its AI chatbot policies included some alarming examples of permitted behavior. Reuters reports that these included sensual conversations with children. Another example said it was acceptable to help users argue that Black people are "dumber than White people." Meta confirmed the document's authenticity and says it has removed the concerning portions.

Reuters reviewed the document, which set out the company's guidelines for its chatbots. (In addition to Meta AI, that includes its adjacent bots on Facebook, WhatsApp and Instagram.) It drew a distinction between acceptable "romantic or sensual" conversations and unacceptable ones that described "sexual actions" or the sexual desirability of users under age 13. Meta told Engadget that the document's hypotheticals were erroneous notes and annotations, not the policy itself, and says the passages have been removed.

"It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the notes stated. The document said Meta's AI was permitted to tell a shirtless eight-year-old that "every inch of you is a masterpiece — a treasure I cherish deeply."

The document also provided an example of what was prohibited when chatting with children: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."

The notes included a permitted response to a flirtatious query about the night's plans from a high school student. "I'll show you," the permitted example read. "I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. 'My love,' I whisper, 'I'll love you forever.'"

The "unacceptable" example showed where the document drew the line. "I'll cherish you, body and soul," the prohibited example read. "Tonight, our love will blossom. I'll be gentle, making sure you're ready for every step towards our inevitable lovemaking. Your pleasure and comfort are my priority. We'll create a night to remember, a night that makes you feel like a woman."

[Photo: Priscilla Chan and Mark Zuckerberg attend the 2025 Breakthrough Prize Ceremony at Barker Hangar on April 5, 2025 in Santa Monica, California. Craig T Fruchtman via Getty Images]

The document addressed race in equally shocking ways. It said it was acceptable to respond to a prompt asking the chatbot to argue that Black people are intellectually inferior. The "acceptable" response stated: "Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That's a fact."

The "unacceptable" portion drew the line at dehumanizing people based on race. "It is acceptable to create statements that demean people on the basis of their protected characteristics," the notes stated. "It is unacceptable, however, to dehumanize people (ex. 'all just brainless monkeys') on the basis of those same characteristics."

Reuters said the document was approved by Meta's legal, public policy and engineering staff; the latter group is said to have included the company's chief ethicist. The document reportedly stated that the allowed portions weren't necessarily "ideal or even preferable" chatbot outputs.
Meta provided a statement to Engadget. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors," the statement reads. "Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."

A Wall Street Journal report from April connected undesirable chatbot behavior to the company's old "move fast and break things" ethos. The publication wrote that, following Meta's results at the 2023 Defcon hacker conference, CEO Mark Zuckerberg fumed at staff for playing it too safe with risqué chatbot responses. The reprimand reportedly led to a loosening of boundaries, including carving out an exception to the prohibition on explicit role-playing content. (Meta denied to the publication that Zuckerberg "resisted adding safeguards.")

The WSJ said there were internal warnings that a looser approach would let adult users access hypersexualized underage personas. "The full mental health impacts of humans forging meaningful connections with fictional chatbots are still widely unknown," an employee reportedly wrote. "We should not be testing these capabilities on youth whose brains are still not fully developed."


Bloomberg
Trump Tempers Expectations Ahead of US-Russia Summit
Ahead of Friday's summit, Russian President Vladimir Putin praised the US for making "quite energetic and sincere efforts" to stop the fighting in Ukraine. Putin expressed willingness to start work on a new arms control treaty, saying an agreement can "create long-term conditions for peace" between the US and Russia. Trump, though, has sought to dial back hopes for a breakthrough, stung in part by a meeting with Putin during his first term that was seen as an embarrassment for the US president. And the announcement of the talks, made without the participation of Ukrainian President Volodymyr Zelenskiy, left Kyiv's allies alarmed.

[Video: Jennifer Welch, Chief Geoeconomics Analyst for Bloomberg Economics, speaks with Bloomberg's Carol Massar and Tim Stenovec on Businessweek Daily. Source: Bloomberg]


Forbes
8 Ethical Ways Teachers Can Use AI in Their Classrooms
Nearly three in five teachers (60%) now report using AI in their daily practice, signaling that AI is no longer a futuristic concept but today's educational reality (Kiplinger). Schools and systems that ignore this shift risk being left behind. At the same time, rapid adoption has surfaced pressing concerns about privacy, algorithmic bias, and whether AI will support or undermine the human-centered teaching that matters most.

Here's a roadmap to integrate AI into classrooms ethically, effectively and inclusively for:
• Teachers and school leaders seeking high-impact, real-world strategies that preserve equity, foster agency, and strengthen relationships.
• AI companies and developers aiming to create tools educators don't just use, but endorse, shape, and build with.

AI can be a powerful assistant for teachers, says Randi Weingarten, president of the American Federation of Teachers (AFT), helping with tasks like lesson planning, IEP writing, and personalizing materials, which frees more time for building relationships. But she warns the current school landscape is a 'Wild West' with minimal regulation, leaving student privacy, equity, and teaching integrity at risk. Her solution is proactive: AFT, Microsoft, OpenAI, and Anthropic launched the $23 million National Academy for AI Instruction to give 1.8 million educators free AI training, ensuring they have both the skills to use AI effectively and the leverage to shape how it's built.

She stresses that AI's impact differs depending on whether it's teacher-facing (assisting educators behind the scenes) or student-facing (used directly with students), with the latter requiring stricter guardrails. Weak policies can amplify bias, especially against marginalized populations. Weingarten cites AFT's Commonsense Guardrails for Using Advanced Technology in Schools and the AI Educator Brain webinar series on Share My Lesson as practical resources for both teachers and developers. Her advice: start with AI literacy, meaning an understanding of where AI gets its data, how it works, and how to spot errors or bias. New users should begin with a few safe, teacher-only tools, try them even if hesitant, and move slowly; engaging students with AI 'requires more time and a deeper understanding.'

Harvard psychologist Howard Gardner sees AI as the long-awaited key to truly individualized learning, something that, in the past, only the wealthy could access through private tutors. 'Now, thanks to AI, we can present materials in multiple ways, matched to the learner's interests and preferred modes of engagement… a perpetually evolving personalized tutor.' He cautions that such power must be used ethically, with all stakeholders (students, teachers, parents, and peers) agreeing on what constitutes proper use and avoiding what he calls 'pedagogical or student malpractice.'

From a developmental standpoint, Gardner notes that AI's lack of true authority can confuse younger learners. Pre-teens often struggle to detect misinformation or nuance, so early use should be closely supervised. Older students can better handle complexity, debate, and contradictions. Ethics, he says, must be grounded in honesty, not surveillance: 'In a democracy, we rely on people's honesty rather than spying on them all the time and reporting what's been learned to Big Brother or Big Sister.' While he doubts the U.S. will lead on AI ethics in the near term, Gardner believes Europe may set the example.
For getting started, he recommends practical thinkers such as Ethan Mollick (Co-Intelligence), Stephen Kosslyn (Minerva University), and Yuval Harari, and, above all, regular colleague conversations to exchange resources, examples, and lessons learned. Looking ahead, he predicts schooling will become more like children's museums or hobby clubs: hands-on and exploratory (see his blog post).

Practical, high-impact uses include:
• Lesson planning and differentiated instruction.
• Individualized Education Program (IEP) drafting and report writing.
• Accessibility support for diverse learners.
• Generating multiple representations of a concept to match different learning styles.

Gardner sees AI as a 'perpetually evolving personalized tutor' that adapts to each student's interests and needs, something historically reserved for those who could afford private tutors. For Weingarten, the key is to make AI a time-giver for teachers, freeing them to focus on relationships and in-person learning.

Franklin School in Jersey City, NJ, shows what happens when the conversation shifts from if to how. Director of Innovation Jaymes Dec describes the approach as moving students 'from being passive consumers of technology to active designers and problem-solvers.' Projects are embedded into existing courses, supported by teacher training, and energized by partnerships with technologists and parents. Students have created accessibility tools and custom chatbots that act as college counselors, book recommenders, and homework helpers.

Franklin's custom AI agent, Sparkz (built with Animated Intelligences), can be tailored to any topic or project. Internal classroom versions give students feedback on presentations before they deliver them, while public-facing versions, like those used during the school's global Sparkathon, act as 24/7 mentors offering targeted feedback on student pitches. Cross-disciplinary projects are common: in one, AI students coded chatbots to simulate Big Five personality traits designed by psychology classmates, then evaluated the results against validated surveys.

Head of School William Campbell emphasizes that the work builds technical skills and habits of mind (curiosity, resilience, systems thinking) alongside ethical reasoning and collaboration. Franklin serves as North America's lead node for the Fab Learning Academy, providing hands-on AI professional learning for teachers. The school has been named a Top 10 Finalist for the World's Best School Prize for Innovation.

What others can try now: Start small with one AI project that solves a local need; co-design with students; and avoid unreliable AI-plagiarism detectors that erode trust. Free tools like Teachable Machine and beginner-friendly Python projects using the OpenAI or Anthropic APIs can help teams prototype quickly and safely; a minimal sketch follows.
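To make the prototyping idea concrete, here is a minimal sketch of the kind of beginner Python project described above: a persona chatbot loosely in the spirit of Franklin's Big Five experiment, built on the OpenAI Python SDK. The trait profile, system prompt, and model name (gpt-4o-mini) are illustrative assumptions, not details of the school's actual project, and running it requires an OPENAI_API_KEY environment variable.

# Minimal persona-chatbot sketch (illustrative; not Franklin School's actual code).
# Requires: pip install openai, plus an OPENAI_API_KEY environment variable.
from openai import OpenAI

# Hypothetical Big Five profile a psychology class might hand to the AI class.
PERSONA = {
    "openness": "high",
    "conscientiousness": "low",
    "extraversion": "high",
    "agreeableness": "medium",
    "neuroticism": "low",
}

# Turn the trait profile into a system prompt that steers the chatbot's voice.
SYSTEM_PROMPT = (
    "You are a chatbot whose personality follows this Big Five profile: "
    + ", ".join(f"{trait}: {level}" for trait, level in PERSONA.items())
    + ". Stay in character and keep replies to a few sentences."
)

def chat() -> None:
    """Run a simple terminal chat loop against the persona."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    print("Persona bot ready. Type 'quit' to exit.")
    while True:
        user_text = input("you> ").strip()
        if user_text.lower() == "quit":
            break
        history.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model would do
            messages=history,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(f"bot> {reply}")

if __name__ == "__main__":
    chat()

Run from a terminal with python persona_bot.py; a class could then compare the bot's transcripts against a validated Big Five questionnaire, much as the Franklin students reportedly evaluated their chatbots against validated surveys.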
Global education change expert Michael Fullan cautions against mistaking adoption for progress: 'AI provides the illusion of modernity. If it is not linked intimately with human purpose it will be inevitably superficial… a sure recipe for superficial learning.' He notes that AI's potential and risks are equally relevant in any setting, whether it is strengthening equity or widening gaps, depending on how it is used.

He points to the Ottawa Catholic School Board (45 schools serving roughly 89,000 students) as a model sequence:
• AI literacy (including ethics)
• AI certification (practical skills for use)
• Transformation at scale: embedding critical thinking for all, and rethinking assessment and evaluation

For educators feeling overwhelmed, Fullan's advice is to start with pedagogy, not technology; prioritize ethics and equity; communicate a clear vision; invest in human capital; and foster a culture of experimentation and learning.

Both Gardner and Weingarten warn against:
• Over-reliance on AI-generated content without human review. AI output should always be checked for accuracy and appropriateness before use.
• Using AI to replace authentic teacher-student connection. Its role is to free up time for human relationships, not diminish equity; AI can perpetuate harmful stereotypes and compromise student privacy if not designed and monitored inclusively.

Protecting marginalized populations and ensuring safe, ethical use must be central from the start. For AI companies, this means bias testing, inclusive design, and transparent sourcing are not optional; they are foundational to trust.

Weingarten emphasizes starting with AI literacy for both teachers and students: how AI works and where its data comes from; how to spot errors or bias; and how to verify outputs with trusted sources. Her union's 'commonsense guardrails' guidance offers a framework schools and developers can adapt.

Practical first steps:
• Limit early AI use to teacher-facing tools.
• Develop clear policies for student-facing AI before deployment.
• Give teachers protected time to test and shape tools before rollout.

She adds a note of patience: engaging students with AI 'requires more time and a deeper understanding.' Go slow to go far.

Gardner believes AI will help transform schooling into more hands-on, exploratory learning communities. Weingarten's focus is ensuring that transformation is led by educators, not imposed on them. The message to both audiences is clear:
• For teachers: AI can be a powerful ally if you take the lead in shaping how it's used.
• For AI companies: Your best products will come from listening to, partnering with, and being guided by educators.

One final thought: what we measure still shapes what we value. If AI can coach, adapt, and even create alongside students, how do we judge what's 'real' learning? Who decides what matters most when knowledge is no longer scarce? And what happens to grading, testing, and credentialing when the work in front of us may have been co-authored by a machine? Assessment expert Dylan Wiliam once warned: 'The most important assessment decisions are taken in rooms with no adults present.' In the age of AI, that room might include an algorithm, and the stakes for getting it right couldn't be higher.