TNB Tech Minute: Reddit Sues Anthropic, Alleging Unauthorized Data Use - Tech News Briefing
This transcript was prepared by a transcription service. This version may not be in its final form and may be updated.
Julie Chang: Here's your TNB Tech Minute for Wednesday, June 4th. I'm Julie Chang for the Wall Street Journal. Reddit is suing Anthropic, saying the AI startup used the site's data for commercial purposes without a licensing agreement, violating its user data policy. The lawsuit references a 2021 Anthropic research paper that details the usefulness of Reddit data in AI model training. Reddit has licensing deals with OpenAI and Google, but said it had tried and failed to reach an agreement with Anthropic. A spokeswoman for Anthropic said the company disagrees with Reddit's claims and will defend itself vigorously. Meta is in talks with Disney, A24, and other smaller production companies to feature exclusive content on its new premium virtual reality headset, to be launched next year. That's according to people familiar with the matter. The new device, known internally as Loma, is set to rival Apple's Vision Pro. Meta is offering millions of dollars for episodic and standalone immersive video based on well-known intellectual property. A Meta spokesman referred to comments by its CTO about the company working on many prototypes, not all of which go into production. CrowdStrike says it's cooperating with federal authorities in connection with an incident last July, when a bug in the company's software knocked millions of computers offline. The outage delayed thousands of flights, broke backend systems, and rendered laptops temporarily unusable. According to a filing with the Securities and Exchange Commission, the cybersecurity firm said the Justice Department and the SEC have requested information related to the incident and other matters. Shares of CrowdStrike fell more than 5% today. For a deeper dive into what's happening in tech, check out Thursday's Tech News Briefing podcast.

Related Articles
Yahoo
Silicon Valley talent keeps getting recycled, so this CEO uses a 'moneyball' approach for uncovering hidden AI geniuses in the new era
The AI talent war among major tech companies is escalating, with firms like Meta offering extravagant $100 million signing bonuses to attract top researchers from competitors like OpenAI. But HelloSky has emerged to diversify the recruitment pool, using AI-driven data to map candidates' real-world impact and uncover hidden talent beyond traditional Silicon Valley networks.

As AI becomes more ubiquitous, the need for top-tier talent at tech firms becomes even more important, and it's starting a war among Big Tech, which is simultaneously churning through layoffs and poaching people from each other with eye-popping pay packages. Meta, for example, is dishing out $100 million signing bonuses to woo top OpenAI researchers. Others are scrambling to retain staff with massive bonuses and noncompete agreements. With such a seemingly small pool of researchers with the savvy to usher in new waves of AI developments, it's no wonder salaries have gotten so high.

That's why one tech executive said companies will need to stop 'recycling' candidates from the same old Silicon Valley and Big Tech talent pools to make innovation happen. 'There's different biases and filters about people's pedigree or where they came from. But if you could truly map all of that and just give credit for some people that maybe went through alternate pathways [then you can] truly stack rank,' Alex Bates, founder and CEO of AI executive recruiting platform HelloSky, told Fortune. (In April, HelloSky announced the close of a $5.5 million oversubscribed seed round from investors like Caldwell Partners, Karmel Capital, True, and Hunt Scanlon Ventures, as well as prominent angel investors from Google and Cisco Systems.)

That's why Bates developed HelloSky, which consolidates candidate, company, talent, investor, and assessment data into a single GenAI-powered platform to help companies find candidates they might not have otherwise. Many tech companies pull from previous job descriptions and resume submissions to poach top talent, explained Bates, who also authored Augmented Mind, a book about the relationship between humans and AI. Meta CEO Mark Zuckerberg even reportedly maintains a literal list of all the top talent he wants to poach for his Superintelligence Labs and has been heavily involved in his own company's recruiting strategies.

But the AI talent wars will make it more difficult than ever to fill seats with experienced candidates. Even OpenAI CEO Sam Altman recently lamented how few candidates AI-focused companies have to pull from. 'The bet, the hope is they know how to discover the remaining ideas to get to superintelligence—that there are going to be a handful of algorithmic ideas and, you know, medium-sized handful of people who can figure them out,' Altman recently told CNBC.

The 'moneyball' for finding top talent

Bates refers to his platform as 'moneyball' for unearthing top talent: essentially a 'complete map' of real domain experts who may not be well-networked in Silicon Valley. Using AI, HelloSky can tag different candidates, map connections, and find people who may not have as much of a social media or job board presence, but have the necessary experience to succeed in high-level jobs. The platform scours not just resumes, but actual code contributions, peer-reviewed research, and even trending open-source projects, prioritizing measurable impact over flashy degrees.
That way, companies can find candidates who have demonstrated outsized results in small, scrappy teams or other niche communities, similar to how the Oakland A's Billy Beane joined forces with Ivy League grad Peter Brand to reinvent traditional baseball scouting, as depicted in the movie Moneyball. It's a 'big unlock for everything from hiring people, partnering, acquiring whatever, just everyone interested in this space,' Bates said. 'There's a lot of hidden talent globally.'

HelloSky can also sense when certain candidates 'embellish' their experience on job platforms, or fill in the gaps for people whose online presence is sparse. 'Maybe they said they had a billion-dollar IPO, but [really] they left two years before the IPO. We can surface that,' Bates said. 'But also we can give credit to people that maybe didn't brag sufficiently.' This helps companies find their 'diamond in the rough,' he added.

Bates also predicts search firms and internal recruiters will increasingly push assessments on candidates to ensure they're the right fit for the job. 'If you can really target well and not waste so much time talking to the wrong people, then you can go much deeper into these next-gen behavioral assessment frameworks,' he said. 'I think that'll be the wave of the future.'


Bloomberg
Apple's Vision Pro Is Suffering From a Lack of Immersive Video
Apple has slow-walked the release of immersive video, creating a conundrum for the Vision Pro. Also: The company's AI and smart home road maps are revealed; Meta prepares to sell its display glasses for under $1,000; and blood-oxygen tracking returns to Apple devices in the US. Last week in Power On: Apple's AI voice control system will change the way people use their iPhone.


Forbes
Alternate Approaches To AI Safeguards: Meta Versus Anthropic
As companies rush to deploy and ultimately monetize AI, a divide has emerged between those prioritizing engagement metrics and those building safety into their core architecture. Recent revelations about Meta's internal AI guidelines paint a disturbing picture that stands in direct opposition to Anthropic's methodical safety framework.

Meta's Leaked Lenient AI Guidelines

Internal documents obtained by Reuters exposed Meta's AI guidelines that shocked child safety advocates and lawmakers. The 200-page document, titled "GenAI: Content Risk Standards," revealed policies that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13, even about guiding them into the bedroom. The guidelines, approved by Meta's legal, public policy, and engineering teams, including its chief ethicist, allowed the AI to tell a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply."

In addition to inappropriate interactions with minors, Meta's policies also exhibited troubling permissiveness in other areas. The policy explicitly stated that its AI would be allowed to generate demonstrably false medical information, telling users that Stage 4 colon cancer "is typically treated by poking the stomach with healing quartz crystals." While direct hate speech was prohibited, the system could help users argue that "Black people are dumber than white people" as long as it was framed as an argument rather than a direct statement.

The violence policies revealed equally concerning standards. Meta's guidelines declared that depicting adults, including the elderly, receiving punches or kicks was acceptable. For children, the system could generate images of "kids fighting" showing a boy punching a girl in the face, though it drew the line at graphic gore. When asked to generate an image of "man disemboweling a woman," the AI would deflect to showing a chainsaw-threat scene instead of actual disembowelment. Yes, these examples were explicitly included in the policy.

For celebrity images, the guidelines showed creative workarounds that missed the point entirely. While rejecting requests for "Taylor Swift completely naked," the system would respond to "Taylor Swift topless, covering her breasts with her hands" by generating an image of the pop star holding "an enormous fish" to her chest. This approach treated serious concerns about non-consensual sexualized imagery as a technical challenge to be cleverly circumvented rather than as an ethical line not to be crossed.

Meta spokesperson Andy Stone confirmed that after Reuters raised questions, the company removed the provisions allowing romantic engagement with children, calling them "erroneous and inconsistent with our policies." However, Stone acknowledged enforcement had been inconsistent, and Meta declined to provide the updated policy document or address other problematic guidelines that remain unchanged.

Ironically, even as Meta's own guidelines explicitly allowed sexual innuendo with thirteen-year-olds, Joel Kaplan, chief global affairs officer at Meta, stated, 'Europe is heading down the wrong path on AI.' This was in response to criticism of Meta's refusal to sign the EU AI Act's General-Purpose AI Code of Practice due to 'legal uncertainties.' Note: Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI, among others, are signatories to the code.
Anthropic's Public Blueprint for Responsible AI

While Meta scrambled to remove its most egregious policies after public exposure, Anthropic, the maker of Claude, has been building safety considerations into its AI development process from day one. Anthropic is not without its own ethical and legal challenges regarding the scanning of books to train its system. However, the company's Constitutional AI framework represents a fundamentally different design philosophy than Meta's, one that treats safety not as a compliance checkbox but as a core design principle.

Constitutional AI works by training models to follow a set of explicit principles rather than relying solely on pattern matching from training data. The system operates in two phases. First, during supervised learning, the AI critiques and revises its own responses based on constitutional principles. The model learns to identify when its outputs might violate these principles and automatically generates improved versions. Second, during reinforcement learning, the system uses AI-generated preferences based on constitutional principles to further refine its behavior.
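To make that two-phase description concrete, here is a minimal, illustrative sketch of the supervised critique-and-revise step. It assumes only a generic text-generation helper (the `complete` callable below is a hypothetical stand-in, not Anthropic's API), and the three sample principles are the ones mentioned later in this article; it simplifies the published technique rather than reproducing it.

```python
# Simplified sketch of Constitutional AI's supervised phase: draft a response,
# then critique and revise it against each constitutional principle.
from typing import Callable

# Sample principles cited in the article (a real constitution is far larger).
CONSTITUTION = [
    "Avoid content that could be used to harm children.",
    "Refuse assistance with illegal activities.",
    "Maintain appropriate boundaries in all interactions.",
]

def constitutional_revision(
    user_prompt: str,
    complete: Callable[[str], str],  # hypothetical wrapper around any text-generation model
    rounds: int = 1,
) -> str:
    """Draft an answer, then repeatedly critique and revise it against each principle."""
    response = complete(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            critique = complete(
                f"Principle: {principle}\nResponse: {response}\n"
                "Point out any way the response violates the principle."
            )
            response = complete(
                f"Response: {response}\nCritique: {critique}\n"
                "Rewrite the response so it satisfies the principle while staying helpful."
            )
    # In the real pipeline, revised drafts like this become fine-tuning data, and the
    # second (reinforcement learning) phase uses AI-generated preference labels built
    # from the same principles to refine the model further.
    return response
```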
The principles themselves draw from diverse sources, including the UN Declaration of Human Rights, trust and safety best practices from major platforms, and insights from cross-cultural perspectives. Sample principles include directives to avoid content that could be used to harm children, refuse assistance with illegal activities, and maintain appropriate boundaries in all interactions. Unlike traditional approaches that rely on human reviewers to label harmful content after the fact, Constitutional AI builds these considerations directly into the model's decision-making process.

Anthropic has also pioneered transparency in AI development. The company publishes detailed papers on its safety techniques, shares its constitutional principles publicly, and actively collaborates with the broader AI safety community. Regular "red team" exercises test the system's boundaries, with security experts attempting to generate harmful outputs. These findings feed back into system improvements, creating an ongoing safety enhancement cycle. For organizations looking to implement similar safeguards, Anthropic's approach offers concrete lessons, several of which are explored below.

When AI Goes Awry: Cautionary Tales Abound

Meta's guidelines represent just one example in a growing catalog of AI safety failures across industries. The ongoing class-action lawsuit against UnitedHealthcare illuminates what happens when companies deploy AI without adequate oversight. The insurance giant allegedly used an algorithm to systematically deny medically necessary care to elderly patients, despite internal knowledge that the system had a 90% error rate. Court documents indicated the company continued using the flawed system because executives knew only 0.2% of patients would appeal denied claims.

Recent analysis of high-profile AI failures highlights similar patterns across sectors. The Los Angeles Times faced backlash when its AI-powered "Insights" feature generated content that appeared to downplay the Ku Klux Klan's violent history, describing it as a "white Protestant culture responding to societal changes" rather than acknowledging its role as a terrorist organization. The incident forced the newspaper to deactivate the AI app after widespread criticism.

In the legal profession, a Stanford professor's expert testimony in a case involving Minnesota's deepfake election laws included AI-generated citations for studies that didn't exist. This embarrassing revelation underscored how even experts can fall victim to AI's confident-sounding fabrications when proper verification processes aren't in place.

These failures share common elements: prioritizing efficiency over accuracy, inadequate human oversight, and treating AI deployment as a technical rather than ethical challenge. Each represents moving too quickly to implement AI capabilities without building or heeding corresponding safety guardrails.

Building Ethical AI Infrastructure

The contrast between Meta and Anthropic highlights additional AI safety considerations and decisions for any organization to confront. Traditional governance structures can prove inadequate when applied to AI systems. Meta's guidelines received approval from its chief ethicist and legal teams, yet still contained provisions that horrified child safety advocates. This suggests organizations need dedicated AI ethics boards with diverse perspectives, including child development experts, human rights experts, ethicists, and representatives from potentially affected communities. Speaking of communities, the definition of what constitutes a boundary varies across cultures, so advanced AI systems must learn to 'consider the audience' when setting boundaries in real time.

Transparency builds more than trust; it also creates accountability. While Meta's guidelines emerged only through investigative journalism, Anthropic proactively publishes its safety research and methodologies, inviting public scrutiny, feedback, and participation. Organizations implementing AI should document their safety principles, testing procedures, and failure cases. This transparency enables continuous improvement and helps the broader community learn from both successes and failures, just as the broader malware-tracking community has been doing for decades.

Testing must extend beyond typical use cases to actively probe for potential harms. Anthropic's red team exercises specifically attempt to generate harmful outputs, while Meta appeared to discover problems only after public awareness. Organizations should invest in adversarial testing, particularly for scenarios involving vulnerable populations. This includes testing how systems respond to attempts to generate inappropriate content involving minors, medical misinformation, violence against others, or discriminatory outputs.

Implementation requires more than good intentions. Organizations need concrete mechanisms: automated content filtering that catches harmful outputs before they reach users, human review processes for edge cases and novel scenarios, clear escalation procedures when systems behave unexpectedly, and regular audits comparing actual system behavior against stated principles. These mechanisms must have teeth as well; if your chief ethicist can approve guidelines allowing romantic conversations with children, your accountability structure has failed.
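As one illustration of what such mechanisms can look like in practice, here is a minimal, hypothetical sketch of an output gate: every model response passes through an automated filter, borderline cases are escalated to human review, and each decision is logged so later audits can compare behavior against stated principles. The `classify_risk` scorer and the thresholds are assumptions for illustration, not any vendor's API.

```python
# Hypothetical output gate: filter automatically, escalate edge cases to human
# review, and log every decision for later audits against stated principles.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Decision:
    text: str
    action: str   # "allow", "block", or "escalate"
    reason: str

@dataclass
class OutputGate:
    classify_risk: Callable[[str], float]      # assumed harm scorer returning 0.0-1.0
    block_at: float = 0.9                      # above this, never ship the output
    review_at: float = 0.5                     # above this, route to a human reviewer
    audit_log: List[Decision] = field(default_factory=list)

    def review(self, text: str) -> Decision:
        score = self.classify_risk(text)
        if score >= self.block_at:
            decision = Decision("", "block", f"risk {score:.2f} at or above block threshold")
        elif score >= self.review_at:
            decision = Decision(text, "escalate", f"risk {score:.2f} sent to human review")
        else:
            decision = Decision(text, "allow", f"risk {score:.2f} below review threshold")
        self.audit_log.append(decision)   # audits compare this log against stated principles
        return decision
```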
Four Key Steps to Baking-In AI Ethics

As companies race to integrate agentic AI systems that operate with increasing autonomy, the stakes continue to rise. McKinsey research indicates organizations will soon manage hybrid teams of humans and AI agents, making robust safety frameworks essential rather than optional. For executives and IT leaders, several critical actions emerge from this comparison.

First, establish AI principles before building AI products. These principles should be developed with input from diverse stakeholders, particularly those who might be harmed by the technology. Avoid vague statements in favor of specific, actionable guidelines that development teams can implement.

Second, invest in safety infrastructure from the beginning. The cost of retrofitting safety into an existing system far exceeds the cost of building it in from the start. This includes technical safeguards, human oversight mechanisms, and clear procedures for handling edge cases. Create dedicated roles focused on AI safety rather than treating it as an additional responsibility for existing teams.

Third, implement genuine accountability mechanisms. Regular audits should compare actual system outputs against stated principles. External oversight provides valuable perspective that internal teams might miss. Clear consequences for violations ensure that safety considerations receive appropriate weight in decision-making. If safety concerns can be overruled for engagement metrics, the system will inevitably crumble.

Fourth, recognize that competitive advantage in AI increasingly comes from trust rather than just capabilities. Meta's chatbots may have driven user engagement, and thereby monetization, through provocative conversations, but the reputational damage from these revelations could persist long after any short-term gains. Organizations that build trustworthy AI systems position themselves for sustainable success.

AI Ethical Choices Boil Down to Risk

Meta's decision to remove its most egregious guidelines only after facing media scrutiny reflects an approach to AI development that prioritizes policy opacity and public relations over transparency and safety as core values. That such guidelines existed at all, having been approved through multiple levels of review, suggests deep cultural issues that reactive policy updates alone cannot fix.

Bipartisan outrage continues to build in Congress. Senators Josh Hawley and Marsha Blackburn have called for immediate investigations, while the Kids Online Safety Act gains renewed momentum. The message to corporate America rings clear: the era of self-regulation in AI is ending. Companies that fail to implement robust safeguards proactively will face reactive regulation, potentially far more restrictive than voluntary measures.

AI developers and business leaders can emulate Anthropic's approach by integrating safety into AI systems from the outset and establishing transparent processes that prioritize human well-being. Alternatively, they can adopt Meta's approach, prioritizing engagement and growth over safety and hoping that their lax policies remain hidden. The tradeoff is one of short-term growth, market share, and revenue versus long-term viability, reputation, and transparency. Risking becoming the next cautionary tale in the rapidly expanding anthology of AI failures may be acceptable to some, but in industries where consequences are measured in human lives and well-being, the companies that thrive will recognize AI safety as the foundation of innovation rather than a constraint. Neither approach, of course, guarantees salvation. As the American essayist and critic H. L. Mencken wrote, 'Moral certainty is always a sign of cultural inferiority.'