OpenAI Seems to Be Making a Very Familiar, Very Cynical Choice

Last spring, Sam Altman, the chief executive of OpenAI, sat in the chancel of Harvard Memorial Church, sermonizing against advertising. 'I will disclose, just as a personal bias, that I hate ads,' he began in his usual calm cadence, one sneakered foot crossed onto his lap. He said that ads 'fundamentally misalign a user's incentives with the company providing the service,' adding that he found the notion of mixing advertising with artificial intelligence — the product his company is built on — 'uniquely unsettling.'
The comment reminded me immediately of something I'd heard before, from around the time I was first getting online. It came from a seminal paper that Sergey Brin and Larry Page wrote in 1998, when they were at Stanford developing Google. They argued that advertising often made search engines less useful, and that companies that relied on it would 'be inherently biased towards the advertisers and away from the needs of the consumers.'
I showed up at Stanford as a freshman in 2000, not long after Mr. Brin and Mr. Page had accepted a $25 million round of venture capital funding to turn their academic project into a business. My best friend there persuaded me to try Google, describing it as more ethical than the search engines that had come before. What we didn't realize was that in the midst of the dot-com crash, which coincided with our arrival, Google's investors were pressuring the co-founders to hire a more experienced chief executive.
Mr. Brin and Mr. Page brought in Eric Schmidt, who in turn hired Sheryl Sandberg, the chief of staff to Lawrence H. Summers when he was Treasury secretary, to build an advertising program. Filing for Google to go public a couple of years later, Mr. Brin and Mr. Page explained away the reversal of their anti-advertising stance by telling shareholders that ads made Google more useful because they provided what the founders called 'great commercial information.'
My senior year, news filtered into The Stanford Daily, where I worked, that Facebook — which some of us had heard about from friends at Harvard, where it had started — was coming to our campus. 'I know it sounds corny, but I'd love to improve people's lives, especially socially,' Mark Zuckerberg, Facebook's co-founder, told The Daily's reporter. He added, 'In the future we may sell ads to get the money back, but since providing the service is so cheap, we may choose not to do that for a while.'
Mr. Zuckerberg went on to quit Harvard and move to Palo Alto, Calif. I went on to The Wall Street Journal. Covering Facebook in 2007, I got a scoop that Facebook — which had in fact introduced ads — would begin using data from individual users and their 'friends' on the site to sharpen how ads were targeted to them. Like Google before it, Facebook positioned this as being good for users. Mr. Zuckerberg even brought Ms. Sandberg over from Google to help. When an economic downturn, followed by an I.P.O., later put pressure on Facebook, it followed Google's playbook: doubling down on advertising. In this case, it did so by collecting and monetizing even more personal information about its users.

Related Articles

House bipartisan bill directs NSA to create 'AI security playbook' amid Chinese tech race

Yahoo

FIRST ON FOX – Rep. Darin LaHood, R-Ill., is introducing a new bill Thursday directing the National Security Agency (NSA) to develop an "AI security playbook" to stay ahead of threats from China and other foreign adversaries.

The bill, dubbed the "Advanced AI Security Readiness Act," directs the NSA's Artificial Intelligence Security Center to develop an "AI Security Playbook to address vulnerabilities, threat detection, cyber and physical security strategies, and contingency plans for highly sensitive AI systems." It is co-sponsored by House Select Committee on China Chairman Rep. John Moolenaar, R-Mich., Ranking Member Rep. Raja Krishnamoorthi, D-Ill., and Rep. Josh Gottheimer, D-N.J.

LaHood, who sits on the House Intelligence Committee and the House Select Committee on China, told Fox News Digital that the legislative proposal, if passed, would be the first time Congress codifies a "multi-prong approach to ensure that the U.S. remains ahead in the advanced technology race against the CCP."

The new bill follows another bipartisan legislative proposal, the "Chip Security Act," which he introduced in late May. That proposal aims to improve export control mechanisms – including for chips and high-capacity chip manufacturing – protect covered AI technologies with a focus on cybersecurity, and limit outbound investment to firms directly tied to the Chinese Communist Party or China's People's Liberation Army.

"We start with the premise that China has a plan to replace the United States. And I don't say that to scare people or my constituents, but they have a plan to replace the United States, and they're working on it every single day. And that entails stealing data and infiltrating our systems," LaHood told Fox News Digital. "AI is the next frontier on that. We lead the world in technology. We lead the world when it comes to AI. But what this bill will do will again make sure that things are done the right way and the correct way, and that we're protecting our assets and promoting the current technology that we have in our country."

LaHood pointed to evidence uncovered by the committee that he said shows the CCP's DeepSeek used illegal distillation techniques to steal insights from U.S. AI models to accelerate its own technology development. He also pointed to how China allegedly smuggled AI chips through Singapore intermediaries to circumvent U.S. export controls on the technology.

"As we look at, 'How do we win the strategic competition?' I think most experts would say we're ahead in AI right now against China, but not by much. It is a short lead," LaHood told Fox News Digital. He said he is confident his legislative proposals will put the U.S. "in the best position to protect our assets here and make sure that we're not shipping things that shouldn't go to AI that allow them to win the AI race in China."

"Whoever wins this race in the future, it's going to be critical to future warfare capabilities, to, obviously, cybersecurity," LaHood continued. "And then, whoever wins the AI competition is going to yield really unwavering economic influence in the future. And so we're aggressive in this bill in terms of targeting those areas where we need to protect our AI and our companies here in the United States, both on the commercial side and on the government side, to put us in the best position possible."
The "Advanced AI Security Readiness Act" calls on the NSA to develop a playbook that identifies vulnerabilities in AI data centers and developers producing sensitive AI technologies with an emphasis on unique "threat vectors" that do not typically arise, or are less severe, in the context of conventional information technology systems." The bill says the NSA must develop "core insights" in how advanced AI systems are being trained to identify potential interferences and must develop strategies to "detect, prevent and respond to cyber threats by threat actors targeting covered AI technologies." Amazon Announces $20B Investment In Rural Pennsylvania For Ai Data Centers The bill calls on the NSA to "identify levels of security, if any, that would require substantial involvement" by the U.S. government "in the development or oversight of highly advanced AI systems." It cites a "hypothetical initiative to build covered AI technology systems in a highly secure government environment" with certain protocols in place, such as personnel vetting and security clearance processes, to mitigate "insider threats." Though not directly related, the new bill is being introduced a week after FBI Director Kash Patel sounded the alarm on how the CCP continues to deploy operatives and researchers to "infiltrate" U.S. institutions. Patel laid out the risk in announcing that two Chinese nationals were charged with smuggling a potential bioweapon into the U.S. LaHood said that case further highlights "the level of penetration and sophistication that the CCP will engage in," but he added that his bill focuses on putting a "protective layer" on U.S. AI tech and "restricting outbound investment to China." He pointed to how the CCP also has bought up farmland around strategic U.S. national security locations, particularly in Montana, North Dakota and South Dakota. "If everything was an even playing field, and we were all abiding by the same rules and standards and ethical guidelines, I have no doubt the U.S. would win [the AI race], but China has a tendency and a history of playing by a different set of rules and standards," LaHood said. "They cheat, they steal, they take our intellectual property. Not just my opinion, that's been factually laid out, you know, in many different instances. And that's the reason why we need to have a bill like this." The bill comes as the Trump administration has been pushing to bolster artificial intelligence infrastructure in the United States, and major tech companies, including Amazon, Nvidia, Meta, OpenAI, Oracle and others, have made major investments in constructing AI-focused data centers and enhancing U.S. cloud computing. Last week, Amazon announced a $20 billion investment in constructing AI data centers in rural Pennsylvania. It followed a similar $10 billion investment in North Carolina. In late May, the NSA's Artificial Intelligence Security Center released "joint guidance" on the "risks and best practices in AI data security." The recommendations include implementing methods to secure the data used in AI-based systems, "such as employing digital signatures to authenticate trusted revisions, tracking data provenance, and leveraging trusted infrastructure." 
The center said its guidance is "critically relevant for organizations – especially system owners and administrators within the Department of Defense, National Security Systems, and the Defense Industrial Base – that already use AI systems in their day-to-day operations and those that are seeking to integrate AI into their infrastructure."

Ransomware Groups Use AI to Level Up

Yahoo

A new wave of AI-powered threats is on the loose. A recent Cisco Talos report found that ransomware gangs are leveraging AI hype, luring enterprises with fake AI business-to-business software while pressuring victims with psychological manipulation. Ransomware groups like CyberLock and Lucky_Gh0$t, along with a newly discovered malware strain dubbed 'Numero,' are all impersonating legitimate AI software, such as Novaleads, the multinational lead monetization platform.

Kiran Chinnagangannagari, co-founder and chief product and technology officer at global cybersecurity firm Securin, told CIO Upside that this new tactic is not niche. 'It is part of a growing trend where cybercriminals often use malicious social media ads or SEO poisoning to push these fake tools, targeting businesses eager to adopt AI but unaware of the risks,' Chinnagangannagari said. Mandiant, the cybersecurity arm of Google, recently reported a similar campaign running malicious ads on Facebook and LinkedIn, redirecting users to fake AI video-generator tools imitating Luma AI, Canva Dream Lab and Kling AI.

Ransomware gangs are also using psychological manipulation to increase the success rate of their attacks. For example, CyberLock is leaving victims notes asking them to pay $50,000, an unusually low ransom demand considering the industry's average. The notes say that the ransom payment will be used for 'humanitarian aid' in various regions, including Palestine, Ukraine, Africa and Asia. The $50,000 demand pressures smaller businesses into paying quickly while avoiding the scrutiny that comes with multimillion-dollar ransoms, Chinnagangannagari said.

Organizations should never pay the ransom, as payment offers no guarantee of results, Chinnagangannagari said. 'Companies should focus on robust backups and incident response plans to recover without negotiating,' he added. Security leaders also need to prepare their teams for psychological manipulation, not just technical defenses, said Mike Logan, CEO of C2 Data Technology. 'These ransomware attacks are not just technical threats but psychological weapons.'

In certain industries, these smaller-scale ransomware attacks can have more serious impacts. 'There are edge cases, healthcare for example, where human lives are at stake,' Logan said. However, even in those cases, the goal should be to have preventive controls in place so that paying never becomes the only option, he said. Companies should report the incident, work with authorities, and treat the breach as a catalyst to modernize their security posture, he said.

The new wave of AI business-targeting ransomware demands a paradigm shift in defense strategies. AI tools are now considered high-risk assets by cybersecurity experts, Chinnagangannagari said. Training staff on how to spot fake, malicious and suspicious online activity, especially when downloading unverified AI apps, is essential.

This post first appeared on The Daily Upside.

How to Future-Proof Your Workforce for AI

Forbes

Is AI poised to automate all human work — or will it bring us more meaningful, higher-paying jobs? It depends on whom you ask.

Tech CEOs have warned of a coming tidal wave. Anthropic CEO Dario Amodei predicted AI could eliminate half of all entry-level white-collar jobs within five years. OpenAI's Sam Altman expects artificial general intelligence to surpass humans at all economically valuable tasks. Some economists see a slower story. MIT's Daron Acemoglu estimates that AI will increase U.S. productivity by just 0.05% annually over the next decade — based on the idea that most companies won't change how they operate.

Predicting exactly how AI will reshape jobs might be impossible. But if you're leading a business, your priority isn't knowing precisely who's correct — it's preparing your organization and workforce for an AI-driven future.

But why is forecasting AI's impact on jobs so difficult, even for experts? Understanding the forces that make it hard to predict is also what helps leaders understand how to prepare. I think of this as AI's "Three-Body Problem" — a nod to the famously unpredictable physics phenomenon, where adding a third interacting body makes future trajectories chaotic and impossible to forecast. Similarly, three interacting factors — AI capabilities, adoption speed, and job creation potential — make AI job predictions uniquely challenging. Let's briefly explore each one.

1. What will AI be able to do? Experts fundamentally disagree about how powerful — and how fast — AI models will become. Some predict artificial superintelligence could emerge as soon as 2027, driven by self-improving systems that compound capabilities exponentially. Others believe we're nearing the limits of current scaling laws, and that even human-level intelligence may still be 15 years away. Additionally, AI's abilities are uneven — superhuman in some tasks yet subhuman in others. For example, Ethan Mollick recently demonstrated that GPT-4o could ace an MBA-level strategy case with remarkable nuance — yet failed at solving a basic riddle requiring common sense logic. It's a reminder that raw capability doesn't guarantee reliability across domains.

2. How will we choose to use AI? Just because AI can automate a job — or parts of one — doesn't mean it will. The pace at which we integrate AI into the workplace will be wildly uneven. Some sectors will move as fast as the technology itself. Others will be slowed by regulation, union rules, institutional inertia, or cultural preferences. In industries with little financial pressure to cut costs, there may simply be no urgency to change. Something I think about a lot: We still have gas station attendants pumping gas in Oregon and New Jersey — not because it's technologically necessary, but because we've chosen to protect those jobs. Predicting where and how fast adoption will happen is tough, because it's ultimately a question of human behavior.

3. What new jobs will AI create? AI will create new jobs — but how many, and what kind, remains unknown, especially if it quickly automates even the roles it helps create. Historically, new technology has been a net job creator. The internet gave rise to SEO specialists and app developers, and the smartphone boom created whole industries around mobile design and delivery. But AI is different: It could quickly take over the roles it helps create, making it harder to predict the net effect on employment.

Taken together, the Three-Body Problem makes AI job forecasting impossible to pin down.
AI capabilities, adoption speed, and job creation are all unpredictable — yet understanding these complexities is only half the battle. When nobody can confidently predict AI's trajectory, how do you prepare your business and your employees for what's ahead? Precisely because AI's trajectory is uncertain, organizations must build the muscle to adapt quickly. Readiness is the best hedge against uncertainty. As Bijal Shah, CEO of Guild, put it, 'Employers who win in this new world of work driven by AI will be those who invest in making their workforce more adaptable and resilient — not just for today's skills, but for tomorrow's unknowns.'

So, where should leaders begin? The following strategies are no-regrets investments — moves that deliver value no matter how fast AI change arrives. They offer ways not just to survive the shifts ahead, but to build more agile organizations and more meaningful, purposeful work for employees.

Forget titles. Jobs are a collection of skills and abilities. Titles make jobs sound fixed. But they're really collections of skills and abilities — Lego pieces — that can be reassembled creatively. Organizations like Censia are already shepherding adaptations — such as Dunkin' cashiers becoming pharmacy techs. Thinking in skills and desired traits frees businesses and workers from obsessing over whether a job stays or goes. 'The winners in this AI era will be those who stop managing roles and start orchestrating skills,' said Joanna Riley, CEO of Censia.

Make work suck less. AI can automate or reduce tedious job aspects, helping less experienced employees quickly take on more complex, meaningful, and better-paid work. 'The new versions of these jobs are just better — more interesting, creative, and human-oriented,' said Brandon Sammut, chief people officer at AI company Zapier. While there's a risk that these more meaningful and productive jobs will mean fewer jobs overall, the immediate opportunity is clear: Deploy AI tools that make jobs more enjoyable (less rote, routine work) and purposeful.

Help people do harder stuff. Automating simpler tasks means work becomes more rewarding and challenging. Companies must help employees elevate their skill levels and continuously push into domains where humans have a unique advantage. 'We're headed toward a world where only high-skill, high-complexity work survives,' David Blake, CEO of Degreed, noted. 'The real challenge is helping people make that leap.' Durable human skills — like creativity, problem-solving, empathy, and mathematical reasoning — will remain essential.

Support employees to use AI in daily work. 'It is management malpractice not to create an environment where employees are using AI in concrete ways in their daily work,' Sammut said. Knowing how to use AI is fast becoming table stakes to stay employable. The people most at risk aren't being replaced by AI — they're being replaced by other humans who know how to use it. Helping employees integrate AI into their daily workflow doesn't just boost productivity — it also strengthens your employer value proposition. People are drawn to workplaces where they remain relevant, are continually challenged, and engage in meaningful, purposeful work.

Waiting for certainty isn't an option. Leaders must act now, building strategies resilient to multiple outcomes. If you want to future-proof your business, start by future-proofing your people.
Allison Salisbury is CEO of Humanist Venture Studio, which is supported by Stand Together, an organization that partners with changemakers who are tackling the root causes of America's biggest problems.
