
Macroeconomic backdrop won't give investors clarity, says New York Life Investments' Lauren Goodwin
iCapital's Anastasia Amoroso and New York Life Investments' Lauren Goodwin join 'Closing Bell' to discuss the markets, Big Tech earnings and the macroeconomic data ahead.

Related Articles
Yahoo
5 hours ago
What Happens When People Don't Understand How AI Works
On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.

Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about GPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.'

These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror.
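For readers who want to see that 'statistically informed guessing' in miniature, here is a purely illustrative sketch. The tiny vocabulary, the hard-coded probabilities, and the next_token function are all invented for this example; a real large language model derives its probabilities from billions of learned parameters over a vocabulary of many thousands of tokens, but the basic move is the same weighted guess about what usually comes next.

```python
import random

# Toy stand-in for an LLM's next-token step (all values invented for illustration).
# A real model computes these probabilities from learned parameters; here they are
# hard-coded to show the mechanism, not the scale.
FAKE_DISTRIBUTION = {
    "cat": {"sat": 0.6, "slept": 0.3, "sang": 0.1},
    "sat": {"on": 0.8, "down": 0.15, "up": 0.05},
}

def next_token(context: str) -> str:
    """Guess the next word based only on the last word of the context."""
    last_word = context.split()[-1]
    candidates = FAKE_DISTRIBUTION.get(last_word, {"the": 1.0})
    words = list(candidates)
    weights = list(candidates.values())
    # Weighted random choice: no meaning, no intent, just statistics.
    return random.choices(words, weights=weights, k=1)[0]

print(next_token("the cat"))  # most often prints "sat"
```

Scaled up enormously and trained on most of the internet, this one-guess-at-a-time process produces text fluent enough that it is easy to imagine a mind behind it, which is exactly the mistake the passage above describes.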
They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.

Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'ChatGPT-induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.'

Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.'

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations.
Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.'

Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis.

Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared the technology's worst consequences.


New York Times
5 hours ago
Hawley Breaks With Republicans to Oppose a Major Crypto Bill
While the clash between Elon Musk and President Trump captivated Washington on Thursday, another drama was playing out behind closed doors over a bill to regulate the $250 billion market for stablecoins, which could transform America's relationship with the dollar, upend the credit card industry, and benefit both Musk and Trump.

The bill, the GENIUS Act, is poised to pass the Senate within days. But a prominent Republican, Senator Josh Hawley of Missouri, said that he would vote against the bill in its current form, warning that it would hand too much control of America's financial system to tech giants. 'It's a huge giveaway to Big Tech,' Hawley said in an interview.

Mr. Hawley, who previously voted against the bill for procedural purposes, is concerned that the legislation would allow tech giants to create digital currencies that compete with the dollar. And he fears that such companies would then be motivated to collect even more data on users' finances. 'It allows these tech companies to issue stablecoins without any kind of controls,' he said. 'I don't see why we would do that.'

Similar worries scuttled an effort by Meta to get into stablecoins. In 2019, Jay Powell of the Fed, among others, raised 'serious concerns' about Meta's cryptocurrency initiative, called Libra and then Diem. Meta abandoned the project in 2022.

The GENIUS Act has exposed divisions in both parties. Democrats like Senator Elizabeth Warren of Massachusetts oppose the bill, warning it would make it easier for Trump, whose family announced its own USD1 stablecoin in March, to engage in corrupt practices.


Fox News
5 hours ago
Federal AI power grab could end state protections for kids and workers
Just as AI begins to upend American society, Congress is considering a move that would sideline states from enforcing commonsense safeguards.

Tucked into the recently passed House reconciliation package is Section 43201, a provision that would pre-empt nearly all state and local laws governing "artificial intelligence models," "artificial intelligence systems," and "automated decision systems" for the next 10 years. Last night, the Senate released its own version of the moratorium, which would restrict states from receiving federal broadband infrastructure funding if they don't fall in line.

Supporters argue that a moratorium is needed to avoid a patchwork of state rules that could jeopardize U.S. AI competitiveness. But this sweeping approach threatens to override legitimate state efforts to curb Big Tech's worst abuses—with no federal safeguards to replace them. It also risks undermining the constitutional role of state legislatures to protect the interests and rights of American children and working families amid AI's far-reaching social and economic disruptions.

In the absence of congressional action, states have been the first line of defense against Big Tech. Texas, Florida, Utah, and other states have led the way to protect children online, safeguard data privacy, and rein in platform censorship. Section 43201 puts many of those laws—even those not directly related to AI—at risk.

The provision defines "automated decision systems" broadly, potentially capturing core functions of social media platforms, such as TikTok's For You feed or Instagram's recommendation engine. At least 12 states have enacted laws requiring parental consent or age verification for minors accessing these platforms. Because those laws apply to platforms whose core features are exactly such recommendation systems, they could easily be construed as regulating "automated decision systems"—and thus be swept up in the moratorium. Further, Section 43201 might also block provisions of existing state privacy laws that restrict the use of algorithms—including AI—to predict consumer behavior, preferences, or characteristics.

Even setting aside concerns with the moratorium's expansive scope, it suffers from a more fundamental flaw. The moratorium threatens to short-circuit American federalism by undermining state laws that ensure AI lives up to the promise outlined by Vice President J.D. Vance. Speaking at the Paris AI Summit, he warned against viewing "AI as a purely disruptive technology that will inevitably automate away our labor force." Instead, Vance called for "policies that ensure that AI… make[s] our workers more productive" and rewards them with "higher wages, better benefits, and safer and more prosperous communities."

That vision is nearly impossible without state-level action. Legislators, governors, and attorneys general from Nashville to Salt Lake City are already advancing creative, democratically accountable solutions. Tennessee's novel ELVIS Act protects music artists from nonconsensual AI-generated voice and likeness cloning. Utah's AI consumer protection law requires that generative AI model deployers notify consumers when they are interacting with an AI. Other states, including Arkansas and Montana, are building legal frameworks for digital property rights with respect to AI models, algorithms, data, and model outputs. All of this is now at risk.

As laboratories of democracy, states are essential to navigating the inevitable and innumerable trade-offs entailed by the diffusion of emerging technologies. Federalism enables continuous experimentation and competition between states—exposing the best and worst approaches to regulation in highly dynamic environments. That's critical when confronting AI's vast and constantly evolving sphere of impact on children and employment—to say nothing of the technology's wider socio-economic effects.

Sixty leading advocacy and research organizations have warned that AI chatbots pose a significant threat to kids. They cite harrowing stories of teens who have been driven to suicide, addiction, sexual perversion, and self-harm at the hands of Big AI. Even industry leaders are sounding alarms: Anthropic CEO Dario Amodei estimates that AI could push unemployment as high as 20% over the next five years.

Innovation inherently brings disruption—but disruption without guardrails can harm the very communities AI is purportedly meant to uplift. That's why 40 state attorneys general, Democrats and Republicans alike, signed a letter opposing Section 43201, warning that it would override "carefully tailored laws targeting specific harms related to the use of AI."

To be sure, not all laws are drafted equally well. States like California and Colorado are imposing European-style AI regulations particularly detrimental to "Little Tech" and open-source model developers. But Congress shouldn't throw out federalism with the "doomer" bathwater. Rather than a blanket pre-emption, it should consider narrow, targeted limits aimed at high-risk bills—those modeled on California and Colorado's approach—that would foist doomer AI standards on the rest of the nation.

Absent a comprehensive federal AI framework, states must retain freedom to act—specifically, to ensure that AI bolsters American innovation and competitiveness in pursuit of a thriving middle class. America's AI future has great potential. But our laboratories of democracy are key to securing it.