Latest news with #hype


Times
5 days ago
- Entertainment
The Prancing Stag restaurant review — the jewel in Jordanhill's crown
Hearing about a 'hidden gem' restaurant is a bit like reading of an 'unmissable show' in the Edinburgh Festival or seeing influencers rave about a 'must buy' skin serum. When these terms get casually bandied around, can they still tell us anything helpful? Which of these gems deserves the effort to be dug out? So after the dozenth 'hidden gem' description I read about the Prancing Stag — a restaurant in Glasgow run by Rory Cox and family — it feels high time to see if it's worth the hype. The hidden part is confirmed quickly — I walk into the wrong restaurant. I know I am aiming for somewhere off the Crow Road, beside Jordanhill railway station, but confused by an overhead sign for the Prancing Stag, prance straight into Little SoHo pizzeria instead. Note to self: read the bit saying 'round the back' next time. Apologising to the Little SoHo crew, I find the real restaurant around a corner, under a life-size stag statue, frozen mid-leap, forever majestic and totemic.
Yahoo
6 days ago
- Business
Meme Stocks Are Coming Back in 2025 — 3 Red Flags To Watch Out For
Meme stocks are making a comeback this summer. In 2021, stocks like GameStop (GME) and AMC (AMC) that no one cared about suddenly skyrocketed to all-time highs. This time, Opendoor (OPEN), Kohl's (KSS), GoPro (GPRO) and Krispy Kreme (DNUT) have all jumped in price not because these companies are doing better, but due to a wave of online hype. While some people will make quick profits, many others could lose money just as fast. Before you jump into meme stocks, here are some red flags you should look out for.

Big Price Jumps Without Fundamentals

When a stock skyrockets by hundreds of percent in a few days, it's easy to think you're missing out on something big. But in many cases, the surge is driven by hype, not company fundamentals. For example, in late July, Yahoo Finance reported that Opendoor stock rose by more than 300% over the previous month. As of Aug. 12, 2025, GoPro has soared by more than 56% over the past month. When you see this pattern — stocks rising dramatically without any good news — it's often a sign of speculative frenzy. When the excitement fades, those gains can disappear quickly. So don't be the one jumping in late, because you'll likely lose money.

Heavy Short Interest

When a stock is heavily shorted, investors are betting it will drop, which can lead to increased volatility. If the stock's price goes up instead, those investors will rush to buy it back to limit losses, which can drive the stock price even higher. This sudden surge is known as a short squeeze. In late July, Forbes reported that nearly half (49%) of Kohl's outstanding shares were short positions, while Opendoor had approximately 21% of its shares sold short. These jumps can be tempting, but the real danger is that once those shorts are covered and buyers lose interest, prices can fall just as explosively as they rose.

AI and Social Media Hype

In 2025, meme stocks aren't just riding the waves from Reddit forums. Artificial intelligence (AI) is turbocharging them. 'Meme stocks in 2025 rely on AI-driven sentiment analysis and real-time short-interest data, replacing viral tweets and impulsive buying,' according to AInvest. AI-powered stock-tracking tools can scan social media, forums and news headlines for spikes in specific stock mentions or unusual trading activity. When a stock suddenly trends, these tools can alert thousands of traders. This can create a loop where the same stock names keep getting pushed online, potentially becoming meme stocks. That's one reason meme stocks can see wild gains in a matter of days and then plummet quickly once the loop moves on to the next big thing.
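The three red flags above are easy to check mechanically. Below is a minimal Python sketch, not from the article, that flags a ticker when its one-month price change, its short interest as a share of outstanding stock, or a sudden jump in social-media mentions crosses a hand-picked threshold. The TickerSnapshot type, the red_flags helper, the thresholds, and the example numbers are all hypothetical, chosen only to echo the figures the article cites (a roughly 300% monthly move, about 49% and 21% short interest).

```python
# Hypothetical illustration of the article's three meme-stock red flags.
# All thresholds and example values are made up; real screening would pull
# prices, short interest, and mention counts from market-data sources.

from dataclasses import dataclass

@dataclass
class TickerSnapshot:
    symbol: str
    one_month_return_pct: float      # price change over the past month, in percent
    short_interest_pct: float        # shares sold short as a % of shares outstanding
    mentions_today: int              # social-media mentions in the last 24 hours
    mentions_daily_avg: float        # trailing average of daily mentions

def red_flags(t: TickerSnapshot,
              jump_threshold: float = 100.0,     # "hundreds of percent" in a short window
              short_threshold: float = 20.0,     # Opendoor ~21%, Kohl's ~49% in the article
              mention_spike_ratio: float = 5.0   # mentions far above the trailing average
              ) -> list[str]:
    """Return the hype warning signs that apply to this ticker."""
    flags = []
    if t.one_month_return_pct >= jump_threshold:
        flags.append("big price jump without obvious fundamental news")
    if t.short_interest_pct >= short_threshold:
        flags.append("heavy short interest (short-squeeze fuel)")
    if t.mentions_daily_avg > 0 and t.mentions_today / t.mentions_daily_avg >= mention_spike_ratio:
        flags.append("sudden social-media mention spike")
    return flags

# Example with numbers loosely echoing the article's figures.
open_snapshot = TickerSnapshot("OPEN", one_month_return_pct=300.0,
                               short_interest_pct=21.0,
                               mentions_today=12_000, mentions_daily_avg=900.0)
print(open_snapshot.symbol, "->", red_flags(open_snapshot))
```

The point of the sketch is the article's caution in code form: by the time all three signals fire, the speculative move is usually well under way, which is exactly when latecomers tend to get hurt.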


Bloomberg
23-07-2025
- Business
AI 'Agents' Aren't Matching Up to the Buzzwords
The biggest issue with the term artificial intelligence 'agent' is that everyone seems to have a different definition for what it means. Most often, it's used to describe an AI system that can act autonomously and work with outside applications to complete increasingly complicated tasks. The buzziest example from Asia has been Manus, which went mega-viral earlier this year. But it's also morphed into a marketing buzzword, slapped onto everything from products that surf the web on their own to bots that will eventually achieve human decision-making skills — and could be coming for your job. In China, where reports of a new agentic tool seem to emerge every week, some firms have been accused of labeling their products AI agents just to capitalize on the hype.


Fast Company
06-06-2025
5 dark facts to remember in the face of AI hype
Emily Bender is a Professor of Linguistics at the University of Washington, where she is also Faculty Director of the Computational Linguistics Master of Science program, affiliate faculty in the School of Computer Science and Engineering, and affiliate faculty in the Information School. Alex Hanna is Director of Research at the Distributed AI Research Institute and a lecturer in the School of Information at the University of California, Berkeley. She has been featured in articles for the Washington Post, Financial Times, The Atlantic, and Time.

What's the big idea? The AI Con is an exploration of the hype around artificial intelligence, whose interests it serves, and the harm being done under this umbrella. Society has options when it comes to pushing back against AI hype, so there is still hope that we can collectively resist and prevent tech companies from mortgaging humanity's future. Below, co-authors Emily Bender and Alex Hanna share five key insights from their new book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.

1. The tech that's driving the current wave of AI hype is built on a parlor trick

Chatbots like ChatGPT are impressive technology, but maybe not in the way you think. They cannot perform the range of functions they purportedly fulfill; rather, they are designed to impress us. The key to their parlor trick lies in how people use language. You might think interpreting language is a simple matter of decoding what the words say, but the process is both far more complex and far more social. We interpret language by relying on everything we know (or guess) about the person who said the words, and whatever common ground we share with them. Then we make inferences about what they must have been trying to convey. We do this instinctively and reflexively. So, when we encounter synthetic text of the kind that comes out of ChatGPT and its ilk, we interpret it by imagining a mind behind the text, even though there is no mind there. In other words, the linguistic and social skills we wrap around AI outputs are what make it so easy for the purveyors of chatbots to fool us into perceiving chatbots as reasoning entities.

2. AI is not going to take your job, but it will make your job a lot worse

Much of AI technology serves to remove humans from the equation at work. The story of the Writers Guild of America strike is instructive here. In 2023, the Writers Guild of America East and West (or the WGA), the labor union representing Hollywood writers, went on strike for several reasons, including a demand to raise the pay rate that writers receive from streaming services. They also wanted to ensure that they wouldn't be reduced to babysitters for chatbots tasked to write scripts based on harebrained ideas from movie and television producers. John Lopez, a member of the WGA's AI working group, noted that writers could be paid the rewrite rate for dealing with AI-generated content, which is much less than the pay rate for an original script.

We've seen the threat of image and text generators drastically reduce the number of job opportunities for graphic designers, video game artists, and journalists. This is not because these tools can adequately perform the tasks of these professionals, but because they perform well enough for careers to be cut short and for workers to be rehired at a fraction of what they had been paid before, just so that they can fix the sloppy outputs of AI.

Furthermore, systems that get called 'AI' are often a thin veneer that hides the tried-and-true corporate strategy of outsourcing labor to people in the Majority World, also called the Global South. Many of these workers moderate online content, test chatbots for toxic outputs, and even remotely drive vehicles that are advertised as being fully automated. Luckily, workers have been able to push back through concerted labor action, industrial sabotage (especially through creative tools for artists, like Nightshade and Glaze, which prevent their work from being used for training image generation models), and political education.

3. The purpose of the AI con is to disconnect people from social services

Because we use language in just about every sphere of activity, and because synthetic text-extruding machines can be trained to mimic it, it can seem like we are about to have technology that can provide medical diagnoses, personalized tutoring, wise decision making in the allocation of government services, legal representation, and more—all for just the cost of electricity (plus whatever the companies making the chatbots want to charge). But in all these cases, it's not the words that matter, but the actual thought that goes into them and the relationships they help us build and maintain. AI systems are only good for those who want to redirect funding away from social services and justify austerity measures. Meanwhile, those in power will be sure to get services from actual people, while foisting the shoddy facsimiles off on everyone else.

The head of Health AI at Google, Greg Corrado, said he wouldn't want Google's Med-PaLM system to be part of his family's health care journey. That didn't stop him from bragging about how it supposedly passed a medical licensing exam. It didn't. But more to the point, designing systems to pass multiple-choice exams about medical situations is not an effective way to build useful medical technology. In these domains, AI hype takes the form of specious claims of technological solutions to social problems, based, at best, on spurious and unfounded evaluations of the systems being sold.

4. AI is not going to kill us all, but climate change might

There was a time in Silicon Valley and Washington, D.C. when an idiosyncratic, yet serious, question was posed to people working on technology or tech policy: 'What is your p(doom)?' p(doom) refers to the probability of doom, or the likelihood that AI would somehow kill all of humanity. This doomerism is predicated on the development of artificial general intelligence (or AGI). AGI is poorly defined, but the basic idea is a system that can do a variety of tasks as well as or better than humans. Unfortunately, doomerism has serious purchase with some technologists and policymakers, and is predicated on a body of unseemly ideologies, including effective altruism, longtermism, and rationalism. These ideologies take the moral philosophy of utilitarianism to the extreme, suggesting that we need to discount harm in the present to save the billions of trillions of humans who will live in some undefined future. These ideologies are eugenicist in their origins and implications.

Meanwhile, we are likely to fail to meet the Paris Agreement's goal of limiting the increase in global average temperature to well below 2 degrees Celsius above pre-industrial levels, and AI is making this problem worse. The data centers that host these tools are generating vast amounts of excess carbon, the semiconductors used for their parts are leaching forever chemicals into the ground, and backup generators are projected to cause more respiratory illnesses in the poorest parts of the U.S. and elsewhere. Not only are robots not going to take over the world, but their production is going to make the climate crisis much worse.

5. None of this is inevitable

The people selling AI systems and the hype around them would like us to voluntarily give up our agency in these matters. They tell us that AI, or even AGI, is inevitable, or at least that systems like ChatGPT are 'here to stay.' But none of this is inevitable. We do have agency, both collectively and individually. Collectively, we can push for regulations that prevent AI tech from being used on us and for labor contracts that keep us in control of our work. On an individual level, we can refuse to use AI systems. We can be critical consumers of automation, making sure we understand what's being automated, how it was evaluated, and why it's being automated. We can also be critical consumers of journalism about technology, looking for and supporting work that holds power to account. And finally, we can and should engage in ridicule as praxis, meaning having fun pointing out all the ways in which synthetic media-extruding machines are janky and tacky.