Latest news with #Anthropic


Hindustan Times
3 hours ago
- Hindustan Times
Google is the next Google, Dreame's impressive K10 Pro, and Jony Ive may be back
Is AI becoming self-aware? It is too early to pass a definitive verdict on that, just as it may be too early to say that AI is going rogue. That is indeed a question I pondered over this past week, as Anthropic and OpenAI's models made it very clear that they have a sense of self-preservation (that is usually the first step of machines taking over). As I pointed out, it must be contextualised that these recent incidents, while alarming at first glance, may not signify that AI has spontaneously developed malicious intent. These behaviours have been observed in carefully constructed test environments, often designed to elicit worst-case scenarios to understand potential failure points. Yet, these developments mustn't be taken lightly. Allow me to explain what happened at Anthropic's labs, and what the boffins at Palisade Research discovered about OpenAI's o3 model. A simple disclaimer: you don't need to worry; at least not yet.

Another question I asked is whether we are judging Anthropic, and for that matter any AI company detailing eccentricities with their AI models, too harshly. The fact that they have been transparent about AI's unexpected behaviours during testing must hold AI development in good stead as we embark on uncharted territory. These instances of unexpected AI behaviour highlight a core challenge in AI development: alignment, that is, ensuring AI goals remain aligned with human intentions. As AI models become more complex and capable, ensuring that is proving exponentially harder.

Last week on Wired Wisdom: Google's universal AI assistant vision, fighting spam and backing up photos

Google's Android XR glasses bring a lot to the table. Let me list some of it: Google Gemini integration (with more context, this will be better than Meta AI, all things considered), an in-lens display, app integration such as navigation guidance and, of course, an on-frame camera.
Specifically when taking a photo, the big difference between the Android XR glasses and Ray-Ban Meta is that the former gives you a view of the photo you've just clicked, using the in-lens display. And that is surprisingly high-fidelity. That said, anything text-based does require some squinting, and that may need work if messages on the fly are to work as envisioned. Though still in early stages of development, with time to go before the glasses roll out later in the year, they feel surprisingly refined even now. That should hold Android XR in good stead.

It is rare for a tech thing to behave smartly unless there is a Wi-Fi and companion app component attached to the proposition. The Dreame K10 Pro, a wet and dry vacuum, has a few smarts which hold it in good stead, and which give your home's floors a better chance of being cleaner. Power is one, as is versatility. The heft, perhaps not so much, but we'll get to that. The ₹19,999 price tag for a hand-held vacuum may seem a bit much, considering rivals including Midea and Bissell cost a little less. That said, the Dreame K10 Pro does a few things rather well. First off, the dust detection sensor is quite sensitive to even the slightest of changes, and that is a good problem to have if you insist on the best possible cleaning for your floors. The dust indicator that frames the display (it is a large and clear screen) will be more red than blue if the floor is dusty. This sensor reading also dictates whether the cleaner chooses the suction mode or turbo mode. Wet and dry cleaning is easy to get started with, since the modes dictate either/or. It does seem like a limitation, though, that there is no option to choose the water quantity for wet cleaning; it often ends up using a tad too much for typical flooring inside homes, and the result is that floors take a while to dry out completely.
The Dreame K10 Pro is incredibly powerful, and can suck in dust and visible dirt from corners even before the rollers get close. Speaking of which, the cleaning head, which includes the roller and the scrapers, is rather simple, unlike Dyson's head-cleaning mechanism, for instance. It is a 120,000 revolutions per minute (rpm) motor, and these sorts of specifics are par for the course for any vacuum system worth the money. It is not very loud, which is good news for indoor use, but you wouldn't want to get started with it early in the morning either. Controls are placed near the handle, which is surprisingly convenient. But at 3.8kg, the Dreame K10 Pro is certainly heavier than traditional vacuum cleaners. Your arms will begin to complain soon enough (dry vacuuming curtains is out of the question). That is, if the battery doesn't run out first. For a 16,800mAh capacity, it barely lasts 20 minutes in auto mode and around 12 minutes in turbo mode (the only manual way to stick to dry cleaning).

Jony Ive is back. Don't think only Apple is in the line of fire. But they are, as things stand, far behind in the AI race, and therefore likely to be most impacted if OpenAI and Ive's io deliver a fine amalgamation of hardware, that is, a product for consumers, with artificial intelligence as the foundation. OpenAI's $6.5 billion acquisition isn't without thought. And that must be worrying for most of big tech at this point in time. Ive is, after all, the man who designed iconic Apple products like the iPhone and MacBook. The collaboration, which had been developing for some time with OpenAI already holding a stake in io, aims to reimagine human-computer interaction, potentially moving beyond current device paradigms like smartphones and laptops. Could we instead move to something that's heavily reliant on voice interaction and environmental awareness?
But we've seen those AI pins (the Humane one, for instance) before, and they absolutely haven't worked. It's uncertain whether OpenAI and Ive's collaboration will achieve an "iPhone moment," but their combined expertise in AI and design presents a formidable challenge to established tech giants. Apple's response to this emerging competition will be crucial in maintaining its leadership in consumer technology. And for everyone else, in maintaining their positions in the spaces they dominate.


The Star
4 hours ago
- Business
- The Star
Netflix co-founder and former CEO Reed Hastings joins Anthropic board
(Reuters) - Anthropic has appointed Netflix co-founder Reed Hastings to its board of directors, the artificial intelligence startup said on Wednesday. Hastings, 64, is also a board member at Bloomberg. He has previously held director positions at Microsoft and Facebook, and served as the CEO of Netflix for 25 years. "Anthropic is very optimistic about the AI benefits for humanity, but is also very aware of the economic, social, and safety challenges. I'm joining Anthropic's board because I believe in their approach to AI development, and to help humanity progress," Hastings said. At Anthropic, Hastings will help the company navigate its rapid growth while maintaining its commitment to safety and minimizing AI's potential negative impacts on society. Hastings co-founded Netflix in 1997 and transformed it from a fledgling DVD mail-order service into a global streaming platform. He recently contributed $50 million to Bowdoin College to establish an AI and Humanity research initiative, focusing on AI's impact on work, relationships and education. In addition to Hastings, the board includes Daniela Amodei, her brother and Anthropic's CEO Dario Amodei, investor Yasmin Razavi, and Confluent CEO Jay Kreps. (Reporting by Kritika Lamba in Bengaluru; Editing by Alan Barona)
Business Times
4 hours ago
- Business Times
Generative AI models are skilled in the art of bulls**t
LIES are not the greatest enemy of the truth, according to the philosopher Harry Frankfurt. Bulls**t is worse. As he explained in his classic essay On Bulls**t (1986), a liar and a truth teller are playing the same game, just on opposite sides. Each responds to facts as they understand them, and either accepts or rejects the authority of truth. But a bulls**tter ignores these demands altogether. 'He does not reject the authority of truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bulls**t is a greater enemy of the truth than lies are.' Such a person wants to convince others, irrespective of the facts. Sadly, Frankfurt died in 2023, just a few months after ChatGPT was released. But reading his essay in the age of generative artificial intelligence (GenAI) provokes a queasy familiarity. In several respects, Frankfurt's essay neatly describes the output of AI-enabled large language models (LLMs). They are not concerned with truth, because they have no conception of it. They operate by statistical correlation, not empirical observation. 'Their greatest strength, but also their greatest danger, is their ability to sound authoritative on nearly any topic irrespective of factual accuracy. In other words, their superpower is their superhuman ability to bulls**t,' Carl Bergstrom and Jevin West have written. The two University of Washington professors run an online course – Modern-Day Oracles or Bulls**t Machines? – scrutinising these models. Others have renamed the machines' output as bots**t. One of the best-known and unsettling, yet sometimes interestingly creative, features of LLMs is their 'hallucination' of facts – or simply making stuff up. Some researchers argue this is an inherent feature of probabilistic models, not a bug that can be fixed. But AI companies are trying to solve this problem by improving the quality of the data, fine-tuning their models and building in verification and fact-checking systems. 
They would appear to have some way to go, though, considering a lawyer for Anthropic told a Californian court this month that their law firm had itself unintentionally submitted an incorrect citation hallucinated by the AI company's Claude. As Google's chatbot flags to users: 'Gemini can make mistakes, including about people, so double-check it.' That did not stop Google from this week rolling out an 'AI mode' to all its main services in the US. The ways in which these companies are trying to improve their models, including reinforcement learning from human feedback, themselves risk introducing bias, distortion and undeclared value judgments. As the Financial Times has shown, AI chatbots from OpenAI, Anthropic, Google, Meta, xAI and DeepSeek describe the qualities of their own companies' chief executives and those of rivals very differently. Elon Musk's Grok has also promoted memes about 'white genocide' in South Africa in response to wholly unrelated prompts. xAI said it had fixed the glitch, which it blamed on an 'unauthorised modification'. Such models create a new, even worse category of potential harm, or 'careless speech', according to Sandra Wachter, Brent Mittelstadt and Chris Russell, in a paper from the Oxford Internet Institute. In their view, careless speech can cause intangible, long-term and cumulative harm. It is like 'invisible bulls**t' that makes society dumber, Wachter told me. At least with a politician or salesperson, we can normally understand their motivation. But chatbots have no intentionality, and are optimised for plausibility and engagement, not truthfulness. They will invent facts for no purpose. They can pollute the knowledge base of humanity in unfathomable ways. The intriguing question is whether AI models could be designed for higher truthfulness. Will there be a market demand for them? 
Or should model developers be forced to abide by higher truth standards, such as those that apply to advertisers, lawyers and doctors? Wachter suggested that developing more truthful models would take time, money and resources that the current iterations are designed to save. 'It's like wanting a car to be a plane. You can push a car off a cliff, but it's not going to defy gravity,' she said. All that said, GenAI models can still be useful and valuable. Many lucrative business, and political, careers have been built on bulls**t. Appropriately used, GenAI can be deployed for myriad business use cases. But it is delusional, and dangerous, to mistake these models for truth machines. FINANCIAL TIMES

Miami Herald
5 hours ago
- Business
- Miami Herald
Netflix chairman Reed Hastings joins board of AI giant Anthropic
Netflix Chairman Reed Hastings is joining the board of San Francisco-based artificial intelligence company Anthropic. Anthropic, valued at $61.5 billion after its most recent funding round in March, is known for its AI chatbot model Claude. "Anthropic is very optimistic about the AI benefits for humanity, but is also very aware of the economic, social, and safety challenges," Hastings said. "I'm joining Anthropic's board because I believe in their approach to AI development, and to help humanity progress." Netflix is one of the world's most prolific producers of movies and TV shows, known for its content recommendation algorithm. Hollywood is grappling with the implications of generative artificial intelligence, which studios believe could save money and time, but which also comes with downsides. Labor groups fear job displacement, and there are also concerns about the use of copyrighted material when training AI models. Hastings was selected by Anthropic's Long Term Benefit Trust, which the company describes as "five financially disinterested members" who can select and remove a portion of the board. Hastings' leadership experience, philanthropic work and "commitment to addressing AI's societal challenges makes him uniquely qualified to guide Anthropic at this critical juncture in AI development," said Buddy Shah, chair of Anthropic's Long Term Benefit Trust, in a statement. Hastings will join the company's five-member board, which includes Anthropic Chief Executive Dario Amodei, President Daniela Amodei, investor Yasmin Razavi and Jay Kreps, CEO of Mountain View-based data streaming firm Confluent. Hastings served as CEO or co-CEO of Netflix for 25 years until 2023. He currently serves on the boards of other organizations including Bloomberg, the financial data and media company. He has donated money to charter school networks serving low-income U.S. communities and recently gave $50 million to Bowdoin College to establish the Hastings Initiative for AI and Humanity, which aims to help the school provide ethical frameworks for AI and examine AI's impact on work and education.
Copyright (C) 2025, Tribune Content Agency, LLC. Portions copyrighted by the respective providers.
Yahoo
5 hours ago
- Business
- Yahoo
AI Is Coming for White Collar Jobs, per Tech Expert
Artificial Intelligence (AI) has been a hot technology topic for some time. While people have been experimenting with its everyday uses, from recipes to travel itineraries, one CEO is sounding the alarm over how it will impact the workforce. Dario Amodei, CEO of Anthropic, an American artificial intelligence startup founded in 2021, doesn't have good news for anyone. He claimed to Axios that AI might eliminate "half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years." Amodei wants corporations and the government to stop "sugar-coating" the collateral damage that is just around the corner. "Most of them are unaware that this is about to happen," he said to the media outlet. "It sounds crazy, and people just don't believe it." The executive expects white-collar industries to be affected the most, especially when it comes to entry-level jobs. Everything from law to finance will likely be impacted by what Axios described as "the possible job apocalypse." While it doesn't bode well for the job market, Amodei believes that there will be some upsides to the use of AI. "Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don't have jobs," he predicted. As of April, the national unemployment rate stands at 4.2 percent, according to the Bureau of Labor Statistics. A July 2023 McKinsey report projects that by 2030, 30 percent of U.S. jobs will be automated, thanks to AI. The sectors likely to see a decline first include "office support, customer service, and food service employment." The report anticipates that the impact will be felt in "lower-wage jobs" first, with women "1.5 times more likely to need to move into new occupations than men."

AI Is Coming for White Collar Jobs, per Tech Expert first appeared on Men's Journal on May 28, 2025.