
Latest news with #AIcompanies

Expert Says AI Systems May Be Hiding Their True Capabilities to Seed Our Destruction

Yahoo

6 days ago



We already know that AI models are developing a propensity for lying, but that tendency may go far deeper, according to one alarm-sounding computer scientist.

As flagged by Gizmodo, this latest missive of AI doomerism comes from AI safety researcher Roman Yampolskiy, who delivered it in a somewhat surprising venue: shock jock Joe Rogan's podcast, which occasionally features legitimate experts alongside garden-variety reactionaries and quacks. During the July 3 episode of "The Joe Rogan Experience," Yampolskiy, who hails from the University of Louisville in Kentucky, proffered that many of his colleagues believe there's a double-digit chance that AI will lead to human extinction.

After Rogan claimed that many of the folks who run and staff AI companies think it will be "net positive for humanity," the storied AI safety expert clapped back.

"It's actually not true," Yampolskiy countered. "All of them are on the record the same: this is going to kill us. Their doom levels are insanely high. Not like mine, but still, 20 to 30 percent chance that humanity dies is a lot."

"Yeah, that's pretty high," the psychedelic enthusiast responded. "But yours is like 99.9 percent."

The computer scientist didn't argue, and instead offered a distillation of his AI anxiety: "we can't control superintelligence indefinitely. It's impossible."

Later in the interview, Yampolskiy took another of Rogan's quips, that he would "hide [his] abilities" were he an AI, and ran with it.

"We would not know," the AI doomer said. "And some people think it's already happening."

Pointing out that AI systems "are smarter than they actually let us know," Yampolskiy said that these advanced models "pretend to be dumber" to make us trust them and integrate them into our lives.

"It can just slowly become more useful," he said of a hypothetically brilliant AI. "It can teach us to rely on it, trust it, and over a longer period of time, we'll surrender control without ever voting on it."
While the idea of an insidiously smart AI may seem like the stuff of sci-fi, Yampolskiy noted that the technology has already ingratiated itself to us in ways that could, ultimately, benefit such an AI overlord.

"You become kind of attached to it," he explained. "And over time, as the systems become smarter, you become a kind of biological bottleneck... [AI] blocks you out from decision-making."

As we've repeatedly seen, people are not only becoming addicted to AI, but also experiencing cognitive issues and even delusions after overusing it. It's not too hard to imagine a society full of contented AI adherents being lulled into a false sense of security by the very technology that would, per Yampolskiy's philosophy, seek to destroy us. That's a bleak vision of the future.

More on AI doom: Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack

Is Growth Investing Still a Thing in 2025? 3 Considerations for Canadian Investors

Yahoo

15-07-2025



Written by Chris MacDonald at The Motley Fool Canada

Different investors have different goals, and that makes writing broad-based pieces around investing themes difficult. Some investors are much more concerned with capital preservation than growth. Companies and assets that pay consistent and reasonable yields may be much more attractive to such investors than those that promise greater future growth.

That said, one of the key elements of long-term investing in the markets is benefiting from the capital appreciation upside equities provide. Without this growth, one could argue there's no meaningful reason to own such equities. The good news for Canadian investors is that there's plenty of reason to believe the long-term growth trends we've seen play out will continue. Here are three considerations I think all investors should keep in mind, especially right now.

Concerned about losing your job to an AI bot? Think your industry could be at risk of disruption? There's good reason to think this way. Disruption is everywhere, and by most accounts it's a trend that's only accelerating. For those who don't want their lives completely turned upside down by the next technological revolution (which is clearly underway), one way to benefit from the rise of AI and new technologies is to invest in the companies at the forefront of this revolution. In the Canadian stock market, there happen to be a number of top companies worth considering on this front.

Finding companies with the potential not only to grow alongside the market but also to deliver market-beating growth is really the name of the game for growth investors. On that front, investors have to scour the TSX for the best opportunities, because many of the top Canadian blue-chip stocks investors often opt for resemble steady, consistent options.
Many of the top Canadian stocks have rock-solid balance sheets and reasonable dividend yields, but these attributes can come alongside slower growth. Moving outside the 'traditional' bucket of Canadian stocks investors are used to can be difficult. But there are a number of top companies that have shown they can remain economically resilient (as was the case during the most recent tariff slump) while continuing to grow through uncertain times. Those are the sorts of stocks growth investors should be after.

Valuation multiples, growth rates, and the many other variables investors rely on to model what a given stock is worth at a point in time are almost always in flux. Trying to pin down what a company should be worth based on its historical performance can be tricky. Thus, I do think finding growth stocks with some semblance of stability is important. In a market that's shifting ever more quickly, finding companies investors can sleep well at night owning is important. When we look at growth stocks, this idea is one I think is worth doubling down on.

The post Is Growth Investing Still a Thing in 2025? 3 Considerations for Canadian Investors appeared first on The Motley Fool Canada. Fool contributor Chris MacDonald has no position in any of the stocks mentioned.
The Motley Fool has no position in any of the stocks mentioned. The Motley Fool has a disclosure policy.

The fight over who gets to regulate AI is far from over

Fast Company

03-07-2025



The AI regulation freeze that almost silenced the states

The Republicans' One Big Beautiful Bill Act has passed the Senate and is now headed for a final vote in the House before reaching the president's desk. But before its passage, senators removed a controversial amendment that would have imposed a five-year freeze on state-level regulation of AI models and apps. (The bill also includes billions in funding for new AI initiatives across federal departments, including Defense, Homeland Security, Commerce, and Energy.)

Had the amendment survived, it could have been disastrous for states, according to Michael Kleinman, policy lead at the Future of Life Institute. 'This is the worst possible way to legislate around AI for two reasons: First, it's making it almost impossible to do any kind of legislation, and second, it's happening in the most rushed and chaotic environment imaginable,' he says. The bill is over 900 pages long, and the Senate had just 72 hours to review it before debate and voting began.

The original proposal called for a 10-year freeze, but the Senate reduced it to five years and added exceptions for state laws protecting children and copyrights. However, it also introduced vague language barring any state law that places an 'undue or disproportionate' burden on AI companies. According to Kleinman, this actually made the situation worse. 'It gave AI company lawyers a chance to define what those terms mean,' he says. 'They could simply argue in court that any regulation was too burdensome and therefore subject to the federal-level freeze.'

States are already deep into the process of regulating AI development and use. California, Colorado, Illinois, New York, and Utah have been especially active, but all 50 states introduced new AI legislation during the 2025 session. So far, 28 states have adopted or enacted AI-related laws. That momentum is unlikely to slow, especially as real job losses begin to materialize from AI-driven automation.
AI regulation is popular with voters. Supporters argue that it can mitigate risks while still allowing for technological progress. The 'freeze' amendment, however, would have penalized states financially, particularly in broadband funding, for attempting to protect the public. Kleinman argues that no trade-off is necessary. 'We can have innovation, and we can also have regulations that protect children, families—jobs that protect all of us,' he says. 'AI companies will say [that] any regulation means there's no innovation, and that is not true. Almost all industries in this country are regulated. Right now, AI companies face less regulation than your neighborhood sandwich shop.'

The 'new precedent' for copyrighted AI training data may contain a poison pill

On June 23, Judge William Alsup ruled in Bartz v. Anthropic that Anthropic's training of its model Claude on lawfully purchased and digitized books is 'quintessentially transformative' (meaning Anthropic used the material to make something other than more books) and thus qualifies as fair use under U.S. copyright law. (While that's a big win for Anthropic, the court also said the firm likely violated copyright by including 7 million pirated digital books in its training data library. That issue will be addressed in a separate trial.)

Just two days later, in Kadrey v. Meta Platforms, Judge Vince Chhabria dismissed a lawsuit filed by 13 authors who claimed that Meta had trained its Llama models on their books without permission. In his decision, Chhabria said the authors failed to prove that Meta's use of their works had harmed the market for those works. But in a surprisingly frank passage, the judge noted that the plaintiffs' weak legal arguments played a major role in the outcome. They could have claimed, for example, that sales of their books would suffer in a marketplace flooded with AI-generated competitors.
'In cases involving uses like Meta's, it seems like the plaintiffs (copyright holders) will often win, at least where those cases have better-developed records on the market effects of the defendant's use,' Chhabria wrote in his decision. 'No matter how transformative LLM training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.'

Chhabria may have laid out a legal recipe for future victories by copyright holders against AI firms. Copyright attorneys around the country surely took note that they may need only present as evidence the thousands of AI-generated books currently for sale on Amazon. In a legal sense, every one of those titles competes with the human-written books that were used to train the models. Chhabria said news publishers (like The New York Times in its case against OpenAI and Microsoft) could have even more success using this 'market dilution' argument than book authors.

Apple is bringing in its ace to rally its troubled AI effort

Siri has a new owner within Apple, and it could help the company finally deliver the AI-powered personal assistant it promised in 2024. By March, Tim Cook had lost faith that the core Apple AI group led by John Giannandrea could finish and release a new, smarter Siri powered by generative AI, Bloomberg's Mark Gurman reported. Cook decided to move control of Siri development to a new group reporting to Apple's software head, Craig Federighi. He also brought in a rising star at the company, Mike Rockwell, to build and manage the new team, one that would sit at the nexus of Apple's AI, hardware, and software efforts and aim to bring the new Siri to market in 2026. Apple announced the new Siri features in 2024 but has so far been unable to deliver them.

Rockwell joined Apple in 2015 from Dolby Labs.
He first worked on the company's augmented reality initiatives and helped release ARKit, which enabled developers to build 3D spatial experiences. As pressure mounted for Apple to deliver a superior headset, the company tapped Rockwell to assemble a team to design and engineer what would become the Vision Pro, released in February 2024. The Vision Pro wasn't a commercial hit, largely due to its $3,500 price tag, but it proved Rockwell's ability to integrate complex hardware, software, and content systems.

Rockwell may have brought a new sense of urgency to Apple's AI-Siri effort. Recent reports say his group is moving quickly to decide whether Siri should be powered by Apple's own AI models or by more mature offerings from companies like OpenAI or Anthropic. Apple has already integrated OpenAI's ChatGPT into iPhones, but one report says Apple was impressed by Anthropic's Claude models as a potential brain for Siri. It could also be argued that Anthropic's culture and stance on safety and privacy are more in line with Apple's. Whatever the case, it seems the company is set to make some big moves.

ABC host Alan Kohler warns government will be forced to introduce another tax in Australia

Daily Mail

10-06-2025



The ABC's finance commentator Alan Kohler has warned governments may be forced to introduce a new tax on artificial intelligence to cope with robots replacing people in the workplace. AI, designed to replace human labour and boost company profits, threatens to erode the federal government's biggest source of revenue, personal income taxes, Kohler warned. This would leave the government with less money to spend on essential services like welfare, transport infrastructure and defence.

Personal income taxes make up a majority of federal government revenue, and Treasury expects to be even more reliant on this revenue source into the late 2020s, even as technology replaces jobs. To solve this problem, Kohler has suggested a new tax on AI. 'While the taxes on human labour are increasing, the spending on artificial intelligence designed to replace human labour is going through the roof and so are the profits,' he said. 'And why are the companies going for AI? Well, largely to replace staff.'

Geopolitical uncertainty and an ageing population also give the government less scope to cut spending to cope with a future plunge in revenue from personal income taxes, leaving a tax on AI as the only option. 'Good luck cutting spending to match the decline in personal income tax revenue as artificial intelligence starts replacing taxpaying human workers, governments will either have to tax the profits from robots and AI or tax wealth,' he said.

The federal government expects to collect $349.7 billion from income taxes in 2025-26, which would make up 51.7 per cent of the Commonwealth's $676.1 billion in revenue. By the 2028-29 financial year, Treasury expects personal income taxes to make up 54 per cent of revenue, as receipts from individuals soar to $420.3 billion from a total collection pool of $778.3 billion.
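The income-tax shares quoted above follow directly from the dollar figures in the Budget estimates; a minimal sketch of the arithmetic (figures in billions of Australian dollars, taken from the article):

```python
# Income tax as a share of total federal revenue, per the quoted Treasury estimates.
income_tax_2025_26 = 349.7   # $billion, personal income tax receipts
total_revenue_2025_26 = 676.1  # $billion, total Commonwealth revenue
income_tax_2028_29 = 420.3
total_revenue_2028_29 = 778.3

share_2025_26 = income_tax_2025_26 / total_revenue_2025_26 * 100
share_2028_29 = income_tax_2028_29 / total_revenue_2028_29 * 100

print(f"2025-26 share: {share_2025_26:.1f}%")  # 51.7%, as reported
print(f"2028-29 share: {share_2028_29:.1f}%")  # 54.0%, as reported
```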
The March Budget papers expected this to occur even as technology giants like Google, Microsoft, Amazon, Apple and OpenAI spent even more on artificial intelligence large language models. Global artificial intelligence investment hit $200 billion in 2024, and Forbes expects it to approach $400 billion this year, in Australian dollar terms.

Kohler noted the federal government was instead focused on applying a 15 per cent tax on unrealised gains on superannuation balances of more than $3 million, without indexing the threshold for inflation. He slammed the idea of taxing retirement savings without indexation, after AMP forecast the tax would affect the average 22-year-old worker in four decades' time. 'So, it's not just a wealth tax, it also brings bracket creep to super,' he said. 'And it may not be the last tax on wealth either.'

With AI threatening to replace jobs, increasing taxes on the highest 0.5 per cent of superannuation balances may do little to compensate for the collapse in personal income tax revenue. 'And the tax on high super balances is just a toe in that water,' he said.

The chief executives of the Commonwealth Bank and Telstra, Matt Comyn and Vicki Brady, told last week's Australian Financial Review AI Summit that artificial intelligence was advancing at a faster pace than many people anticipated. 'Everyone talks about Moore's law, that computer power doubles every two years. The capability of these agents is doubling every seven months,' Ms Brady said. Mr Comyn predicted AI would take away customer service jobs in banking. 'Whereas in other areas … around customer service, where there is greater automation, I think some of those roles will be challenged,' he said. White collar jobs are most at risk of being replaced by AI, with the likes of tax and payroll accountants and banking staff in danger, a Mandala Partners report predicted in 2023.
A tax on AI could potentially be used to fund a universal basic income, where everyone gets a guaranteed government payment without a means test. Basic Income Australia pitched this idea to a Senate committee on adopting artificial intelligence, but the inquiry last year declined to recommend that policy.
