
Latest news with #AIAct

Is ASML Stock a Buy Ahead of Q2 Earnings?

Business Insider

4 hours ago



Dutch semiconductor equipment company ASML (ASML) is set to release its Q2 earnings report this week, leaving some investors wondering whether it's a good idea to buy shares of ASML stock beforehand.

What Wall Street Expects

Wall Street expects ASML to announce quarterly earnings of $5.94 per share, up 37.5% compared with the same period last year. Revenues are projected to reach $8.55 billion, increasing 27.2% from a year ago. Will ASML be able to beat these estimates? It has a very strong track record of doing just that in recent times.

Key Issues Ahead of Earnings

ASML said at an investor event last November that 2026 would be a growth year for the business. As the months have gone on, investor confidence in that prediction has grown. The stock's share price has dropped 24% over the last twelve months, but is up 17% since the start of the year. In its first quarter, the group reported revenue of €7.74 billion, up from €5.29 billion in the same period last year. However, net bookings of €3.94 billion missed analyst forecasts of €4.89 billion. Its net profit was €2.36 billion, versus forecasts of €2.3 billion. At the time, ASML chief executive Christophe Fouquet warned that President Trump's tariffs had created new uncertainty for the economy and 'our potential market demands.' Indeed, chip export restrictions to China mean ASML's customers there have been buying lower-end equipment. There are also concerns over a ramp-up in European regulation of the AI and digital sector through measures such as the AI Act.

Analysts, on average, expect second-quarter bookings to reach €4.44 billion, and €21.3 billion for the full year. Marc Hesselink, an analyst at ING, believes that hitting those forecasts depends largely on orders from the world's top contract chipmaker TSMC (TSM). The company, which is also ASML's top customer, is expected to order the tools it needs for its upcoming manufacturing process, N2, this year. 'We see a better-than-expected demand and order from TSMC and China players, but lower-than-expected demand and order from Intel (INTC) and Samsung,' added Kevin Wang, an analyst at Mizuho.

Is ASML a Good Stock to Buy Now?

Rogue bots? AI firms must pay up

Economic Times

15 hours ago



When Elon Musk's xAI was forced to apologise this week after its Grok chatbot spewed antisemitic content and white nationalist talking points, the response felt depressingly familiar: suspend the service, issue an apology and promise to do better. Rinse and repeat. This isn't the first time we've seen this playbook. Microsoft's Tay chatbot disaster in 2016 followed a similar pattern. The fact that we're here again, nearly a decade later, suggests the AI industry has learnt remarkably little from its mistakes.

But the world is no longer willing to accept 'sorry' as sufficient. This is because AI has become a force multiplier for content generation and dissemination, and the time-to-impact has shrunk. Thus, liability and punitive actions are being discussed.

The Grok incident revealed a troubling aspect of how AI companies approach accountability. According to xAI, the problematic behaviour emerged after they tweaked their system to allow more 'politically incorrect' responses - a decision that seems reckless. When the inevitable happened, they blamed deprecated code that should have been removed. If you're building systems capable of reaching millions of users, shouldn't you know what code is running in production?

The real problem isn't technical - it's philosophical. Too many AI companies treat bias and harmful content as unfortunate side effects to be addressed after deployment, rather than fundamental risks to be prevented beforehand. This reactive approach worked when the stakes were lower, but AI systems now operate at unprecedented scale and influence. When a chatbot generates hate speech, it's not merely embarrassing - it's dangerous, legitimising and amplifying extremist ideologies to vast audiences.

The legal landscape is shifting rapidly, and AI companies ignoring these changes do so at their peril. The EU's AI Act, whose first provisions took effect in February, represents a shift from reactive regulation to proactive governance. Companies can no longer apologise their way out of AI failures - they must demonstrate they've implemented robust safeguards before deployment. California's AB 316, introduced last January, takes an even more direct approach by prohibiting the 'the AI did it' defence in civil cases. This legislation recognises what should be obvious: companies that develop and deploy AI systems bear responsibility for their outputs, regardless of whether those outputs were 'intended'.

India's approach may prove more punitive than the EU's regulatory framework and more immediate than the US litigation-based system, focusing on swift enforcement of existing criminal laws rather than waiting for new AI-specific legislation. India doesn't yet have AI-specific legislation, but if Grok's antisemitic incident had occurred with Indian users, then steps like immediate blocking of the AI service, a criminal case against xAI under IPC Section 153A, and a demand for content removal from the X platform would have been likely.

The Grok incident may mark a turning point. Regulators worldwide are demanding proactive measures rather than reactive damage control, and courts are increasingly willing to hold companies directly liable for their systems' outputs. This shift is long overdue. AI systems aren't just software - they're powerful tools that shape public discourse, influence decision-making and can cause real-world harm. The companies that build these systems must be held to higher standards than traditional software developers, with corresponding legal and ethical obligations.

The question facing the AI industry isn't whether to embrace this new reality - it's whether to do so voluntarily or have it imposed by regulators and courts. Companies that continue to rely on the old playbook of post-incident apologies will find themselves increasingly isolated in a world demanding accountability. The AI industry's true maturity will show not in flashy demos or sky-high valuations, but in its commitment to safety over speed, rigour over shortcuts, and real accountability over empty apologies. In this game, 'sorry' won't cut it - only responsibility will.

The writer is a commentator on digital policy issues. (Disclaimer: The opinions expressed in this column are those of the writer.)

NZ's AI strategy: 'light touch' regulation and business opportunities

Newsroom

16 hours ago



The Government's AI strategy confirms the country is taking a light-touch approach to AI regulation. This will provide reassurance to businesses looking to embrace the benefits of AI, while also reminding businesses of their governance responsibilities and the need to ensure compliance with existing legal frameworks. The AI strategy follows recent guidance for the public sector, discussed in our previous article. Alongside the AI strategy, the Government has also issued a note entitled 'Responsible AI Guidance for Businesses'. In this article, we explore the key takeaways for New Zealand businesses and next steps.

Key takeaways

The AI strategy has been developed following a Cabinet decision in July 2024 committing to a strategic approach. The paper recognised that a clear strategic direction would 'clear the path for AI to deliver better outcomes for people in New Zealand'. The new strategy seeks to achieve this in various ways.

Regulatory clarity and light-touch legislation

The strategy notes that uncertainty about how existing laws apply to AI may result in 'a cautious approach to AI implementation until regulatory clarity improves'. As a result, it confirms New Zealand is taking a light-touch and 'principles-based' approach to AI policy. It helpfully identifies that New Zealand's existing regulatory frameworks (e.g., privacy, consumer protection, human rights) are largely principles-based and technology-neutral, and can be updated if needed to enable AI innovation. This is a pragmatic and positive approach that we expect will provide reassurance to businesses exploring the adoption of AI, and will avoid some of the challenges created by detailed standalone legislation such as the EU's AI Act (as discussed in our prior commentary).

Adoption focus

The strategy outlines New Zealand's deliberate focus on AI adoption rather than development, recognising the economic challenge and the significant investment required to create foundational AI. This approach is intended to 'more rapidly realise productivity benefits across the economy without waiting for local AI development to mature'.

Upskilling the workforce

The strategy identifies that New Zealand faces a shortage of AI expertise across several sectors. It notes that New Zealand universities are helping to bridge the gap by building a 'future-ready' workforce through specialised programmes, and that the Government is investing in tuition, STEM, and youth support to boost enrolment and career pathways.

In addition, the new 'Responsible AI Guidance' offers a valuable framework to help businesses adopt AI responsibly and effectively. The guidance encourages organisations to clearly define their purpose for using AI, prepare through stakeholder engagement and safe testing, and align AI objectives with internal policies. It also recommends building strong governance structures and ensuring compliance with existing regulations. The guidance emphasises the importance of high-quality, unbiased data and cautions against using AI in areas where human judgment is essential.

What does this mean for your business?

The strategy will provide reassurance to businesses seeking to adopt AI systems, and the Responsible AI Guidance offers a helpful consolidated roadmap. In practice, however, applying the recommendations can be complex, and businesses should start thinking early about the implications of the Government's announcement and how they can respond to the new guidance.
We summarise below what we see as the key takeaways and next steps:

  • Clarify your AI purpose: Define what you want AI to achieve in your organisation and ensure the intended use is lawful and aligned with your business goals.
  • Prepare for adoption: This should include identifying current processes that are inefficient and could benefit from AI, engaging with stakeholders for input, and testing solutions in controlled environments like AI sandboxes.
  • Build internal capability: Set up dedicated teams to identify the business's AI objectives and values, develop internal principles to guide the responsible and ethical use of AI, and establish consistent principles and terminology across the business.
  • Establish governance frameworks: Form a governance team to oversee risk, compliance, and regulatory alignment, and maintain transparent communication with stakeholders to build trust.
  • Ensure data quality and ethical use: Use clean, unbiased data to train AI systems, and avoid deploying AI in areas where human judgment is critical to protect individuals' rights and wellbeing.

In addition, given the Government's light-touch regulatory approach and preference for relying on existing legal frameworks, it will be critical for businesses to ensure they are familiar with how current laws will apply to the new technology. That should include in particular ensuring that:

Germany's two biggest technology companies are unhappy with EU's AI regulations; call it 'Toxic' for ...

Time of India

a day ago



Two of the biggest technology companies in Germany have raised alarm over the European Union's AI regulations. Top executives of Siemens and SAP have slammed the EU's AI Act, blaming it for Europe lagging behind. The CEOs of Siemens and SAP have called on the European Union (EU) to overhaul its artificial intelligence regulations, arguing that the current framework is stifling technological innovation, according to an interview with the Frankfurter Allgemeine Zeitung.

Siemens CEO Roland Busch and SAP CEO Christian Klein criticised the EU's AI Act, which became law last year to ensure that AI systems are safe, transparent, and respect fundamental rights. The legislation categorizes AI applications by risk, imposing specific security and transparency requirements on providers. However, Busch argued that the Act is a significant factor in Europe's lag in AI development, compounded by overlapping and contradictory regulations. "The EU's regulatory approach is holding back progress," Busch told the newspaper. He also described the EU's Data Act, which governs how companies handle consumer and corporate data, as "toxic" for digital business models.

Siemens and SAP mum on American companies' letter to EU

While companies like Google and Meta recently urged Brussels to delay the AI rules in a letter, Busch declined to support their letter, stating it did not address the core issues. Klein, meanwhile, cautioned against merely replicating U.S. strategies focused on heavy infrastructure investments. "Infrastructure shortages are not the main barrier in Europe," Klein said. "The real issue is unlocking the potential of our data." Busch echoed this sentiment, noting, "We are sitting on a treasure trove of data in Europe, but we are not yet able to tap into it. It's not access to computing capacity that we're lacking, but the release of resources." Both CEOs urged the EU to reform data regulations before prioritizing investments in data centers, emphasizing the need for a regulatory framework that fosters rather than hinders technological advancement.

New Zealand's National AI Strategy big on 'economic opportunity', short on managing ethical and social risk: Opinion

NZ Herald

2 days ago



Nvidia, which makes the hardware that powers AI technology, just became the first publicly traded company to surpass a $4 trillion market valuation. It'd be great if New Zealand could get a slice of that pie. New Zealand doesn't have the capacity to build new generative AI systems, however. That takes tens of thousands of Nvidia's chips, costing many millions of dollars that only big tech companies or large nation states can afford. What New Zealand can do is build new systems and services around these models, either by fine-tuning them or using them as part of a bigger software system or service.

The Government isn't offering any new money to help companies do this. Its AI strategy is about reducing barriers, providing regulatory guidance, building capacity and ensuring adoption happens responsibly. But there aren't many barriers to begin with. The regulatory guidance contained in the strategy essentially says 'we won't regulate'. Existing laws are said to be 'technology-neutral' and therefore sufficient. As for building capacity, the country's tertiary sector is more under-funded than ever, with universities cutting courses and staff. Humanities research into AI ethics is also ineligible for Government funding as it doesn't contribute to economic growth.

A relaxed regulatory regime

The issue of responsible adoption is perhaps of most concern. The 42-page Responsible AI Guidance for Businesses document, released alongside the strategy, contains useful material on issues such as detecting bias, measuring model accuracy, and human oversight. But it is just that – guidance – and entirely voluntary. This puts New Zealand among the most relaxed nations when it comes to AI regulation, alongside Japan and Singapore. At the other end is the European Union, which enacted its comprehensive AI Act in 2024 and has stood fast against lobbying to delay its rollout.

The relaxed approach is interesting in light of New Zealand being ranked third-to-last out of 47 countries in a recent survey of trust in AI. In another survey from last year, 66% of New Zealanders reported being nervous about the impacts of AI. Some of the nervousness can be explained by AI being a new technology with well-documented examples of inappropriate use, intentional or not. Deepfakes as a form of cyberbullying have become a major concern. Even the Act Party, not generally in favour of more regulation, wants to criminalise the creation and sharing of non-consensual, sexually explicit deepfakes. Generative image, video and music creation is reducing the demand for creative workers – even though it is their very work that was used to train the AI models.

But there are other, more subtle issues, too. AI systems learn from data. If that data is biased, then those systems will learn to be biased, too. New Zealanders are right to be anxious about the prospect of private sector companies denying them jobs, entry to supermarkets or a bank loan because of something in their pasts. Because modern deep learning models are so complex and impenetrable, it can be impossible to determine how an AI system made a decision. And what of the potential for AI to be used online to mislead voters and discredit the democratic process, as the New York Times has reported may have occurred already in at least 50 cases?

Managing risk the European way

The strategy is essentially silent on all of these issues. It also doesn't mention Te Tiriti o Waitangi/Treaty of Waitangi. Even Google's AI summary tells me this is the nation's founding document, laying the groundwork for Māori and the Crown to coexist. AI, like any data-driven system, has the potential to disproportionately disadvantage Māori if it involves systems designed (and trained) overseas for other populations. Allowing these systems to be imported and deployed in Aotearoa New Zealand in sensitive applications – healthcare or justice, for example – without any regulation or oversight risks worsening inequalities even further.

What's the alternative? The EU offers some useful answers. It has taken the approach of categorising AI uses based on risk. 'Unacceptable risk' – the likes of social scoring (where individuals' daily activities are monitored and scored for their societal benefit) and AI hacking – is outright banned. High-risk systems, such as those used for employment or transportation infrastructure, face strict obligations, including risk assessments and human oversight. Limited and minimal-risk applications – the biggest category by far – carry very little red tape. This feels like a mature approach New Zealand might emulate. It wouldn't stymie productivity much – unless companies were doing something risky. In which case, the 66% of New Zealanders who are nervous about AI might well agree it's worth slowing down and getting it right.
