Latest news with #AIAct


Irish Independent
a day ago
- Business
- Irish Independent
‘It's about competitiveness' – Government ‘considering' ChatGPT rollout to schools
It comes as almost three in four secondary school students admit to using the technology. The move would follow Estonia, which announced a rollout of ChatGPT in September to 20,000 secondary school students and 3,000 teachers, with another 38,000 students and an extra 3,000 teachers joining the scheme in September 2026. Under the EU's AI Act, Irish teachers are already required to undergo AI literacy training.

Senior officials from OpenAI, the maker of ChatGPT, met this week with Taoiseach Micheál Martin and Enterprise Ministers Peter Burke and Niamh Smyth. Speaking to the Irish Independent, chief financial officer Sarah Friar said that the proposed rollout would be an 'enterprise' deployment, controlled by schools and teachers. She said that Ireland has expressed keen interest in deploying the technology. 'They understand that it's about competitiveness,' she said. 'They're receptive.'

A recent Studyclix survey of 1,300 Irish secondary students found that 71pc now use ChatGPT or alternative AI software, with almost two in three using it for school-related work.

A spokesperson for the Teachers' Union of Ireland said that it had not been consulted to date on the issue and that secondary school teachers would need substantial training before any such rollout. 'While we have no position on particular platforms, our general position on AI has been that every effort must be made to optimise the potential benefits and protect against the risks that it presents to the education system,' the spokesperson said. 'A survey of our members earlier this year showed a growing concern at a lack of adequate guidelines and training on AI.'

A spokesperson for the Department of Education declined to comment.

Some Irish second-level principals have expressed concern over AI in schoolwork. Last month, the principal of the Cork city-based secondary school Coláiste Éamonn Rís warned of a 'flood' of AI-generated project work.
'You've got to do reform with consultation and the people you need to consult with are the teachers, because they're the people on the ground,' said Aaron Wolfe.

According to OpenAI, 28pc of Irish people now use ChatGPT at least once a week, a figure described by the tech giant as 'low' compared to other EU countries. 'We have an incredible deal with Estonia, where they're putting ChatGPT in for secondary school students,' said Ms Friar. 'The UK government's using ChatGPT to create lesson planning.'

The company says that such educational deployments are aimed at making ChatGPT 'as fundamental as the internet' to schools. 'ChatGPT has become a go-to tool for students globally to personalise their education and advance their personal development,' the company said this year when announcing its Estonian school rollout. 'Most ChatGPT users – nearly four in five – are under the age of 35 and the majority of conversations are focused on learning and schoolwork.

'By supporting AI literacy programmes, expanding access to AI, and developing policies to make AI training accessible and affordable, we can ensure students will be better equipped as the workforce of the future.'


Mint
4 days ago
- Health
- Mint
Mint Primer: Dr AI is here, but will the human touch go away?
Shanghai-based company Synyi AI recently unveiled a fully artificial intelligence (AI)-run clinic in Saudi Arabia. While AI excels at diagnosis, it lacks human empathy and nuanced judgement. Is the future of healthcare hybrid or autonomous? And can we trust AI docs?

Just how do these AI clinics function?

Synyi AI works with hospitals in China, using AI for diagnosis and medical research. Its new clinic, built with Saudi Arabia's Almoosa Health Group as a pilot, is led by an AI 'doctor'. Christened Dr Hua, it independently conducts consultations, makes diagnoses, and suggests treatments via a tablet. A human doctor then reviews and approves each plan. This marks a shift for AI from support tool to primary care provider. The AI doctor covers about 30 respiratory illnesses, including asthma and pharyngitis. Synyi plans to expand its scope to 50 conditions, including gastrointestinal and dermatological problems.

From assistant to doc seems like a jump...

Not really. AI systems already assist with checking symptoms, asking routine questions, and prioritizing patients before doctors take over. They can interpret scans and flag critical results. Hospitals in South Korea, China, India and the UAE use AI to manage logistics, bed use and infection control. In May 2024, Tsinghua University went a step further when it introduced a virtual 'Agent Hospital' with large language model (LLM)-powered doctors. Months later, Bauhinia Zhikang launched 42 AI doctors across 21 departments for internal testing of their diagnostics. With Synyi AI, fully autonomous clinics may become commonplace.

What can AI doctors do that humans can't?

AI tools from Google, Microsoft, Meta, Amazon and Nvidia can analyze X-rays, medical records and large datasets with precision and speed. They also generate treatment summaries. Unlike humans, AI doctors don't get tired.
They can handle up to 10,000 patients a week, compared with around 100 for a typical human doctor—valuable in overburdened health systems.

Can AI replace human doctors entirely?

AI excels at repetitive, data-heavy tasks like diagnosing common illnesses, analyzing scans and flagging abnormalities. Surgical robots like Da Vinci are used in hospitals worldwide, including Apollo, AIIMS and Fortis in India. But AI lacks empathy, moral reasoning, and adaptability in complex or unclear cases. Even at Synyi's AI-run clinic, a human doctor must approve each decision. Over-promising, too, can be risky: Babylon Health collapsed after its claims of diagnosing better than doctors fell short.

Can AI in healthcare be trusted?

There is a need for supervision given growing use in Asia—from surgical robots in South Korea, China, Japan and India to home-care AI research in Singapore. But regulation is catching up: Europe's AI Act, the US FDA and the World Health Organisation (WHO) all emphasize transparency, safety, and ethics, especially as LLMs can hallucinate (confidently provide false answers) and show bias. India's medical ethics code requires doctors to disclose AI use to patients. Keeping human doctors in the loop remains key for trust.


Forbes
22-05-2025
- Business
- Forbes
How AI Ecosystems Are Transforming Business Applications
We can't emphasize enough the importance of interconnected networks and ecosystems to the enterprise application software market. Industry cloud providers and hyperscalers possess several key advantages in nurturing and leading these innovation networks.

So what does this acceleration of AI software and services on industry cloud and hyperscaler marketplaces mean? It depends on the customer segment the providers are vying to attract. Enterprises are driven by strategic advantage, risk mitigation, maximizing the value derived from their AI investments, improving data locality, and reducing latency, all while prioritizing cost optimization and operational performance. Independent software vendors (ISVs) are driven by a distinct set of business and strategic goals that focus on building trust and meeting customer requirements while protecting their IP and mindshare.

For regulated industries, because these ecosystems often involve third-party vendors and cloud platforms, the vetting of AI partners and solutions requires a heightened level of scrutiny. AI sovereignty is more than a policy concern: compliance with strict legal mandates and AI-specific legislation such as the EU AI Act is critical to national security and economic interests. Governments are driven by control over the key enablers of AI development and deployment, and by the implications of global access and collaboration.

What do all these stakeholders have to gain from enterprise software markets operating within these massive ecosystems? ISVs, this is your opportunity to leverage these marketplaces and ecosystems to develop AI models and create AI solutions that address these challenges, making your solutions a critical asset to organizations deploying enterprise solutions at scale.

This post was written by VP, Research Director Linda Ivy-Rosser and Principal Analyst Faram Medhora; the blog originally featured here.


Euronews
22-05-2025
- Business
- Euronews
'Thank you for the copyright': ABBA legend warns against AI code
ABBA member Björn Ulvaeus warned MEPs in Brussels on Tuesday that he is concerned about 'proposals driven by Big Tech' that weaken creative rights under the EU's AI Act. 'I am pro-tech, but I am concerned about current proposals that are being driven by the tech sector to weaken creative rights,' Ulvaeus told a hearing in the European Parliament's Committee on Culture and Education on Tuesday.

The comments from the singer-songwriter - who is the president of the International Confederation of Societies of Authors and Composers (CISAC) - add to concerns voiced in recent months by the creative industry, including publishers and rights holders, about the drafting process of a voluntary Code of Practice on General Purpose AI (GPAI) for large language models like ChatGPT under the AI Act.

The European Commission appointed 13 experts to consider the issue last September, using plenary sessions and workshops to allow some 1,000 participants to share feedback. The draft texts published since then aim to help providers of AI models comply with the EU's AI rulebook, but publishers criticised their interplay with copyright rules, while US tech giants complained about their 'restrictive' and burdensome effects.

'The argument that AI can only be achieved if copyright is weakened is false and dangerous. AI should not be built on theft, it would be an historic abandonment of principles,' Ulvaeus said. 'The EU has been a champion of creative rights. But now we see that the Code ignores calls from the creative sector for transparency. What we want is for the EU to lead on AI regulation, not to backslide,' he said, adding that the implementation of the act should 'stay true to the original objective'.

The latest draft, due in early May, was delayed because the Commission received a large number of requests to keep the consultations open longer than originally planned. The aim is to publish the latest draft before the summer. On 2 August, the rules on GPAI tools enter into force.
The administration led by US President Donald Trump has said the EU's digital rules, including the Code, stifle innovation. The US Mission to the EU sent a letter to the EU executive pushing back against the Code in April. Similar concerns were voiced by US Big Tech companies: Meta's global policy chief, Joel Kaplan, said in February that it would not sign the Code because it had issues with the text as it then stood. A senior official working at the Commission's AI Office told Euronews earlier this month, however, that US companies 'are very proactive' and that there is not the sense that 'they are pulling back because of a change in the administration.'

The AI Act itself - which regulates AI tools according to the risk they pose to society - entered into force in August last year. Its provisions apply gradually, before the Act becomes fully applicable in 2027. The EU executive can decide to formalise the Code under the AI Act through an implementing act. Sir Elton John recently described the UK government as "absolute losers" and said he felt "incredibly betrayed" over plans to exempt technology firms developing AI from copyright laws.

The European Commission will offer relief to small mid-cap companies burdened by the current scope of the General Data Protection Regulation (GDPR) in a rule simplification package known as an Omnibus, to be published on Wednesday, according to a working document seen by Euronews. Currently, companies with fewer than 250 employees are exempt from certain record-keeping obligations under the data privacy rules to reduce their administrative costs; the Commission now proposes to extend this derogation to so-called small mid-cap companies, which can employ up to 500 people and have higher turnovers. Under the plan - the Commission's fourth such Omnibus - such companies will only have to keep a record of the processing of users' data when it is considered 'high risk', for example private medical information.
The change comes seven years after the GDPR took effect. Since then the rulebook has shielded consumer data from US tech giants but is also perceived as burdensome for smaller and mid-sized companies that often lack the means to hire data protection lawyers. The biggest fine issued under the rules so far is €1.2 billion on US tech giant Meta: the Irish data protection authority fined the company in 2023 for invalid data transfers. Although fines are generally lower for smaller businesses, at up to €20 million or 4% of annual turnover they remain significant. In the Netherlands, for example, VoetbalTV, a video platform for amateur football games, was fined €575,000 by the Dutch privacy regulator in 2018. Although the company appealed and the court overturned the fine, it had to file for bankruptcy.

Both EU lawmaker Axel Voss (Germany/EPP), who was involved in steering the legislation through the European Parliament, and Austrian privacy activist Max Schrems, whose organisation NOYB filed numerous data protection complaints with regulators, called for different rules for smaller companies earlier this year. Under the plan, 90% of the businesses concerned - small retailers and manufacturers, for example - would face only minor compliance tasks: no in-house data protection officer, no excessive documentation, and lower administrative fines, capped at €500,000. Voss said his proposal would not weaken the EU's privacy standards, but make them 'more enforceable, and more proportionate'.

Similar calls are coming from the member states: the new German government stressed in its coalition plan that it will work at EU level to ensure that 'non-commercial activities (for example, associations), small and medium-sized enterprises, and low-risk data processing are exempt from the scope of the GDPR.' By contrast, civil society and consumer groups have warned that the Commission's plan to ease GDPR rules could have unintended consequences.
On Tuesday, privacy advocacy group EDRi stated in an open letter that the change risks 'weakening key accountability safeguards' by making data protection obligations depend on company size rather than the actual risk to people's rights. It also fears this could lead to further pressure to roll back other parts of the GDPR. Consumer advocates share similar concerns: in a letter from late April, pan-European consumer group BEUC warned that even small companies can cause serious harm through data breaches. It argued that using headcount or turnover as a basis for exemptions could create legal uncertainty and go against EU fundamental rights. Both groups say the focus should instead be on better enforcement of existing rules and more practical support for small companies.

Meanwhile, reforms of the data privacy law are under negotiation between the Council and the European Parliament. A new round of political discussions on the GDPR Procedural Regulation is expected to take place on Wednesday. EU institutions are attempting to finalise a long-awaited deal to improve cooperation between national data protection authorities. The regulation is meant to address delays and inconsistencies in how cross-border cases are handled under the GDPR, by harmonising procedures and timelines. According to experts familiar with the file, one of the main sticking points is whether to introduce binding deadlines for national authorities to act on complaints. While the Parliament has pushed for clearer timelines to speed up enforcement, some member states argue that fixed deadlines could overwhelm authorities and increase legal risks. This change is, however, not expected to impact the Commission's fourth Omnibus package.

Finextra
21-05-2025
- Business
- Finextra
Fewer than 1 in 4 banks ready for AI era
A vast majority of banks are unprepared for the advent of artificial intelligence, according to recently published research.

A report from Boston Consulting Group (BCG) found that almost all banks have invested in AI technology, yet fewer than 1 in 4 have progressed from pilots and proofs of concept to fully implementing the technology in their daily operations. "The leap from predictive analytics to generative AI—and now to fully autonomous, agentic systems—is here," states the report. "AI is no longer a fringe experiment; it's the engine of next-generation banking. Customer interactions, loan approvals, fraud detection, even compliance monitoring: all are ripe for reinvention."

Yet a recent BCG survey finds that only 25% of institutions have "woven these capabilities into their strategic playbook", states the report. "The other 75% remain stuck in siloed pilots and proofs of concept, risking irrelevance as digital-first competitors accelerate ahead. Most banks are deploying AI toward basic activities—not those that lead to transformation."

According to BCG, banks must move beyond pilots to redefine strategy, technology and governance - or "risk losing control of the financial landscape to faster movers". "Early movers will set the pace—and the terms—of AI competition," states the report. "Lagging banks will find themselves racing to catch up under conditions they didn't choose."

The publication of the report comes at an important time for AI in financial services on both sides of the Atlantic. The EU's AI Act came into force in August 2024. Meanwhile, the largest US bank, JPMorgan Chase, recently suggested it intends to ramp up its use of AI to increase efficiency while also calling for a slowdown in hiring.
The bank's CFO, Jeremy Barnum, told investors at a meeting in New York that recruitment is set to slow following the hiring of 60,000 people over the last five years, equivalent to a 23% increase in head count. 'We're asking people to resist head count growth where possible and increase their focus on efficiency,' said Barnum, in comments reported by Business Insider.