Latest news with #InnovationandTechnology


Time of India
16 hours ago
- Politics
Kairali Chip lacks patent, proof or use case: SUCC
T'puram: Kerala University of Digital Sciences, Innovation and Technology (DUK) is back in the news for the wrong reasons. This time, many are questioning the scientific basis and credibility of its claim that it developed a semiconductor chip (Kairali Chip or K-Chip) billed as the first of its kind in the country. Governor R V Arlekar has already asked the state police to probe the alleged financial irregularities at DUK. He acted on a report from former vice-chancellor (in-charge) Ciza Thomas and also recommended a CAG audit at the university. However, CM Pinarayi Vijayan, who serves as DUK's pro-chancellor, dismissed the allegations raised by opposition leader VD Satheesan about the university's functioning.

The Save University Campaign Committee (SUCC) questioned the truth behind the chip development claim. After DUK announced the chip project, the IT department gave Rs 25 lakh to Prof Alex P James, who claimed to have led the research and development. "It is important to note that the so-called 'K-Chip' is not an industrially-viable product. It was only a design entry submitted on a global online platform (supported by Google) meant for education and research training. Thousands of students and researchers across the world use this platform to submit similar designs for skill-building," said SUCC in a memorandum to Arlekar.

The claim that Kerala developed a full-fledged chip through this process is misleading, the memorandum said. Presenting an academic-level design as a fully developed indigenous semiconductor chip is at best a serious exaggeration and at worst a deliberate attempt to mislead the public and policy-makers, it added. Despite the hype around the so-called 'K-Chip', DUK lacks even the basic infrastructure needed for full semiconductor design, testing or production. No official validation reports, peer-reviewed papers, deployment records or actual real-world uses exist in connection with this chip.
Its design has not been patented or shared in any credible scientific forum and there is no proof of any commercial value. Without clear technical backing, calling this India's 'first indigenous chip' was both premature and misleading, SUCC said in the memorandum. "Kerala govt and DUK's reluctance to provide an official explanation or documentation regarding K-Chip, even when the Centre is actively investing in chip-making, raises concerns," it added.

The memorandum also questioned the academic background of James and claimed that project funds worth crores given to DUK were going to private firms registered in his name, allegedly with support from key people in the CMO.


Scoop
a day ago
- Health
New Investment To Drive AI And Biotech Innovation
Minister of Science, Innovation and Technology

The Government is investing $24 million in smart, practical science that will help New Zealanders live healthier lives and support the development of sustainable food industries. Science, Innovation and Technology Minister Dr Shane Reti today announced two major research programmes in partnership with Singapore, focusing on artificial intelligence (AI) tools for healthy ageing and biotechnology for future food production.

'Science and innovation are critical to building a high-growth, high-value economy. That's why we're investing in research with a clear line of sight to commercial outcomes and real public benefit,' Dr Reti says. 'This Government is focused on backing the technologies that will deliver real-world results for New Zealanders – not just in the lab, but in our hospitals, homes, and businesses.

'Whether it's supporting older Kiwis to live well for longer or developing smarter food production systems, these projects are about practical applications of advanced science to solve problems and grow our economy.'

Funded through the Catalyst Fund, designed to facilitate international collaboration, the investment will support seven joint research projects over the next three years, deepening New Zealand's research ties with Singapore and building capability in AI and biotechnology. The AI programme, delivered alongside AI Singapore, directly supports the Government's Artificial Intelligence Strategy – a plan to use AI to safely and effectively boost productivity and deliver better public services.

'Our AI Strategy is about encouraging the uptake of AI to improve productivity and realise its potential to deliver faster, smarter, and more personalised services, including in healthcare,' says Dr Reti. 'These projects will help develop tools that support clinicians and improve care for our ageing population.
Our collaboration with Singapore, a country well advanced in its use and development of AI, will help grow Kiwi capability to explore future practical uses of AI.'

The biotechnology programme will focus on turning scientific research into scalable food solutions, including alternative proteins and new food ingredients, in partnership with Singapore's A*STAR. 'These partnerships are about future-proofing our economy and our communities — tackling global challenges with New Zealand science at the forefront,' Dr Reti says.


Spectator
a day ago
- Politics
The lanyard class is imploding – and it can't blame Musk
I was surprised to read a report by Sunder Katwala's thinktank British Future saying the UK is a 'powder keg' of community tensions and warning of further unrest this summer. In a foreword by Sajid Javid and Jon Cruddas, who are co-chairing a commission looking into last year's riots, Britain is described as 'fragmented' and 'fragile', seemingly only one newspaper headline away from descending into civil war.

Aren't these the same public intellectuals and politicians who, until ten minutes ago, were cheerleaders for multiculturalism? I thought the arrival of hundreds of thousands of immigrants a year was enriching our street life, improving our cuisine and revitalising our art and literature? Isn't the absorption of millions of foreign nationals, many from countries with very different customs to ours, a great British success story? Diversity is our strength, don'tcha know. Now, suddenly, our cities are hellscapes, riven with ethnic and religious tensions that could erupt into violence any minute. Enoch Powell was a prophet all along.

This loss of faith by our metropolitan overlords seems to have happened overnight. Have they all had their iPhones ripped from their hands by gangs of marauding cyclists? Their rose-tinted view of mass immigration has been replaced by a pathological fear of social disorder: anarchophobia. That was brought home to me when it emerged that one reason the government suppressed the news about rehousing 24,000 Afghans was the fear that it would light the blue touch paper and… kaboom!

Our lords and masters really do think that ordinary people are a bunch of angry troglodytes milling about on street corners looking for the slightest excuse to start setting emergency vehicles ablaze. Needless to say, there's only one solution to this combustible state of affairs: more censorship. Forget about stopping the boats or doing anything about the grooming gangs. No, the reason the proles are on a hair trigger is 'misinformation'.
It's all Elon Musk's fault! That was the conclusion of a cross-party group of MPs on the Science, Innovation and Technology select committee, who issued a report two weeks ago claiming the Online Safety Act is basically useless because it hasn't given Ofcom the power to force platforms to remove fake news. 'The viral amplification of false and harmful content can cause very real harm – helping to drive the riots we saw last summer,' said the chair, Dame Chi Onwurah.

No wonder Lucy Connolly has been banged up for 31 months. It was her tweet after the Southport attack, saying people could set fire to asylum hotels for all she cared and implying an illegal immigrant was responsible for the murders, that was single-handedly responsible for the worst outbreak of social disorder since 2011. If only Musk had trained his algorithms to delete such dangerous 'misinformation', we'd have had a summer of multicultural street parties with Progress Pride flags and big banners saying: 'Refugees Welcome.'

I've been trying to work out the mental gymnastics behind such an 'analysis' and I think it goes something like this: 'We have no regrets about promoting mass immigration despite the electorate repeatedly telling us not to. That policy has absolutely nothing to do with the growing cynicism about politicians, collapsing trust in institutions and fraying social cohesion. Our vision of a rainbow Britain would have come to pass were it not for that pesky Musk and his hateful algorithms.' I'm exaggerating, but only slightly.

Listen to Imran Ahmed, the chief executive of the Center for Countering Digital Hate, a pro-censorship lobby group. 'One year on from the Southport riots, X remains the crucial hub for hate-filled lies and incitement of violence targeting migrants and Muslims,' he told the Guardian. Incidentally, Imran has fled to Washington, so great is his anarchophobia.
At some point you'd think it would occur to these geniuses that the aggressive policing of social media posts – more than 30 people a day are being nicked for speech crimes – isn't having the desired effect. Last time I checked, Reform UK was riding high at 34 per cent in the polls. Maybe the real reason these people don't like their policies being attacked online is that, deep down, they've lost faith in them themselves and don't want to be forced to defend them. It's not British society that's on the verge of imploding. It's the lanyard class.


Forbes
3 days ago
- Business
UK Announces Deal With OpenAI To Augment Public Services And AI Power
UK and OpenAI Announce a New MOU

OpenAI and the United Kingdom's Department for Science, Innovation and Technology signed a joint memorandum of understanding yesterday that sets out an ambitious joint plan to put OpenAI's models to work in day-to-day government tasks, to build new computing hubs on British soil, and to share security know-how between company engineers and the UK's AI Safety Institute. The agreement is voluntary and doesn't obligate the UK in any exclusive manner, yet the commitments are concrete: both sides want pilot projects running inside the civil service within the next twelve months.

Details of the Joint MOU

The memorandum identifies four key areas of joint innovation. It frames AI as a tool to raise productivity, speed discovery and tackle social problems, provided the public is involved so trust grows alongside capability. The partners will look for concrete deployments of advanced models across government and business, giving civil servants, small firms and start-ups new ways to navigate rules, draft documents and solve hard problems in areas such as justice, defence and education. To run those models at scale, they will explore UK-based data-centre capacity, including possible 'AI Growth Zones,' so critical workloads remain onshore and available when demand peaks. Finally, the deal deepens technical information-sharing with the UK AI Security Institute, creating a feedback loop that tracks emerging capabilities and risks and co-designs safeguards to protect the public and democratic norms.

OpenAI also plans to enlarge its London office, currently at more than 100 staff, to house research and customer support. OpenAI CEO Sam Altman has long been interested in the UK as a region for AI development because of the UK's long history in AI research, most notably starting with Alan Turing.
'AI will be fundamental in driving the change we need to see across the country, whether that's in fixing the NHS, breaking down barriers to opportunity or driving economic growth,' said UK Technology Secretary Peter Kyle. 'That's why we need to make sure Britain is front and centre when it comes to developing and deploying AI, so we can make sure it works for us.' Altman echoed the ambition, calling AI 'a core technology for nation building' and urging Britain to move from planning to delivery.

The Increasing Pace of Governmental AI Adoption and Funding

Three factors explain the timing. Universities in London, Cambridge and Oxford supply a steady stream of machine-learning talent. Since the Bletchley Park summit in 2023, the UK has positioned itself as a broker of global safety standards, giving investors a sense of legal stability. And with a sluggish economy, ministers need a credible growth story; large-scale automation of paperwork is an easy pitch to voters. The UK offers scientists clear rules and public money, and the government has already promised up to £500 million for domestic compute clusters and is reviewing bids for 'AI Growth Zones'.

The UK is not alone in its AI ambitions. France has funnelled billions into a joint venture with Mistral AI and Nvidia, while Germany is courting Anthropic after its own memorandum with DSIT in February. The UK believes its head start with OpenAI, still the best-known brand in generative AI, gives it an edge in landing commercial spin-offs and high-paid jobs.

Risks That Could Derail the Plan

Kyle knows that any misstep, such as an AI bot giving faulty benefit advice, could sink trust. That is why the memorandum pairs deployment with security research and reserves the right for civil-service experts to veto outputs that look unreliable. The UK has had a long history with AI, and with the risks posed by a lack of progress.
Notably, the infamous Lighthill report in 1973 was widely credited with contributing to the first 'AI winter', a marked period of declining interest and funding in AI. As such, careful political consideration of AI is key to ensuring ongoing support.

Public-sector unions may resist widespread automation, arguing that AI oversight creates as much work as it saves. Likewise, there is widespread concern about vendor lock-in under the deal with OpenAI. By insisting on locally owned data centres and keeping the MOU open to additional suppliers, ministers hope to avoid a repeat of earlier cloud contracts that left sensitive workloads offshore and locked in pricey relationships. Finally, a single headline error, such as a chatbot delivering wrong tax guidance, could spark calls for a pause. However, the benefits currently outweigh the risks.

No department stands to gain more than the UK's National Health Service, burdened by a record elective-care backlog. Internal modelling seen by officials suggests that automated triage and note-summarisation tools could return thousands of clinical hours each week. If early pilots succeed, hospitals in Manchester and Bristol will be next in line.

And OpenAI is not new to the UK government. A chatbot for small businesses has been live for months, and an internal assistant nicknamed 'Humphrey' now drafts meeting notes and triages overflowing inboxes. Another tool, 'Consult', sorts thousands of public submissions in minutes, freeing policy teams for higher-level work. The new agreement aims to lift these trials out of the margins and weave them more fully into the fabric of government.

What's Next

Joint project teams will start by mapping use cases in justice, defence and social care. They must clear privacy impact assessments before live trials begin. If those trials shave measurable hours off routine tasks, the Treasury plans to set aside money in the 2026 Autumn Statement for a phased rollout.
The UK's agreement with OpenAI is an experiment in modern statecraft. It tests whether governmental organizations can deploy privately built, high-end models while keeping control of data and infrastructure. Success would mean delivering the promised benefits while avoiding the significant risks. Failure would reinforce arguments that large language models remain better at publicity than at public service.

Engadget
4 days ago
- Business
OpenAI is getting closer with the UK government
The UK government has announced a new strategic partnership with OpenAI that could lead the company to "expand AI security research collaborations, explore investing in UK AI infrastructure like data centers, and find new ways for taxpayer funded services" to use AI. The move follows the introduction of the AI Action Plan in January, which fast-tracks the construction of data centers in certain regions of the UK.

In the (entirely voluntary) partnership agreement — technically a Memorandum of Understanding — OpenAI and the Department for Science, Innovation and Technology (DSIT) agree to tackle positive-sounding but ultimately vague tasks, like finding ways for "advanced AI models" to be used in both the public and private sectors and sharing information around the security risks of AI. OpenAI is also supposed to help DSIT identify ways it can deliver on the infrastructure goals of the AI Action Plan, and possibly explore building in one of the UK's new data center-friendly "AI Growth Zones."

All of this sounds nebulous and non-committal because the memorandum OpenAI signed is not legally binding. The partnership sounds nice for elected officials eager to prove the UK is competing in AI, but it doesn't tie anyone down, including the UK government: if Anthropic offers a deal on Claude, they can take it.

OpenAI already has offices in London, so deepening its investment doesn't seem out of the question. Signing the memorandum is also consistent with OpenAI's growing interest in working with governments desperate for the high-tech gloss of the AI industry. The logic follows that if OpenAI can get regulators dependent on its tools — say, a ChatGPT Gov specifically designed for government agencies — they'll be more inclined to favor the company in policy decisions. Or at the very least, making a show of collaborating early could win the company a sweeter deal down the road.