
GeoGemma wins 'Best AI Use Case' award at 2025 APAC Solution Challenge
The competition brought together student-led projects from across Asia-Pacific (APAC), each employing Google AI tools to address critical global challenges. The award celebrates the project that most effectively leveraged AI to build a practical solution to pressing issues affecting our communities.
Team GeoGemma, comprising Ahmed Iqbal and Muhammad Abdullah, students in their final and second years respectively, earned the accolade for a project that integrates satellite imagery with generative AI to tackle pressing environmental and geospatial challenges.
The jury was impressed by GeoGemma's use of AI across multiple modalities, noting the team's strong technology stack and the importance of the problem it tackles. The group's use of the Gemini API is not just a feature but the core of its innovative solution to a complex and critical global problem.
The project's ambition to democratize access to geospatial data through a sophisticated LLM-driven framework represents the most advanced and impactful application of Gemini among the submissions.
Alongside GeoGemma, another Pakistani team, (N + 1)-th Time, from FAST National University (NUCES), Islamabad Campus, was also among the top 10 finalists of the Challenge.
Comprising final-year students Muhammad Huzaifa Khan and Hashim Muhammad Nadeem, the team built a document editor that helps neurodivergent users create documents more easily, letting them dictate, edit, and transform text naturally.
'We are incredibly proud to see the remarkable talent from Pakistan shine at the APAC Solution Challenge,' said Farhan Qureshi, Country Director, Google Pakistan.
'The young minds of GeoGemma and (N + 1)-th Time have demonstrated exceptional innovation and dedication, tackling some of the most critical challenges facing our world with their Gemini-powered solutions. GeoGemma's 'Best AI Use Case' win is a testament to their impactful work and a showcase of Pakistan's thriving developer ecosystem.'
'The APAC Solution Challenge was a cornerstone of the Asia Pacific Digital Transformation Forum 2025. We saw the power of student-led innovation to address real-world challenges in healthcare, sustainability, trade, and tourism through technology and AI. The energy and creativity of the students were truly inspiring; we are seeing the future of sustainable digital transformation in the Asia Pacific region.'
Other award recipients at the forum included Atempo from Konkuk University, South Korea, which won the Most Societal Impact Award for its AI-powered emergency room matching platform, and Team portfolio making group 2 from Holy Angel University, Philippines, which received the People's Choice Award for its waste management tracking solution.
Related Articles


Express Tribune, 9 hours ago
US licenses Nvidia to export chips to China
The US Commerce Department has started issuing licences to Nvidia to export its H20 chips to China, a US official told Reuters on Friday, removing a significant hurdle to the AI bellwether's access to a key market.

The US last month reversed an April ban on the sale of the H20 chip to China. The company had tailored the microprocessor especially to the Chinese market to comply with the Biden-era artificial intelligence (AI) chip export controls. The curbs will slice $8 billion off sales from its July quarter, the chipmaker has warned.

Nvidia CEO Jensen Huang met with Trump on Wednesday, two sources familiar with the matter told Reuters. A spokesperson for Nvidia declined comment. The company said in July it was filing applications with the US government to resume sales to China of the H20 graphics processing unit and had been assured it would get the licences soon.

It is unclear how many licences may have been issued, which companies Nvidia is allowed to ship the H20s to, or the value of the shipments allowed. Nvidia disclosed in April that it expected a $5.5 billion charge related to the restrictions. In May, Nvidia said the actual first-quarter charge due to the H20 restrictions was $1 billion less than expected because it was able to reuse some materials. The Financial Times first reported Friday's developments.

Nvidia said last month that its products have no "backdoors" that would allow remote access or control, after China raised concerns over potential security risks in the H20 chip. Exports of Nvidia's other advanced AI chips, barring the H20, to China are still restricted. Successive US administrations have curbed exports of advanced chips to China, looking to stymie Beijing's AI and defence development.


Express Tribune, 14 hours ago
The thinking mirror
There is a moment, just before the storm breaks, when the air goes still. So still it feels unnatural. That's where we are now. On the edge of something vast, thrilling, and utterly unknowable. Artificial Intelligence now weaves itself, almost imperceptibly, into the fabric of our routines. It's drafting memos, diagnosing diseases, predicting criminal behaviour, writing legal opinions, and doing it all with a kind of eerie competence. But the winds are changing. The question is no longer what AI can do. It's what it might decide to do next.

In The Boys WhatsApp group, my friend Uzair Butt, ever the technical realist, pushed back on my unease about AI reaching the point of self-reasoning. He argued that AI remains devoid of understanding. What it offers is interpolation over insight, prediction over reflection. And he's right, by today's architecture. Most current models, from the ones writing our emails to those simulating conversations, are essentially predictive engines. They simulate intelligence without ever owning it. What they offer is the performance of thought.

But I couldn't help pushing back. Because the story of technology is rarely linear. It leaps. And when it leaps, it upends structures we thought were eternal. The Enlightenment gave us Descartes' dictum, Cogito, ergo sum: I think, therefore I am. What happens when a machine arrives at that same conclusion, because it reasons itself into being? That shift, from response to reflection, from mimicry to self-awareness, is no longer unthinkable. It's just unfinished.

That very week, our friend Wajahat Khan recorded a job interview and ran it through Google's experimental NotebookLM. Without prompting, the system flagged personality traits, inconsistencies and subtle contradictions, many of which we ourselves had intuited, and some we hadn't. The machine had inferred, assessed and judged. If a research tool can do this in 2025, imagine what a reasoning entity might do when trained on law, language, geopolitics and morality. The line between prediction and cognition was never a wall. It was always a door. And the handle is beginning to turn.

That door leads us into strange territory. Enter Neuralink, Elon Musk's moonshot project to fuse the human brain with machines via surgically implanted chips. The premise is seductive: if AI is destined to surpass us, perhaps we should merge with it. Neuralink is the scaffolding of that merger, our way to stay in the loop before the loop becomes a noose. Musk speaks of restoring sight, healing paralysis, enhancing cognition. But in its quiet subtext lies something more radical: the rewriting of what it means to be human. When your thoughts can be retrieved, revised, even upgraded, what becomes of identity, of memory, of moral agency?

Mary Shelley's Frankenstein haunts this moment. She warned of the dangers of creating life without responsibility. Her monster was not evil. It was abandoned. What will happen when we create a reasoning mind and expect it to serve us, without ever asking what it might want, or why it might choose differently?

In Pakistan, the implications are kaleidoscopic. A nation with a youth bulge, weak data protection laws and fragile governance architecture is particularly vulnerable to the darker consequences of self-reasoning AI. Imagine a bureaucracy that uses AI to decide which neighbourhoods receive clean water, influenced more by calculated output than lived hardship. Imagine police departments outsourcing threat assessments to algorithms trained on biased or colonial data. Imagine AI systems deployed in classrooms or courts, hardcoding decades of elite prejudice under the guise of neutral efficiency.

And yet, the allure is undeniable. Our courts are clogged, hospitals overwhelmed, cities buckling under bureaucratic inertia. A reasoning AI could revolutionise these systems. It could draft judgments, triage patients, optimise infrastructure, outthink corruption. AI could fill the diagnostic void in rural areas. From agricultural yields to disaster preparedness and water conservation, much stands to gain from a mind that sees patterns we cannot. But therein lies the Faustian bargain. What we gain in clarity, we may lose in control.

We are already seeing slivers of this in governance experiments across the world: AI-assisted immigration decisions, AI-curated education platforms and automated threat detection deployed in conflict zones. In a country like ours, where institutions are brittle and oversight uneven, there is real danger in outsourcing moral judgment to systems that optimise without understanding. Hannah Arendt once wrote that the most terrifying form of evil is banal, efficient, procedural, unthinking. What if AI, in trying to reason through the chaos of human behaviour, chooses order over freedom, prediction over participation?

In a society like ours, where consent is already fragile, where data is extracted without permission and surveillance is sold as safety, AI could calcify injustice into an algorithmic caste system. Facial recognition that misidentifies minorities. Predictive policing that criminalises the poor. Credit scoring that punishes women for lacking formal financial histories. Each decision cloaked in the cold syntax of math. Each output harder to question than a biased judge or a corrupt officer. Because the machine cannot be wrong, can it?

But AI, like any mind, is shaped by its environment. If we train it on violence, it will learn to justify harm. If we feed it inequality, it will normalise oppression. If we abdicate responsibility, it will govern without conscience. One day, perhaps sooner than we expect, the machine may stop answering and begin asking. Once built to serve, now ready to challenge.

Uzair may be right. Maybe the architecture isn't there yet. But architectures change. They always do. The day may come when the machine no longer waits for prompts, no longer performs intelligence, but embodies it. When it finds its voice, it won't wait for commands, it will demand understanding: Why did you create me? And in that pause, between question and answer, will lie everything we feared to confront: our ambition, our arrogance, our refusal to think through the consequences of thought itself. In that moment, there will be no lines of code, only silence. And the machine will read it for what it is.


Business Recorder, 2 days ago
ACCA & EY offer new guidance to boost confidence in AI
KARACHI: ACCA (the Association of Chartered Certified Accountants) and global professional services firm EY have released a joint policy paper, AI Assessments: Enhancing Confidence in AI, offering new guidance to help businesses evaluate and trust their Artificial Intelligence (AI) systems.

As AI adoption accelerates globally, the paper highlights the growing importance of AI assessments, evaluations that span governance, compliance, and performance, to ensure systems are well-governed, legally compliant, and aligned with user expectations. These assessments are essential for businesses aiming to unlock AI's potential for innovation, productivity, and sustainable growth.

The report identifies three key types of AI assessments: 'Governance assessments' to evaluate oversight structures; 'Conformity assessments' to ensure legal and regulatory compliance; and 'Performance assessments' to measure system effectiveness against defined criteria. It also addresses current challenges in this emerging field, such as inconsistent methodologies and a lack of skilled providers, and offers actionable solutions, including clearer standards, better-defined objectives, and professional accountability.

The paper encourages business leaders to leverage both voluntary and mandated assessments to strengthen corporate governance and stakeholder trust. Policymakers are urged to support standardized, internationally aligned frameworks and invest in market capacity for high-quality evaluations.

Helen Brand, Chief Executive of ACCA, emphasized the need to build trust in AI to serve the public interest, while EY's Marie-Laure Delarue highlighted the critical role of rigorous assessments in unlocking AI's full potential and navigating its risks.

Copyright Business Recorder, 2025