
AI gamble must be smart, not just fast
The future of data sharing changed drastically when the US concluded that 9/11 had been a failure of its intelligence agencies to act in concert on data that was already available – in effect, a "data fusion" crisis. The US Department of Homeland Security began setting up a robust network of "fusion centres" – state and locally run organisations that allow real-time sharing of critical intelligence and datasets between two or more government units to identify red flags.
Fast forward to 2025, and Artificial Intelligence (AI) is taking over such "fusion centres" worldwide – with seemingly endless possibilities. AI agents are replacing humans, and language models are generating insights that were previously out of reach. However, as with every technology, the use of AI, especially in the public sector and in legal matters, remains a double-edged sword and must be handled with care.
For instance, in June 2023, Steven Schwartz, an attorney with Levidow, Levidow & Oberman in New York, used ChatGPT for legal research and was fined by the judge for citing fictitious precedents with bogus names in his brief. The large language model (LLM) had been hallucinating – a failure mode in which these chatbots confidently fabricate information.
Similarly, in March 2024, New York City's Microsoft-powered MyCity chatbot was found to be giving incorrect legal information that could have led prospective business owners to break the law. It falsely claimed that landlords could openly discriminate based on tenants' source of income and that restaurant owners could take a share of their workers' tips.
Hence, when it comes to using AI, public institutions face a tough choice: should they rely on publicly hosted third-party services such as ChatGPT, adopt open-source models such as LLaMA, or invest in training their own proprietary models over the long run? Choosing the right AI strategy is crucial here.
In 2024, Air Canada's virtual assistant was found to have given a customer factually incorrect information about bereavement fares; the customer took the matter to a tribunal and was awarded damages.
Similarly, when Denmark rolled out AI algorithms in its social security system, the system was found to be biased against marginalised groups such as the elderly, low-income families, migrants and foreigners. Ninety per cent of the cases the AI flagged as fraud later turned out to be genuine, and the episode is now taught as a classic case study in discrimination and in breach of the European Union's AI Act provisions on social scoring.
Therefore, if a public sector organisation uses a third-party model, such as one trained by OpenAI, in its operations, there is a risk of bias against people of colour and other disadvantaged groups – because training data scraped from the internet, social media and discussion forums usually carries those biases itself.
A good AI strategy involves thoughtful, controlled, phased deployments built around well-planned use cases. For example, the Department of Homeland Security (DHS) began with publicly available AI tools to improve employee productivity while also publishing its AI vision and development roadmap. In parallel, it focused on developing specialised AI applications – such as one that trains officers handling asylum applications and security investigations.
By December 2024, DHS had launched DHSChat on its internal secure network – a chatbot that can draft reports, streamline tasks and help develop software and, unlike public large language models, keeps employee data protected and out of external training sets. As a best practice, and as mandated by the Trump administration's executive order, DHS also actively maintains an AI inventory listing the use cases in which it deploys AI.
For countries like Pakistan, our institutions could use a mix of public, open-source and proprietary models, depending on the task at hand. For search-style queries – using AI as the new Google – public models are usually fine, but drafting memos or summarising official reports on a public model is not advisable. For that, the Ministry of IT or other institutions can host open-source AI models in their own data centres, or fine-tune them into proprietary models; a minimal sketch of what local hosting can look like follows below.
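To make the idea concrete, here is a minimal sketch – purely illustrative, not a recommended production setup – of what hosting an open-weight model on an institution's own hardware can look like, using the Hugging Face transformers library in Python. The model name, prompt and report text are placeholders; a real deployment would add access controls, audit logging and a review workflow, and the institution would need the appropriate model licence.

    # Minimal sketch: summarising an internal report with a locally hosted
    # open-weight model, so no text leaves the organisation's own servers.
    # The model name is a placeholder; any licensed, instruction-tuned
    # open-weight model could be substituted.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
        device_map="auto",                         # use local GPU(s) if available
    )

    report_text = "...full text of the internal report goes here..."

    prompt = (
        "Summarise the following report in five bullet points "
        "for a senior officer:\n\n" + report_text
    )

    result = generator(prompt, max_new_tokens=300, do_sample=False)
    print(result[0]["generated_text"])

Because the model runs inside the ministry's own data centre, drafts and reports never pass through a third-party API – which is the main point of preferring self-hosted or fine-tuned models for sensitive work.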
For critical systems, it is advisable not to replace existing automation with AI outright. There should be a supervisory layer – human, automated or both – that fact-checks and verifies model output for hallucinations and bias before it reaches citizens; one possible shape of such a layer is sketched below. No matter how attractive the idea of an AI-driven public sector may be, these models must be thoroughly tested and their behaviour examined before deployment.
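One possible shape of such a supervisory layer, again only as an illustration: every AI-generated answer passes through verification checks, and anything that cannot be grounded in approved sources, or that touches a sensitive decision, is routed to a human officer instead of being released automatically. The check functions and source names below are hypothetical placeholders, not any particular product or agency's rules.

    # Illustrative sketch of a human-in-the-loop supervisor for AI output.
    # The checks are hypothetical placeholders; a real system would plug in
    # retrieval against approved records, bias audits and case-specific rules.
    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        question: str
        answer: str
        cited_sources: list[str] = field(default_factory=list)

    APPROVED_SOURCES = {"tax_code_2025", "customs_manual", "board_circulars"}
    SENSITIVE_TOPICS = ("fraud", "penalty", "asylum", "benefit denial")

    def grounded_in_approved_sources(draft: Draft) -> bool:
        # Every citation must come from a vetted, approved source.
        return bool(draft.cited_sources) and all(
            s in APPROVED_SOURCES for s in draft.cited_sources
        )

    def touches_sensitive_decision(draft: Draft) -> bool:
        # Flag answers that affect people's rights or liabilities.
        text = (draft.question + " " + draft.answer).lower()
        return any(topic in text for topic in SENSITIVE_TOPICS)

    def supervise(draft: Draft) -> str:
        """Decide whether a draft can be released or needs a human reviewer."""
        if not grounded_in_approved_sources(draft):
            return "HOLD: send to human reviewer (possible hallucination)"
        if touches_sensitive_decision(draft):
            return "HOLD: human officer must sign off before release"
        return "RELEASE: automated answer may be sent"

    # Example: a question of the kind the MyCity chatbot got wrong.
    draft = Draft(
        question="Can a restaurant owner keep part of workers' tips?",
        answer="No, tips belong to the workers under the applicable rules.",
        cited_sources=["board_circulars"],
    )
    print(supervise(draft))

The point is not the specific checks but the routing: the AI never has the final word on anything it cannot substantiate or that carries legal consequences.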
The AI-based transformation project currently being executed at the Federal Board of Revenue (FBR) will serve as a test case for other AI-aspiring public agencies.
The writer is a Cambridge graduate and works as a strategy consultant