It's time to dream bigger: how organisations can improve and disrupt by building their own AI


The Guardian · 11-03-2025

Businesses have the data they need to fuel advanced AI – here's what they can do with it
In his seminal work The Innovator's Dilemma, Harvard Business School professor Clayton Christensen discusses how organisations that focus on sustaining innovations, such as improving existing products to meet the needs of current customers, can miss opportunities to develop more disruptive innovations. These could include new products, services or capabilities – which may have the potential to create entirely new markets.
It's a classic balancing act that business leaders have to manage – and one that becomes even more urgent in the era of AI, as Soraya Scott, Chief Operating Officer of Microsoft UK, says.
'Business leaders know they need to act now to get ahead on AI, but aren't always sure how to approach it effectively in the short and long term to maximise impact. As a COO, I understand this challenge. My role involves ensuring that systems are in place to enable optimal performance now, while also developing long-term strategic plans and initiatives that drive growth, innovation, and resilience.
'The good news is that a sophisticated approach to AI can achieve both.'
Employees want access to AI now – and won't wait for companies to catch up. Microsoft's Work Trend Index (WTI) shows three quarters of global knowledge workers are already using generative AI, and 78% of AI users are bringing their own AI tools to work (BYOAI). However, the biggest benefits of AI are unlocked when organisations securely and responsibly combine the latest AI models with their company's unique information and expertise and build their own more advanced and strategic AI applications, for adoption at scale.
'Organisations are sitting on a treasure trove of untapped potential within their data,' says Scott.
'With the right foundations, such as flexible and secure cloud infrastructure, robust data collection, and clean, diverse datasets, organisations can tap into the gold buried beneath their feet. By leveraging data as the fuel for more innovative AI development with Azure OpenAI, businesses can turn unstructured data into actionable insights, automate processes and enhance decision-making, as well as create new personalised customer experiences like never before.'
So, what innovative capabilities can organisations unleash when they build their own AI tools, products and services?

The opportunities and capabilities of advanced AI
Vision
Orbital is a groundbreaking legal tech business founded in 2018. Based in London and New York, it offers AI-powered solutions to automate the administrative burden of property-related legal work, effectively mimicking the diligence tasks a real estate lawyer performs today. Among its technologies, the business chose to develop its innovative solutions using OpenAI models including GPT-4o and o1 offered by Microsoft Azure.
Orbital has built a custom AI Agent, Orbital Copilot, to speed up the process around real estate deals, enabling property professionals to analyse property documents and generate reports in seconds. This proprietary solution uses AI vision capabilities of Azure to process lengthy, often handwritten and photocopied, legal and property documents. Orbital Copilot is saving legal teams 70% of the time it usually takes to conduct property diligence work.
This is just one example of how AI can augment human performance and accelerate processes by identifying, classifying and contextualising visual information. AI vision can automate an array of static image analysis and recognition tasks, carry out optical character recognition (OCR), and even real-time spatial analysis, checking for and reporting on the presence and movement of people or objects – whether that's retail items on a shelf or people in a sports stadium.
Speech
Mercedes-Benz uses the Azure OpenAI Service to enhance its MBUX Voice Assistant. This in-car voice control enables dynamic conversations, offering car owners a voice assistant that understands more commands and engages in interactive conversations. Based on GPT-4o and Microsoft Bing search, the MBUX Virtual Assistant unites the collective knowledge of the internet. For example: 'Hey Mercedes, when does the cherry blossom season start in Japan?' – 'And when does it start in Germany?'. Or 'How does a black hole work? Explain it so that children would understand.' Unlike standard voice assistants that often require specific commands, MBUX excels at handling follow-up questions while maintaining contextual understanding.
AI is incredibly useful for all kinds of speech related tasks, including transcription, language detection and translation. It can also generate human-like artificial audio for use in everything from audiobooks and announcements to podcasts. AI-powered real-time voice interaction is a game changer for customer service and call centre operations, enhancing efficiency and customer experience.
Decision making
In the retail sector, supermarket chain Iceland – one of Britain's fastest growing and most innovative food retailers – is using data and AI to enable 'business at the speed of thought'. To help surface the right store and business information to colleagues faster, Iceland uses Azure OpenAI to consolidate the organisation's knowledge base and create Genie, an app which employees use to find the information they need, conversationally.
Genie has already made a huge difference to how in-store colleagues are trained, as they can search using natural language, rather than being limited to exact terms or fuzzy matches. The answers are immediate, targeted and concise, providing a summarised response with links to the source documentation, making the experience quicker and more streamlined.
By emulating human-like reasoning and analysing vast amounts of historical and real-time data, AI can deliver new insights that help employees make smarter, better informed business decisions. This improves organisational agility by empowering employees to adapt to changing conditions faster, with immediate and intuitive access to key information.

Don't wait to get started
Given the range and impact of these more innovative custom-built applications, you can see why IDC's Worldwide AI and Generative AI Spending Guide forecasts that enterprise spending on AI solutions will grow five times faster than overall worldwide IT spending through to 2027.
Once solid data foundations are in place, the best place to start is by focusing on your organisation's most pressing needs. This could be improving customer service, optimising supply chains or enhancing decision making. Having clear objectives that tie back to your organisation's growth strategy is crucial for guiding AI proof of concept development.
However, to avoid falling prey to the innovator's dilemma, don't be afraid to dream bigger.
'Rather than selecting a single AI use case for implementation, consider taking a diverse, portfolio approach to AI adoption – developing multiple applications in parallel,' says Scott. 'This 'AI factory' approach mitigates risk, typically achieves faster time to value, and increases the chances of those 'eureka' moments from which new products and capabilities emerge.'
From enhancing customer experiences to creating entirely new solutions and services, advanced AI empowers organisations to dream bigger and achieve more. Now is the time to start unlocking the untapped value in your data and shaping a brighter future defined by innovation and growth.
To learn more, download the eBook Building AI Solutions that Drive Value. This content is paid for and supplied by the advertiser.


Related Articles

Lawyers warned to stop using ChatGPT to argue lawsuits after AI programs 'made up fictitious cases'

Daily Mail

7 hours ago


Lawyers in England and Wales have been warned they could face 'severe sanctions', including potential criminal prosecution, if they present false material generated by AI in court. The ruling, by one of Britain's most senior judges, comes on the back of a string of cases in which artificial intelligence software has produced fictitious legal cases and completely invented quotes.

The first case saw AI fabricate 'inaccurate and fictitious' material in a lawsuit brought against two banks, The New York Times reported. Meanwhile, the second involved a lawyer for a man suing his local council who was unable to explain the origin of the nonexistent precedents in his legal argument.

While large language models (LLMs) like OpenAI's ChatGPT and Google's Gemini are capable of producing long, accurate-sounding texts, they are technically only focused on producing a 'statistically plausible' reply. The programs are also prone to what researchers call 'hallucinations' – outputs that are misleading or lack any factual basis. AI agent and assistance platform Vectara has monitored the accuracy of AI chatbots since 2023 and found that the top programs hallucinate between 0.7 per cent and 2.2 per cent of the time – with others dramatically higher. However, those figures become astronomically higher when the chatbots are prompted to produce longer texts from scratch, with market leader OpenAI recently acknowledging that its flagship ChatGPT system hallucinates between 51 per cent and 79 per cent of the time if asked open-ended questions.

Dame Victoria Sharp, president of the King's Bench Division of the High Court, and Justice Jeremy Johnson KC authored the new ruling.
In it they say: 'The referrals arise out of the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court.

'The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practise before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.'

The pair argued that existing guidance around AI was 'insufficient to address the misuse of artificial intelligence'. Judge Sharp wrote: 'There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.' While acknowledging that AI remained a 'powerful technology' with legitimate use cases, she nevertheless reiterated that the technology brought 'risks as well as opportunities'.

In the first case cited in the judgment, a British man sought millions in damages from two banks. The court discovered that 18 out of 45 citations included in the legal arguments featured past cases that simply did not exist. Even in instances in which the cases did exist, the quotations were often inaccurate or did not support the legal argument being presented. The second case, which dates to May 2023, involved a man who was turned down for emergency accommodation by the local authority and ultimately became homeless. His legal team cited five past cases, which the opposing lawyers discovered simply did not exist – tipped off by the US spellings and formulaic prose style.
Rapid improvements in AI systems mean their use is becoming a global issue in the field of law, as the judicial sector figures out how to incorporate artificial intelligence into what is frequently a very traditional, rules-bound work environment.

Earlier this year a New York lawyer faced disciplinary proceedings after being caught using ChatGPT for research and citing a non-existent case in a medical malpractice lawsuit. Attorney Jae Lee was referred to the grievance panel of the 2nd US Circuit Court of Appeals in February 2025 after she cited a fabricated case about a Queens doctor botching an abortion in an appeal to revive her client's lawsuit. The case had been conjured up by OpenAI's ChatGPT, and the appeal was dismissed. The court ordered Lee to submit a copy of the cited decision after it was not able to find the case. She responded that she was 'unable to furnish a copy of the decision.' Lee said she had included a case 'suggested' by ChatGPT but that there was 'no bad faith, willfulness, or prejudice towards the opposing party or the judicial system' in doing so. The conduct 'falls well below the basic obligations of counsel,' a three-judge panel for the Manhattan-based appeals court wrote.

In June two New York lawyers were fined $5,000 after they relied on fake research created by ChatGPT for a submission in an injury claim against Avianca airline. Judge Kevin Castel said attorneys Steven Schwartz and Peter LoDuca acted in bad faith by using the AI bot's submissions – some of which contained 'gibberish' – even after judicial orders questioned their authenticity.

WTI-Brent spread at narrowest in almost two years as US prices rise

Reuters

9 hours ago


HOUSTON, June 6 (Reuters) - The spread between U.S. West Texas Intermediate and Brent crude futures narrowed to its tightest level since September 2023 on Friday, as U.S. prices rose on a sliding rig count and Canadian wildfires that cut supplies, analysts and traders said. U.S. futures ended the week 4.9% higher, while Brent futures rose 2.75%, as OPEC+ output increases put a cap on gains.

A narrower spread indicates a closed arbitrage window for traders and weaker shipping economics to Europe and Asia. The tighter spread can act as an early indicator that U.S. crude exports will likely fall in the next few weeks, assuming the premium for Brent crude remains weak. The inclusion of WTI Midland crude in the dated Brent index has meant that the spread between the two is increasingly correlated to freight rates, as the price of Dated Brent is set by WTI Midland on many trading days.

The spread between the two crude benchmarks narrowed to as little as $2.78 a barrel during the session on Friday. A discount of $4 per barrel is typically considered the level that encourages U.S. exports to Europe, as traders see an open arbitrage route. The spread has remained narrower than $4 a barrel since May 1, according to data from LSEG, partly due to concerns around U.S. production, helping keep more barrels onshore, according to Phil Flynn, senior analyst with Price Futures Group.

Since April, OPEC+ countries including Saudi Arabia and Russia have made or announced increases totaling 1.37 million barrels per day, or 62% of the 2.2 million bpd they aim to add back to the market. Meanwhile the U.S. oil and gas rig count, an early indicator of future output, fell by four to 559 in the week to June 6, the lowest since November 2021, energy services firm Baker Hughes (BKR.O) said in its closely followed report on Friday, stoking some concerns around future U.S. production. This has helped create pricing that encourages U.S. oil to remain in the domestic market, traders and analysts said.

Wildfires burning in Canada's oil-producing province of Alberta have further buoyed U.S. crude futures, analysts said, with Canadian daily crude production down by about 7%. "With Canadian wildfire season underway, further disruption could push the WTI/Brent spread below $3 this summer," said analysts at Sparta Commodities. "When you look at the WTI/Brent spread, you can see the concerns a little bit around leveling off U.S. production and concerns about export barrels tightening up," said Price Futures Group's Flynn.
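The $4-a-barrel rule of thumb the article describes can be sketched as a simple threshold check. This is an illustration only: the $4 threshold and the $2.78 Friday spread come from the article, while the absolute price levels below are hypothetical and chosen only so their difference matches the reported spread.

```python
# Illustrative sketch of the arbitrage rule of thumb described above: a WTI
# discount of roughly $4/bbl to Brent is said to cover freight and open the
# US-to-Europe export window. Price levels here are hypothetical.

ARB_THRESHOLD = 4.00  # $/bbl discount typically said to encourage US exports


def export_window_open(wti: float, brent: float,
                       threshold: float = ARB_THRESHOLD) -> bool:
    """Return True if the WTI discount to Brent is wide enough for arbitrage."""
    spread = brent - wti
    return spread >= threshold


# The spread narrowed to $2.78 during Friday's session, per the article,
# which under this rule keeps barrels onshore (window closed):
print(export_window_open(wti=64.58, brent=67.36))  # False: window closed
```

Under this heuristic, exports only become attractive again once the Brent premium widens back past the threshold, which is why traders read the narrowing spread as a leading indicator of falling U.S. crude exports.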

DOGE used flawed AI tool to 'munch' Veteran Affairs contracts, report claims

The Independent

11 hours ago


Employees in the Department of Government Efficiency reportedly used a flawed artificial intelligence model to determine the necessity of contracts in the Department of Veterans Affairs, resulting in hundreds of contracts, valued at millions of dollars, being canceled.

Given only 30 days to implement President Donald Trump's executive order directing DOGE to review government contracts and grants to ensure they align with the president's policies, an engineer in DOGE rushed to create an AI to assist in the task. Engineer Sahil Lavingia wrote code which told the AI to cancel, or in his words 'munch', anything that wasn't 'directly supporting patient care' within the agency. However, neither he nor the model had the knowledge to make those decisions. 'I'm sure mistakes were made,' he told ProPublica. 'Mistakes are always made.'

One of the key problems was that the AI only reviewed the first 10,000 characters (roughly 2,500 words) of each contract to determine whether it was 'munchable' – Lavingia's term for whether the task could be done by VA staffers rather than outsourced, ProPublica reported. Experts who reviewed the code also told ProPublica that Lavingia did not clearly define many critical terms, such as 'core medical/benefits', and used vague instructions, leading to multiple critical contracts being flagged as 'munchable'. For example, the model was told to kill DEI programs, but the prompt failed to define what DEI was, leaving the model to decide. At another point in the code, Lavingia asked the AI to 'consider whether pricing appears reasonable' for maintenance contracts, without defining what 'reasonable' means. In addition, the AI was built on an older, general-purpose model not suited for the complicated task, which caused it to hallucinate, or make up, contract amounts, sometimes believing they were worth tens of millions as opposed to thousands.
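The truncation flaw described above is easy to see in a minimal sketch. This is not DOGE's actual code; the function name, prompt wording, and sample contract are invented for illustration. Only the 10,000-character review window comes from the report.

```python
# Illustrative sketch (NOT the actual DOGE code) of the truncation flaw
# reported above: only the first 10,000 characters of each contract are
# passed to the model, so any clause beyond that point is never seen.

CHAR_LIMIT = 10_000  # reported review window, roughly 2,500 words


def build_review_prompt(contract_text: str) -> str:
    """Truncate contract text to the review window before prompting a model."""
    excerpt = contract_text[:CHAR_LIMIT]
    return (
        "Decide whether this contract is 'munchable' (could be done by "
        "VA staff rather than outsourced):\n\n" + excerpt
    )


# A decisive clause placed after the cutoff is silently dropped:
contract = ("boilerplate " * 1000) + "CLAUSE: supports direct patient care."
prompt = build_review_prompt(contract)
print("patient care" in prompt)  # False: the clause never reaches the model
```

Because the truncation happens silently, the model's verdict looks complete even when the text that should have exempted a contract was cut off, which is consistent with the miscategorisations the experts described.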
Cary Coglianese, a professor at the University of Pennsylvania who studies governmental use of AI, told ProPublica that understanding which jobs could be done by a VA employee would require a 'sophisticated understanding of medical care, of institutional management, of availability of human resources' – all things the AI could not do. Lavingia acknowledged the AI model was flawed, but he assured ProPublica that all 'munchable' contracts were vetted by other people.

The VA initially announced, in February, that it would cancel 875 contracts. But various veterans' affairs advocates sounded the alarm, warning that some of those contracts related to safety inspections at VA medical facilities, direct communications with veterans about benefits, and the VA's ability to recruit doctors. One source familiar with the situation in the department told the Federal News Network that some cuts demonstrated a 'communication breakdown' between DOGE advisers, VA leaders, and lawmakers who oversee the VA. The VA soon walked that number back, instead announcing in March it would cancel approximately 585 'non-mission-critical or duplicative contracts', redirecting around $900 million back to the agency.

Lavingia, who was fired from DOGE after approximately 55 days, wrote about his experience on his blog and released the code he used at the VA on GitHub.
