

Integrating AI in your business can be a double-edged sword

Daily Maverick

20 hours ago



AI promises to transform businesses with smarter systems and personalised customer experiences. But AI, like any tool, requires skill to wield, and without proper scrutiny these tools can do more harm than good.

The AI sales pitch is irresistible: automated customer service, hyper-personalised engagement, razor-sharp inventory forecasting. 'AI isn't magic; it's math and data,' said Matthew Elliot, chief delivery officer at tech consultancy CloudSmiths. 'The more visibility you have into how it works and what it's doing, the safer your outcomes will be.'

'Safe' is the key word here. While AI can give any business an edge, it can just as easily entrench inequality, expose private data or simply, much like humans, make bad calls.

Riskier than you think

According to Wendy Tembedza, partner at Webber Wentzel, the issue is not whether to use AI – most businesses already do. The problems creep in when the risks of its use are not rigorously assessed. 'Businesses must carefully consider the intended use case of any AI tool and conduct a risk assessment prior to implementation,' she advised. 'This helps manage the risks that may arise from incorrect or inappropriate use.'

Those risks include exposure to legal liability if systems fail to meet ethical or compliance standards. If an AI tool inadvertently discriminates – say, by offering preferential pricing or marketing only to urban customers who shop online – it risks entrenching bias. 'In a society where South Africans have unequal access to the internet, AI tools that learn only from online behaviour run the risk of generating biased results,' Tembedza warned. 'The resulting insights may fail to reflect the actual behaviours and preferences of the wider customer base.'

Bad inputs, bad outcomes

Much of the risk begins with the data. AI tools need clean, well-structured and representative data to train on.
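As an illustration of what 'clean' data auditing can mean in practice, the checks below flag the three failure modes discussed here – incomplete, inaccurate and outdated records. This is a minimal sketch; the field names, the plausible-age range and the staleness threshold are all hypothetical, not taken from any company mentioned in this article.

```python
from datetime import date

def audit_record(record, required_fields, max_age_days=365, today=date(2025, 1, 1)):
    """Flag the three kinds of 'dirty' data: incomplete (missing fields),
    inaccurate (implausible values) and outdated (stale timestamps).

    All thresholds are illustrative assumptions.
    """
    problems = []
    # Incomplete: required fields that are missing or empty.
    for field in required_fields:
        if record.get(field) in (None, ""):
            problems.append(f"incomplete: missing {field}")
    # Inaccurate: a value outside any plausible range (hypothetical rule).
    age = record.get("age")
    if age is not None and not (0 <= age <= 120):
        problems.append(f"inaccurate: implausible age {age}")
    # Outdated: a record not refreshed within the allowed window.
    updated = record.get("last_updated")
    if updated is not None and (today - updated).days > max_age_days:
        problems.append(f"outdated: record older than {max_age_days} days")
    return problems

# A record failing all three checks at once.
record = {"name": "", "age": 214, "last_updated": date(2022, 3, 1)}
print(audit_record(record, required_fields=("name", "age")))
```

Rules like these would normally run as a gate before any record reaches a training pipeline, so that bad inputs are rejected rather than learned from.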
'Bad inputs make bad decisions,' Elliot said. 'If you don't know where your data came from or how your model behaves, you're flying blind.'

TymeBank chief technology officer Bruce Paveley agrees. 'Bank data is carefully selected and scrubbed to ensure we don't put dirty data into our AI tools,' he said. 'We also allow our AI tools to use learning from other reliable sources to avoid bias toward our own ideas.'

What is 'dirty' data? It is information that is inaccurate, incomplete or outdated, which hinders the performance and reliability of AI models.

Elliot reckons that bias is as much a human problem as a technical one: if a model isn't trained on diverse, representative data, it will leave people behind. To mitigate this, CloudSmiths has developed workflows that stress-test models against real-world scenarios, using tools such as Objective Lens to detect underrepresentation and misclassification. 'If your AI doesn't work for everyone, it doesn't really work,' Elliot said.

Inventory gains, infrastructure pains

Inventory management is one of AI's most lucrative use cases. Predictive tools can anticipate buying trends and automate restocking, which reduces waste. But again, the model is only as smart as the data it learns from. 'In practice, many retailers do not have centralised or well-organised data sets,' Tembedza said. 'If data used to train AI models are outdated, inconsistent or inaccurate, no level of technical sophistication can compensate.'

Even when the data is right, legacy IT systems often aren't. 'Many retailers continue to operate on legacy information technology systems that may not support the integration of the AI tools they intend to implement,' Tembedza added. A survey by cybersecurity company Fortinet found that up to 60% of businesses in Africa cited IT infrastructure as a major barrier to AI implementation.
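The underrepresentation problem Elliot describes – a model trained mostly on online, urban behaviour – can be surfaced with a simple comparison of training-data shares against known customer-base shares. The sketch below is purely illustrative: the `channel` field, the 10-point tolerance and the population shares are assumed numbers, not figures from CloudSmiths or its Objective Lens tool.

```python
from collections import Counter

def representation_gaps(records, field, population_shares, tolerance=0.10):
    """Return groups whose share of the training data falls short of their
    known population share by more than `tolerance` (absolute difference)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        actual = counts.get(group, 0) / total if total else 0.0
        if expected - actual > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Hypothetical training set dominated by online customers,
# while half the real customer base shops in store.
training = [{"channel": "online"}] * 90 + [{"channel": "in_store"}] * 10
print(representation_gaps(training, "channel",
                          {"online": 0.5, "in_store": 0.5}))
```

A check like this flags the in-store segment as badly underrepresented before the model is trained, which is the kind of early warning 'bad inputs make bad decisions' argues for.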
Modern, cloud-first businesses like TymeBank are less encumbered, but even their barriers are not always technical. 'The challenge was more adoption by our team members rather than a technical limitation,' said Paveley. 'Education and exposure were key enablers to get our team to embrace AI as a productivity tool rather than one that threatens their jobs.'

Leak risks of generative AI

For businesses that adopt AI, the most insidious risk does not necessarily come from a chatbot gone rogue, but from employees pasting confidential information into ChatGPT. 'Never assume off-the-shelf AI is private by default,' Elliot said. He sees companies frequently underestimating the risk of intellectual property exposure and data leaks from generative AI tools. His advice:

  • Use air-gapped or private models for anything involving intellectual property.
  • Train staff on what's safe to input.
  • Review AI governance legislation, such as the EU's AI Act.
  • Establish monitoring and evaluation frameworks from day one.

'There needs to be a policy to govern, rather than restrict, usage of AI in our business,' Paveley said. 'Our staff need to complete AI training to ensure they understand the power of productivity improvements but also for awareness of the associated risks.'

Boost engagement, don't alienate

Done right, the shiny promise of personalisation through AI can boost engagement. Done wrong, it reinforces stereotypes and alienates customers and clients. 'The most ethical AI is also the most useful,' Elliot said. 'If users trust the experience, they are more likely to engage with it – and that's good for everyone.'

'Personalisation shouldn't enforce stereotypes; it should surprise you, teach you something new, or reflect who you're becoming, not just who you've been.'

Helpful or harmful?

For all its potential, AI doesn't solve problems on its own, and certainly not by default.
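One way to act on 'train staff on what's safe to input' is to screen prompts before they leave the business. The sketch below redacts a few obviously sensitive patterns; the patterns, labels and example text are illustrative assumptions only – a real deployment would use a proper data-loss-prevention service with organisation-specific rules, not three regular expressions.

```python
import re

# Illustrative patterns only, not a complete or production-grade list.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "sa_id_number": re.compile(r"\b\d{13}\b"),       # South African ID: 13 digits
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def redact(text):
    """Replace likely-sensitive substrings before text is sent to an
    external generative AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

# Hypothetical prompt a staff member might otherwise paste into a chatbot.
prompt = "Summarise the complaint from thabo@example.com, ID 9001015009087."
print(redact(prompt))
```

Even a crude filter like this changes the default from 'everything leaves the building' to 'nothing sensitive leaves unless someone deliberately works around the policy', which is the governance posture Paveley describes.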
'The questions are similar to what we hear every day from our South African and UK clients,' Elliot said. 'Typically it's concerns around safety, fairness, cost or return on investment. It seems most businesses are desperately trying to minimise AI risk while staying competitive.'
