Latest news with #dataScientists


Forbes
4 days ago
- Business
- Forbes
Predictive AI Must Be Valuated – But Rarely Is. Here's How To Do It
Most predictive AI projects neglect to estimate the potential profit – a practice known as ML valuation – and that spells project failure. Here's the how-to.

To be a business is to constantly work toward improved operations. As a business grows, this usually leads to the possibility of using predictive AI, which is the kind of analytics that improves existing, large-scale operations. But the mystique of predictive AI routinely kills its value. Rather than focusing on the concrete win that its deployment could deliver, leaders get distracted by the core tech's glamor. After all, learning from data to predict is sexy. This in turn leads to skipping a critical step: forecasting the operational improvement that predictive AI operationalization would deliver. As with any kind of change to large-scale operations, you can't move forward without a credible estimate of the business improvement you stand to gain – in straightforward terms like profit or other business KPIs. Skipping that estimate makes deployment a shot in the dark. Indeed, most predictive AI launches are scrubbed.

So why do most predictive AI projects fail to estimate the business value, much to their own demise? Ultimately, this is not a technology fail – it's an organizational one, a glaring symptom of the biz/tech divide. Business stakeholders delegate almost every aspect of the project to data scientists. Meanwhile, data scientists as a species are mostly stuck on arcane technical metrics, with little attention to business metrics. The typical data scientist's training, practice, shop talk and toolset omit business metrics. Technical metrics define their comfort zone. Estimating the profit or other business upside of deploying predictive AI – aka ML valuation – is only a matter of arithmetic. It isn't the "rocket science" part, the ML algorithm that learns from data. Rather, it's the much-needed prelaunch stress-testing of the rocket.

Say you work at a bank processing 10 million credit card and ATM card transactions each quarter. With 3.5% of the transactions fraudulent, the pressure is on to predictively block those transactions most likely to fall into that category. With ML, your data scientists have developed a fraud-detection model that calculates a risk level for each transaction. Within the most risky 150,000 transactions – that is, the 1.5% of transactions the model considers most likely to be fraudulent – 143,000 are fraudulent. The other 7,000 are legitimate. So, should the bank block that group of high-risk transactions? Sounds reasonable off the cuff, but let's actually calculate the potential winnings. Suppose those 143,000 fraudulent transactions represent $18,225,000 in charges – that is, about $127 each on average. That's a lot of fraud loss to be saved by blocking them. But what about the downside of blocking them? If it costs your bank an average of $75 each time you wrongly block due to cardholder inconvenience – which would be the case for each of the 7,000 legit transactions – that comes to $525,000. That barely dents the upside, with the net win coming to $17,700,000. So yeah, if you'd like to gain almost $18 million, then block those 1.5% most risky transactions. This is the monetary savings of fraud detection, and a penny saved is a penny earned. But that doesn't necessarily mean that 1.5% is the best place to draw the line. How much more might we save by blocking even more?
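Since the valuation at this stage really is only arithmetic, the whole example fits in a few lines. The following Python sketch simply restates the numbers above; the function name and structure are illustrative, not the article's:

def net_savings(n_blocked_fraud, avg_fraud_amount, n_blocked_legit, cost_per_bad_block):
    # Upside: fraud losses avoided by blocking fraudulent transactions.
    fraud_loss_avoided = n_blocked_fraud * avg_fraud_amount
    # Downside: cost of inconveniencing legitimate cardholders.
    inconvenience_cost = n_blocked_legit * cost_per_bad_block
    return fraud_loss_avoided - inconvenience_cost

# Blocking the 1.5% most risky of 10 million quarterly transactions:
# 143,000 fraudulent (totaling $18,225,000) and 7,000 legitimate,
# at $75 per wrongful block.
win = net_savings(143_000, 18_225_000 / 143_000, 7_000, 75)
print(f"${win:,.0f}")  # $17,700,000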
The more we block, the more lower-risk transactions we block – and yet the net value might continue to increase if we go a ways further. Where to stop? The 2% most risky? The 2.5% most risky? To navigate the range of predictive AI deployment options, you've just got to look at it:

A savings curve comparing the potential money saved by blocking the most risky payment card transactions with fraud-detection models; the performance of three competing models is shown.

This shows the monetary win for a range of deployment options. The vertical axis represents the money saved with fraud detection – based on the same kind of calculations as those in the previous example – and the horizontal axis represents the portion of transactions blocked, from most risky (far left) to least risky (far right). This view has zoomed in on the range from 0% to 15%, since a bank would normally block at most only the top, say, two or three percent. The three colors represent three competing ML models: two variations of XGBoost and one random forest (these are popular ML methods). The first XGBoost model is the best one overall. The savings are calculated over a real collection of e-commerce transactions, as were the previous example's calculations.

Let's jump to the curve's peak. We would maximize the expected win to more than $26 million by blocking the top 2.94% most risky transactions according to the first XGBoost model. But this deployment plan isn't a done deal yet – there are other, competing considerations. First, consider how often transactions would be wrongly blocked. It turns out that blocking that 2.94% would inconvenience legit cardholders an estimated 72,000 times per quarter. That adverse effect is already baked into the expected $26 million estimate, but it could incur other intangible or longer-term costs; the business doesn't like it. But the relative flatness you can see near the curve's peak signals an opportunity: if we block fewer transactions, we could greatly reduce the expected number wrongly blocked with only a small decrease in savings. For example, it turns out that blocking 2.33% rather than 2.94% cuts the number of estimated bad blocks in half to 35,000, while still capturing an expected $25 million in savings. The bank might be more comfortable with this plan.

As compelling as these estimated financial wins are, we must take steps to shore up their credibility, since they hinge on certain business assumptions. After all, the actual win of any operational improvement – whether driven by analytics or otherwise – is only certain after it's been achieved, in a "post mortem" analysis. Before deployment, we're challenged to estimate the expected value and to demonstrate its credibility. One business assumption within the analysis described so far is that unblocked fraudulent transactions cost the bank the full magnitude of the transaction. A $100 fraudulent transaction costs $100 (while blocking it saves $100). And a $1,000 fraudulent transaction indeed costs ten times as much. But circumstances may not be that simple, and they may be subject to change. For example, certain enforcement efforts might serve to recoup some fraud losses by investigating fraudulent transactions even after they were permitted. Or the bank might hold insurance that covers some losses due to fraud. If there's uncertainty about exactly where this factor lands, we can address it by viewing how the overall savings would change if such a factor changed.
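As an aside, here is a minimal Python sketch of how a savings curve of this kind could be computed from a model's risk scores. The inputs (scores, labels, amounts) are assumed placeholders, and this illustrates the general idea rather than the authors' actual code:

import numpy as np

def savings_curve(scores, is_fraud, amounts, cost_per_bad_block=75.0):
    # Sort transactions from most to least risky according to the model.
    order = np.argsort(-np.asarray(scores))
    fraud = np.asarray(is_fraud, dtype=float)[order]
    amount = np.asarray(amounts, dtype=float)[order]
    # Blocking a fraud saves its amount; blocking a legit transaction
    # costs the per-incident inconvenience cost.
    per_transaction = fraud * amount - (1.0 - fraud) * cost_per_bad_block
    # Cumulative savings as the decision boundary moves right.
    savings = np.cumsum(per_transaction)
    fraction_blocked = np.arange(1, len(savings) + 1) / len(savings)
    return fraction_blocked, savings

# The peak of the curve identifies the most profitable decision boundary:
# fraction, savings = savings_curve(scores, labels, amounts)
# best_boundary = fraction[np.argmax(savings)]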
Here's the curve when fraud costs the bank only 80% rather than 100% of each transaction amount:

The same chart, except with each unblocked fraudulent transaction costing only 80% of the transaction amount, rather than 100%.

It turns out the peak decreases from $26 million down to $20 million. This is because there's less money to be saved by fraud detection when fraud itself is less costly. But the position of the peak has moved only a little: from 2.94% to 2.62%. In other words, not much doubt is cast upon where to draw the decision boundary.

Another business assumption we have in place is the cost of wrongly blocking, currently set at $75 – since an inconvenienced cardholder will be more likely to use their card less often (or cancel it entirely). The bank would like to decrease this cost, so it might consider taking measures accordingly. For example, it could consider providing a $10 "apology" gift card each time it realizes its mistake – an expensive endeavor, but one that might turn out to decrease the net cost of wrongly blocking from $75 down to $50. Here's how that would affect the savings curve:

The same chart, except with each wrongly blocked transaction costing only $50, rather than $75.

This increases the peak estimated savings to $28.6 million and moves the peak from 2.94% up to 3.47%. Again, we've gained valuable insight: this scenario would warrant a meaningful increase in how many transactions are blocked (drawing the decision boundary further to the right), but would only increase profit by $2.6 million. Considering that this guesstimated cost reduction is a pretty optimistic one, is it worth the expense, complexity and uncertainty of even testing this kind of "apology" campaign in the first place? Perhaps not.

For a predictive AI project to defy the odds and stand a chance at successful deployment, business-side stakeholders must be empowered to make an informed decision as to whether, which and how: whether the project is ready for deployment, which ML model to deploy and with what decision boundary (percent of cases to be treated versus not treated). They need to see the potential win in terms of business metrics like profit, savings or other KPIs, across a range of deployment options. And they must see how certain business factors that could be subject to change or uncertainty affect this range of options and their estimated value. We have a name for this kind of interactive visualization: ML valuation. This practice is the main missing ingredient in how predictive AI projects are typically run. ML valuation stands to rectify today's dismal track record for predictive AI deployment, boosting the value captured by this technology up closer to its true potential. Given how frequently predictive AI fails to demonstrate a deployed ROI, the adoption of ML valuation is inevitable. In the meantime, it will be a true win for professionals and stakeholders to act early, get out ahead of it and differentiate themselves as value-focused practitioners of the art.
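This kind of sensitivity analysis amounts to re-running the same savings calculation with the business assumptions exposed as parameters. A hedged sketch, extending the earlier savings_curve function with the two factors varied above (the fraction of each fraud amount actually lost, and the cost per wrongful block); the parameter names are illustrative:

import numpy as np

def savings_curve(scores, is_fraud, amounts,
                  fraud_cost_fraction=1.0,   # 0.8 if 20% of fraud losses are recouped or insured
                  cost_per_bad_block=75.0):  # 50.0 if an "apology" campaign reduces the net cost
    order = np.argsort(-np.asarray(scores))
    fraud = np.asarray(is_fraud, dtype=float)[order]
    amount = np.asarray(amounts, dtype=float)[order]
    per_transaction = (fraud * amount * fraud_cost_fraction
                       - (1.0 - fraud) * cost_per_bad_block)
    return np.cumsum(per_transaction)

# Each scenario yields its own curve; comparing their peaks shows how far the
# best decision boundary shifts when an assumption changes:
# baseline = savings_curve(scores, labels, amounts)
# recouped = savings_curve(scores, labels, amounts, fraud_cost_fraction=0.8)
# apology  = savings_curve(scores, labels, amounts, cost_per_bad_block=50.0)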


The Sun
14-05-2025
- Business
- The Sun
AI is being used by British Airways planes to avoid bad weather and flight delays
BRITISH Airways' £7billion investment in AI has led to the airline's flight punctuality soaring to record levels - with 86 per cent of jets now taking off on time. The impressive first-quarter 2025 figures compare with a punctuality record of just 46 per cent in 2008, thanks to new cutting-edge technology first revealed in The Sun.

BA services hit more than 90 per cent on-time departures on 38 of the 89 operational days. And across April, two thirds of all the airline's Heathrow departures left ahead of time - more than double the 2023 figures. Last year The Sun first revealed BA's £7bn investment programme - including £100m developing digital tools and apps to boost operational performance. More than 100 data scientists are now employed by the airline.

BA can now allocate aircraft landing at Heathrow to stands based on a live analysis of the onward travel plans of customers on any given flight - cutting missed connections and disruption to onward journeys. This has saved 160,000 minutes of delays. A real-time weather program proactively reroutes aircraft to avoid problems. This has saved 243,000 minutes of delays. New apps for pilots, cabin crew teams and aircraft dispatch teams will help speed up aircraft departures.

The Sun was the first media outlet invited inside the airline's new security-restricted nerve centre to showcase the cutting-edge technology making flying better. We revealed how six separate computer systems for different areas of BA have been jettisoned for one giant global interface - dubbed 'Mission Control' - which unites the airline, streamlining services and tracking aircraft movements. The live data is beamed into BA's Heathrow hub on huge screens, allowing bosses to take pre-emptive action to limit operational hazards. "It's like an elaborate game of computer puzzle Tetris," Richard Anderson, the airline's Director of Global Operations, told The Sun. Ground-breaking immediate responses to limit disruption are now the norm - meaning a smoother and seamless travel experience for flyers.

Thrilled BA chairman Sean Doyle said yesterday that "AI, forecasting, optimisation and machine learning" have transformed the airline's operational performance. At an aviation summit in Pittsburgh in the US, he said: "Improving operational performance is a key part of our investment programme because we know the impact delays and disruption can have on our customers.

"Whilst disruption to our flights is often outside of our control, our focus has been on improving the factors we can directly influence and putting in place the best possible solutions for our customers when it does happen.

"That's why we've invested £100m in our own operational resilience, putting funding into technology and tools, and devising a better way of working on the ground at Heathrow as well as creating an additional 600 operational roles into the airport.

"The tech which colleagues have at their fingertips has been a real game-changer for performance, giving them the confidence to make informed decisions for our customers based on a rapid assessment of vast amounts of data.

"It's exciting that our industry is able to harness this capability, which will develop even further in the months and years to come."


Forbes
12-05-2025
- Business
- Forbes
Why Companies Are Losing AI Talent — And How Leaders Can Stop It
Companies keep blaming the AI talent shortage on competition and compensation. But the real problem may lie within — in rigid cultures, outdated leadership and a failure to build environments where AI professionals actually want to stay.

'We can't find enough AI talent.' That's one of the major dilemmas in boardrooms around the world right now, as AI continues on an upward trajectory. The job postings are live, compensation is competitive and tools are top-tier. Yet still, machine learning engineers and data scientists walk away — or worse, never apply for these roles. But what if this isn't a hiring crisis at all? What if it's a leadership one? While the spotlight has been on salaries and skills shortages, some experts argue that it isn't just that AI professionals are hard to hire, but also that they're easy to lose. The argument is that this phenomenon isn't because these professionals aren't engaged with the work, but because the environment they're asked to work in is often fundamentally misaligned with how AI innovation thrives.

'AI professionals' rare expertise gives them unprecedented leverage in today's market,' noted Erika Glenn, a C-suite executive and board advisor. 'They can command high compensation while prioritizing workplace flexibility elsewhere. Many companies maintain rigid policies under leadership that rarely understands AI culture's unique needs — and that disconnect pushes experts to leave.' The case today, at least for a large chunk of the industry, is that AI talent isn't chasing ping-pong tables or inflated job titles. They're going after meaning, autonomy and a future-focused mission. When they don't find that, they leave — often to start their own ventures or join smaller companies with more adaptive cultures.

According to Michelle Machado, a neurochange solutions consultant and global educator, the deeper issue lies with legacy mindsets. 'Too many leaders are still operating with 20th-century thinking while trying to compete in a 21st-century AI race,' she told me in an interview. 'It's like watching companies in the year 2000 debate whether they needed a website.' Machado pointed to a telling stat: nearly 40% of companies are failing at AI implementation because leadership doesn't understand its potential. This misunderstanding manifests in all the wrong ways — treating AI like a side project, demanding office-based routines for remote-ready work, or imposing waterfall processes on what should be experimental systems.

Glenn added that many leaders 'still treat AI development like traditional software engineering, enforcing rigid schedules and micromanagement that stifle innovation.' That kind of control-heavy approach repels the very minds companies are desperate to retain. Worse, it builds resentment. When leadership demands agility from tech teams but clings to bureaucracy in its own decision-making, AI experts read the signal loud and clear: this is not a place where real innovation is welcome. A common misconception is that AI professionals are simply poached by bigger paychecks. But Machado challenges that. 'Unless leaders build a culture of experimentation, collaboration, and future-focused thinking, even the best AI hires won't stay,' she said. 'It's culture, not just compensation, that ultimately attracts and retains top talent.' Glenn agrees, noting that great leaders 'foster cultures of open dialogue and shared incentives, where controversial viewpoints are welcomed without repercussion.'
They balance autonomy with accountability, shield teams from politics and reward experimentation, even when it fails. That environment is rare. But when it exists, it creates gravity that retains talent. And the organizations drawing and keeping the brightest AI minds are the ones with that kind of gravity, not necessarily those with the most advanced models.

When it comes to retaining talent, Machado's advice is that transparency is what fuels trust. 'People stay when they understand the impact of their work and how it connects to broader business outcomes,' she said. In a field as cross-functional and fast-paced as AI, where models must touch operations, compliance, customer data and ethics, that transparency must be baked into every layer of leadership. It also requires vulnerability: a willingness to admit what the company doesn't yet know and a commitment to build that knowledge together. 'When people feel seen, heard and valued,' Machado explained, 'they don't just contribute — they commit.' This is especially vital in large enterprises, where AI efforts often suffocate under organizational silos. 'Silos don't just slow innovation,' she added. 'They stall transformation.'

Losing a top AI engineer doesn't just mean opening another job requisition — it sets off a chain reaction. Projects stall, morale dips and, perhaps worst of all, institutional knowledge walks out the door. 'Replacing technical professionals can cost between one-half to two times their annual salary,' said Glenn, citing Gallup. SHRM confirms these costs across industries, especially in high-skill domains like AI and cybersecurity. But the true impact isn't financial alone. 'Team morale deteriorates, skillset imbalances emerge, and product development suffers,' she warned. Machado put it bluntly: 'Failing to retain AI talent comes at a steep price, not just in turnover, but in missed relevance.' She compared it to the cautionary tales of Kodak and Blockbuster — companies that didn't fail for lack of talent, but for lack of leadership readiness. 'In this market, you either evolve or dissolve. There is no middle ground.' Machado's argument isn't exaggerated at all, according to the stats. In a 2024 Bain & Company survey, 75% of executives admitted they don't fully understand how to scale AI within their organizations. And that uncertainty at the top trickles down — creating friction, confusion and eventually, flight.

So what makes AI talent stay? Both Glenn and Machado agree that it's not just about technical ability but about how leaders show up. 'The best leaders create environments of genuine autonomy,' Glenn said. 'They demonstrate problem-solving engagement, regardless of their technical depth, shield their teams from politics, balance accountability with empowerment and treat failure as an important part of the process.' For Machado, great leadership begins with trust and human connection. 'AI may run on data, but exceptional outcomes still run on trust,' she said. 'When leaders share purpose, invite diverse perspectives and celebrate progress over perfection, teams move from compliance to commitment.' In these types of environments, AI professionals don't just build better models — they build momentum, innovate and, most importantly, stay.

The bottom line is that there's no AI strategy without a talent strategy — and no talent strategy without leadership. Yes, compensation still matters and the global shortage of AI professionals is real. But throwing more money at the problem won't fix a culture that's broken.
Attracting and retaining AI talent is not just about who you hire, but more about how you lead. The AI talent gap, according to Machado, isn't simply a hiring problem — it's a leadership one. She added that 'this problem at its core is about trust: trust in your people, in your strategy and in your capacity to lead through change.' If AI companies want to stay competitive, the message from Glenn and Machado is that they'll need more than advanced models. They'll need leaders who can think forward, act with empathy and build environments where AI professionals can thrive. 'Innovation stalls when leadership fails. But with the right leadership? AI becomes a force multiplier, not a flight risk,' Glenn said.

Zawya
12-05-2025
- Business
- Zawya
Cassava Technologies and Zindi collaborate to showcase African Artificial Intelligence (AI) innovation
Cassava Technologies, a global technology leader of African heritage, is pleased to announce that it has signed a Memorandum of Understanding (MOU) with Zindi, the largest professional network for data scientists in Africa, to deliver artificial intelligence (AI) solutions and GPU-as-a-Service (GPUaaS) across the African continent. This partnership represents a significant step in accelerating digital transformation in Africa and will see the two organisations collaborate on several initiatives. These include using Cassava's GPUaaS capabilities for Zindi's AI solution development and identifying opportunities for both organisations to leverage one another's platforms and ecosystems.

'For Africa's AI ecosystem to grow and thrive, it is essential to provide platforms and resources for the continent's developers and start-ups. Combining our data centres' advanced GPU capabilities with Zindi's innovative data science platform creates a powerful foundation for digital advancement. Cassava is proud to support local stakeholders as they develop digital solutions to some of Africa's most pressing problems,' said Hardy Pemhiwa, President and Group CEO of Cassava Technologies.

As Africa's pioneering AI challenge platform, Zindi collaborates with companies, non-profit organisations, and government institutions to develop, curate, and prepare data-driven challenges. This partnership underscores their mutual commitment to nurturing AI talent and innovation throughout the continent. 'Zindi is thrilled at the opportunity to partner with Cassava Technologies to strengthen African datasets, address local problems with locally-developed solutions, and help more African AI builders access the resources they need to succeed. Collaborating on the launch of a challenge specifically aimed at nurturing Africa's AI talent will not only expose entrepreneurs and innovative solutions; it will help build new skills and create employment opportunities,' said Zindi CEO and Co-Founder, Celina Lee.

With the signing of this MOU, Cassava and Zindi are set to make significant inroads in Africa's AI landscape. The partnership supports Cassava's objective of providing world-class digital solutions and advancing responsible AI adoption, innovation, and growth in Africa. This follows the recent announcement of Cassava's plans to build Africa's first AI factory and the 2024 launch of its AI business. Distributed by APO Group on behalf of Cassava Technologies.

About Cassava Technologies: Cassava Technologies is a global technology leader of African heritage providing a vertically integrated ecosystem of digital services and infrastructure enabling digital transformation. Headquartered in the UK, Cassava has a presence across Africa, the Middle East, Latin America and the United States of America. Through its business units, namely Cassava AI, Liquid Intelligent Technologies, Liquid C2, Africa Data Centres, and Sasai Fintech, the company provides its customers products and services in 94 countries. These solutions drive the company's ambition of establishing itself as a leading global technology company of African heritage.

About Zindi: Zindi is the leading global AI community platform, established to make data science and artificial intelligence (AI) skills accessible to all.
With a network of over 80,000 data scientists registered on the platform, from more than 180 countries worldwide, Zindi helps data practitioners of all levels learn, grow and access professional opportunities while building real-world solutions for global organisations.


Geeky Gadgets
10-05-2025
- Business
- Geeky Gadgets
Tested: Apple's $11,000 Mac Studio (2025) - Is It Worth the Cost?
The Apple Mac Studio 2025, featuring the advanced M3 Ultra chip, sets a new benchmark for desktop computing tailored to professionals. Designed with developers, data scientists, and power users in mind, this compact yet powerful machine delivers exceptional performance, particularly for AI workloads, multitasking, and data-intensive tasks. However, its premium price and lack of upgradability may limit its appeal to a niche audience that demands specialized capabilities. The video from XDA gives us a detailed look at the $11,000 Mac Studio.

Exceptional Performance with the M3 Ultra Chip

At the heart of the Mac Studio 2025 is the innovative M3 Ultra chip, a technological achievement that redefines performance standards. Boasting up to a 32-core CPU, an 80-core GPU, and an impressive 512 GB of unified memory, this chip is engineered to handle the most demanding professional workflows. Its unified architecture allows seamless memory sharing between the CPU and GPU, making it particularly well-suited for running large AI models locally. The M3 Ultra excels in both single-core and multi-core performance, surpassing many flagship x86 processors while maintaining superior energy efficiency. Even under heavy workloads, the Mac Studio consumes significantly less power than traditional x86-based systems, offering a compelling combination of performance and sustainability. For professionals who prioritize energy-efficient computing without compromising on speed or capability, the Mac Studio 2025 is a standout choice.

Unified Memory Architecture: Transforming AI and Data Workflows

One of the defining features of the Mac Studio 2025 is its unified memory architecture, which eliminates the inefficiencies of traditional systems where memory is divided between the CPU and GPU. With up to 512 GB of shared memory, the system enables faster and more efficient processing, particularly for tasks involving massive datasets or complex AI models. This architecture is a significant advantage for data scientists, developers, and AI researchers. It allows for the local execution of tasks that would typically require cloud-based resources, reducing latency and offering greater control over sensitive data. By streamlining workflows and eliminating bottlenecks, the Mac Studio enables professionals to tackle computationally intensive projects with ease.

Comprehensive Connectivity for Professional Needs

The Mac Studio 2025 is equipped with a robust array of connectivity options, ensuring compatibility with a wide range of peripherals and allowing high-speed data transfers. The front panel provides two Thunderbolt 5 ports and an SD card slot for quick and convenient access. On the back panel, users will find:
- Four additional Thunderbolt 5 ports
- A 10Gb Ethernet port
- Two USB-A ports
- An HDMI 2.1 port
- A headphone jack
For wireless connectivity, the Mac Studio supports Bluetooth 5.3 and Wi-Fi 6E, ensuring reliable and fast connections. However, the absence of Wi-Fi 7 may be a drawback for those seeking the latest wireless standards. Despite this, the comprehensive connectivity options make the Mac Studio a versatile tool for professionals who rely on multiple devices and peripherals in their workflows.

Optimized for Demanding Professional Workflows

The Mac Studio 2025 is purpose-built to meet the needs of professionals who require exceptional computational power and multitasking capabilities.
Developers will appreciate its ability to compile large codebases rapidly, while data scientists can use its capacity to process and analyze extensive datasets efficiently. Additionally, the machine excels in AI workloads, allowing the local execution of models that would otherwise depend on cloud infrastructure. Creative professionals, such as video editors and graphic designers, will also benefit from the Mac Studio's ability to handle high-resolution media and complex rendering tasks. However, its specialized features and high price point make it less suitable for general users or those with basic computing needs.

Key Limitations to Consider

While the Mac Studio 2025 offers impressive capabilities, it is not without its limitations. One of the most notable drawbacks is the lack of internal upgradability, which means users must carefully select their configuration at the time of purchase. Additionally, storage upgrades are prohibitively expensive, further emphasizing the importance of choosing the right specifications upfront. Although the M3 Ultra chip delivers robust GPU performance, it falls short of matching the power of high-end discrete GPUs, which may be a concern for professionals in fields like 3D rendering or advanced gaming. Combined with its steep starting price, these limitations make the Mac Studio a less appealing option for users who do not require its advanced features or specialized capabilities.

A Specialized Tool for Professionals

The Apple Mac Studio 2025 stands out as a compact yet powerful desktop designed for professionals working in AI, data science, and other demanding fields. Its unified memory architecture, energy efficiency, and exceptional performance make it a unique solution for specialized use cases. However, its high cost, lack of upgradability, and niche focus mean it is best suited for those who can fully use its capabilities. For developers, data scientists, and professionals seeking top-tier performance in a compact form factor, the Mac Studio 2025 is a worthy investment. For general users, more versatile and affordable alternatives may be a better fit.

Source & Image Credit: XDA