
The Top Nine Technology Trends Reshaping Life Sciences Supply Chains
Therapeutic complexity, FDA regulatory policy changes, geopolitical shocks and an accelerating AI race are forcing life sciences supply chain leaders to rewrite their playbooks. Since my 2023 market outlook, the conversation has shifted from digital transformation to digital autonomy, whereby networks can sense, decide and act for themselves. I've pulled together nine trends from my client engagements with the world's largest pharmaceutical companies, complemented by analyst data, new surveys and live deployments. These trends set the agenda for the rest of the decade.
Nine Life Sciences Supply Chain Tech Trends
1. Agentic AI-Powered Control Towers
Traditional control towers relied on manually stitching together data and dashboards. That changes with agentic AI. The pharma clients I advise are running AI agents to improve demand forecasting, optimize inventories and reduce lead times in real time. In McKinsey's 2024 State of AI survey, supply chain and inventory management was the most cited function for material revenue gains from generative AI deployments, with 53% of adopters already seeing uplift.
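To make the sense-decide-act loop concrete, here is a minimal sketch of an inventory agent. The SKU data, lead times and the 95% service-level target are hypothetical illustrations, not a production design.

```python
# Minimal sense-decide-act inventory agent (illustrative only; SKU data,
# lead times and the service-level target are hypothetical).
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class SkuState:
    on_hand: int            # units in stock
    on_order: int           # units ordered but not yet received
    lead_time_days: int     # supplier lead time
    daily_demand: list      # recent daily demand observations

def reorder_quantity(sku: SkuState, z: float = 1.65) -> int:
    """Decide: order up to expected lead-time demand plus safety stock
    (z = 1.65 targets roughly 95% service under normal demand)."""
    mu, sigma = mean(sku.daily_demand), stdev(sku.daily_demand)
    target = mu * sku.lead_time_days + z * sigma * sku.lead_time_days ** 0.5
    return max(0, round(target - (sku.on_hand + sku.on_order)))

# Sense -> decide -> act. The "act" here is a print; a real agent would
# write a purchase requisition back to the ERP.
vial_stopper = SkuState(on_hand=1200, on_order=0, lead_time_days=14,
                        daily_demand=[95, 110, 102, 88, 120, 97, 105])
qty = reorder_quantity(vial_stopper)
if qty > 0:
    print(f"Agent action: raise purchase order for {qty} units")
```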
2. Digital Twins Go From Line To Network
Second-generation twins fuse process historians, ERP signals and risk feeds, letting teams stress-test change orders virtually before a vial is filled. AstraZeneca's World Economic Forum Lighthouse site in Sweden logged a 56% productivity jump after scaling line-to-network twins along with other digital solutions. IDC analysts estimate "that by 2027, 35% of G2000 companies will employ supply chain orchestration tools featuring digital twin capabilities, with the aim of boosting supply chain responsiveness by 15%."
(Disclosure: AstraZeneca is a Microsoft client.)
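A toy example shows the twin principle: simulate a fill-finish line under a proposed change order before committing it. All rates, reject probabilities and the batch size below are invented for illustration.

```python
# Toy digital-twin sketch: test a change order virtually before any vial
# is filled. Rates, reject probabilities and batch size are invented.
import random

def hours_to_release(fill_rate_vph: float, inspect_rate_vph: float,
                     reject_prob: float, vials: int, seed: int = 7) -> int:
    """Hours needed to release `vials` good vials through fill -> inspect."""
    random.seed(seed)
    throughput = int(min(fill_rate_vph, inspect_rate_vph))  # line bottleneck
    good, hours = 0, 0
    while good < vials:
        hours += 1
        good += sum(1 for _ in range(throughput) if random.random() > reject_prob)
    return hours

baseline = hours_to_release(fill_rate_vph=2000, inspect_rate_vph=2400,
                            reject_prob=0.02, vials=100_000)
# Change order: an 80% faster filler, slightly higher rejects at inspection.
changed = hours_to_release(fill_rate_vph=3600, inspect_rate_vph=2400,
                           reject_prob=0.03, vials=100_000)
print(f"Baseline: {baseline} h; after change order: {changed} h")
```

In this toy run, the 80% faster filler shortens release time by only about a sixth because inspection becomes the new bottleneck, exactly the kind of finding a twin surfaces before capital is committed.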
3. Continuous Manufacturing Goes Mainstream
Since ICH Q13 became final in 2023, the U.S. Food and Drug Administration (FDA) and European Medicines Agency (EMA) have moved continuous manufacturing (CM) from pilot to mainstream review, pushing the number of FDA-approved CM applications into the double digits. Shop floors are now AI-equipped with vision systems and autonomous mobile robots. For example, VisionNav Robotics' autonomous forklifts extend continuous flow to material handling through multilayer frame stacking, improving efficiency.
4. Cell And Gene Therapy Supply Chains
Cell and gene therapies (CGTs) are evolving fast and demand even faster supply chains: this living medicine is collected from a single patient, manufactured and returned to that same patient. Market analysts put the CGT supply-chain and cold-chain segment at about $1.5 billion today but forecast it to reach $4.4 billion by 2034. Transparent, modular and trustworthy point-of-care logistics turn a historically linear, week-long CGT supply chain into a high-velocity loop that keeps patients confident that the right cells arrive on time, every time.
5. Tariff Agility Through Sourcing Optionality
Leading pharma executives now treat sourcing optionality, meaning multiple tariff-agnostic supply sources, as a strategic hedge. The pharma companies I advise are upgrading their digital backbones for greater tariff agility. GSK's CEO highlighted how diversified manufacturing footprints flatten the curve of tariff exposure, allowing the company to protect U.S. margins. Technology can mitigate tariff turbulence through AI agents that continuously monitor tariffs, run proactive what-if analyses and act on sub-second telemetry to pivot production or sourcing before tariff costs hit the P&L.
(Disclosure: GSK is a Microsoft client.)
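A tariff what-if can be as simple as recomputing landed cost per unit across alternate sources under a hypothetical duty change. The sites, costs and rates below are made up for illustration.

```python
# Hedged sketch of a tariff what-if: compare landed cost per unit across
# alternate sources as duty rates change. All figures are hypothetical.
SOURCES = {
    "site_ireland":    {"unit_cost": 4.20, "freight": 0.35, "duty_rate": 0.00},
    "site_india":      {"unit_cost": 3.10, "freight": 0.55, "duty_rate": 0.10},
    "site_us_onshore": {"unit_cost": 4.90, "freight": 0.10, "duty_rate": 0.00},
}

def landed_cost(src: dict, duty_override: float | None = None) -> float:
    """Unit cost grossed up by duty, plus freight."""
    duty = src["duty_rate"] if duty_override is None else duty_override
    return src["unit_cost"] * (1 + duty) + src["freight"]

# What-if: a new 25% tariff hits the India lane.
for name, src in SOURCES.items():
    override = 0.25 if name == "site_india" else None
    print(f"{name}: ${landed_cost(src, override):.2f}/unit")
```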
6. The Factory Digital Backbone
Pharma's next performance leap won't come from a shiny new robot; it'll come from tapping into all data sources. A "factory digital backbone" streams real-time shop-floor signals to AI agents that solve the operational issues that plague pharma. It's the nervous system of agentic AI, providing the clean, trustworthy and continuous data flow needed for autonomous decision making in manufacturing. Eli Lilly, for example, is already turning raw telemetry into faster cycle times, higher yields and lower costs with predictive maintenance.
(Disclosure: Eli Lilly is a Microsoft client.)
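As a minimal sketch of what a backbone feeds, the snippet below flags drift in a sensor stream with a rolling z-score so maintenance can be scheduled before a failure. The tag values, window size and threshold are illustrative assumptions, not any company's implementation.

```python
# Backbone-style telemetry monitoring: a rolling z-score flags drift in a
# sensor stream. Window, threshold and readings are illustrative.
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    def __init__(self, window: int = 50, z_limit: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_limit = z_limit

    def update(self, value: float) -> bool:
        """Return True if the new reading is anomalous vs the recent window."""
        if len(self.readings) >= 10:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_limit:
                self.readings.append(value)
                return True
        self.readings.append(value)
        return False

monitor = DriftMonitor()
stream = [71.8, 72.1, 71.9, 72.0, 72.2, 71.7, 72.0, 71.9, 72.1, 72.0, 78.5]
for i, temp in enumerate(stream):
    if monitor.update(temp):
        print(f"Reading {i}: {temp} C flagged: schedule maintenance check")
```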
7. Ultra-Cold Chain Resilience
Nearly every high-growth therapy, from mRNA vaccines and CGTs to biologics like GLP-1 treatments, fails without a sustainable ultra-cold supply chain. Although forecast accuracy matters, a single temperature excursion can wipe out an entire lot in seconds and trigger shortages that no demand algorithm can undo. With the increased focus on biologics and CGTs, temperature-controlled products grew 21% faster than non-cold-chain drugs between 2018 and 2024.
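One standard tool for judging whether a brief excursion actually compromised a lot is mean kinetic temperature (MKT), which weights warm readings by their degradation impact. Here is a sketch; the logger readings are invented.

```python
# Cold-chain excursion check via mean kinetic temperature (MKT).
import math

def mean_kinetic_temp_c(temps_c: list[float],
                        delta_h_over_r: float = 10000.0) -> float:
    """MKT in Celsius; delta_h_over_r of about 10,000 K is the conventional
    value (activation energy ~83.144 kJ/mol over the gas constant)."""
    temps_k = [t + 273.15 for t in temps_c]
    avg = sum(math.exp(-delta_h_over_r / tk) for tk in temps_k) / len(temps_k)
    return delta_h_over_r / -math.log(avg) - 273.15

# Hourly logger readings from a lane rated 2-8 C, with one brief excursion.
readings = [5.0] * 20 + [12.0] * 2 + [5.0] * 20
mkt = mean_kinetic_temp_c(readings)
status = "excursion review needed" if mkt > 8 else "within label"
print(f"MKT = {mkt:.2f} C; {status}")
```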
8. Regulatory Digitization
Regulators are digitizing. FDA Commissioner Dr. Makary outlined a goal of cutting approval times "from 10 to 12 months to just one to two months" using AI decision aids. Companies that align AI models and Drug Supply Chain Security Act (DSCSA) data streams early can turn compliance from a monthly paperwork sprint into a minute-by-minute digital handshake, with faster approvals and an iron-clad audit trail.
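In spirit, that digital handshake is a serialized-identifier lookup: scan the product's GTIN, serial number, lot and expiry, then verify them against the manufacturer's records. The sketch below is a deliberately simplified stand-in, not a real verification router service API; the identifiers are fabricated.

```python
# Simplified DSCSA-style verification handshake (illustrative stand-in).
from datetime import date

REPOSITORY = {  # serialized identifiers the manufacturer commissioned
    ("00363391123457", "SN0001"): {"lot": "A123", "expiry": date(2026, 5, 31)},
    ("00363391123457", "SN0002"): {"lot": "A123", "expiry": date(2026, 5, 31)},
}

def verify(gtin: str, serial: str, lot: str, expiry: date) -> str:
    record = REPOSITORY.get((gtin, serial))
    if record is None:
        return "NOT FOUND: quarantine and investigate"
    if record["lot"] != lot or record["expiry"] != expiry:
        return "MISMATCH: possible relabeling, quarantine"
    return "VERIFIED"

print(verify("00363391123457", "SN0002", "A123", date(2026, 5, 31)))
print(verify("00363391123457", "SN9999", "A123", date(2026, 5, 31)))
```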
9. Quantum-Ready Security
Operational technology (OT) networks are vulnerable: they run large amounts of outdated technology, and the rapid digitization of factories opens holes that didn't exist before. Gartner now warns that applications protected by public-key cryptography algorithms will be unsafe by 2029. NIST's final FIPS 203/204/205 standards give plants a certified toolkit for mitigating quantum computing risks.
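A practical first step is a cryptographic inventory: find every certificate still anchored to quantum-vulnerable RSA or elliptic-curve keys. Here's a minimal sketch using Python's `cryptography` package; the directory path is a hypothetical OT certificate store.

```python
# Crypto-inventory sketch: flag certificates with quantum-vulnerable keys.
# Requires the `cryptography` package (pip install cryptography).
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def audit_certs(cert_dir: str) -> None:
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            risk = f"RSA-{key.key_size}: quantum-vulnerable, plan migration"
        elif isinstance(key, ec.EllipticCurvePublicKey):
            risk = f"ECC {key.curve.name}: quantum-vulnerable, plan migration"
        else:
            risk = "review manually"
        print(f"{pem.name} (expires {cert.not_valid_after:%Y-%m-%d}): {risk}")

audit_certs("/etc/plant-ot/certs")  # hypothetical OT certificate store
```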
Why All This Matters
Across all nine trends, one strategic mandate emerges: speed with trust. The winners of the next few years will run agentic AI on a real-time digital backbone with robust quantum-ready security. Understanding these trends will equip life sciences leaders with the knowledge to convert volatility into sustainable advantage.