
Strategic Insights of Generative AI and Its Automotive Use Cases, 2025 Research Report - AI-Driven Voice Assistants to Become Standard in Vehicles by 2033
Generative AI (Gen AI) continues to create an impact across industries, including automotive. Over the next decade, Gen AI is expected to significantly influence automotive value chains, increasing efficiency and productivity and reducing waste. In the supply chain, it will be used to optimize inventory and demand forecasting, while in product development it can generate design ideas, accelerate prototyping and testing, and shorten time-to-market.
In manufacturing, it can be used to reduce bottlenecks on the assembly line and give production engineers real-time insights for informed decisions. In retail, it can provide 24/7 sales support to dealerships, executing human-like interactions with potential customers. Inside the vehicle, Gen AI-driven voice assistants will become standard features in the next decade.
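To make the in-vehicle use case concrete, the minimal sketch below shows one way a voice assistant could ground its replies in live vehicle data. It is an illustration only: the OpenAI client and model name stand in for whichever LLM provider an OEM actually integrates, and read_vehicle_state is a hypothetical telemetry hook, not a real vehicle API.

```python
# Minimal sketch of one GenAI voice-assistant turn grounded in vehicle telemetry.
# Assumptions: the OpenAI Python SDK stands in for any LLM provider, and
# read_vehicle_state() is a hypothetical hook into the vehicle's data bus.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def read_vehicle_state() -> dict:
    """Hypothetical telemetry hook; a real system would query the vehicle's CAN bus or API."""
    return {"fuel_range_km": 112, "tire_pressure_ok": False, "next_service_km": 480}

def assistant_reply(driver_utterance: str) -> str:
    state = read_vehicle_state()
    system_prompt = (
        "You are an in-vehicle voice assistant. Answer briefly and conversationally, "
        f"using this live vehicle state when relevant: {state}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": driver_utterance},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(assistant_reply("Do I need to stop for anything before a 200 km drive?"))
```

In a production system, speech-to-text and text-to-speech layers would sit on either side of this call, and the telemetry passed to the model would be filtered for privacy.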
Beyond the automotive lifecycle, Gen AI will make a significant contribution to autonomous vehicle development and operation. Gen AI start-ups focused on solutions for autonomous vehicles will emerge in the coming decade, driving innovation and fast-tracking the development of such vehicles.
Gen AI will support shared mobility by increasing the efficiency of fleet management, ride booking, pricing, and route planning. In logistics, it will play a key role in choosing the optimal vehicle for a trip, planning routes based on real-time traffic data, and supporting customer service. Gen AI will also play a key role across corporate functions, including Finance, Human Resources, and Marketing.
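As an illustration of the logistics use case above, the following sketch pairs a simple rule-based capacity filter with an LLM that weighs a real-time traffic note when recommending a vehicle. The fleet data, traffic text, and model choice are assumptions made for the example, not part of the report.

```python
# Sketch: choosing a vehicle for a trip by combining a deterministic payload/range
# filter with an LLM recommendation that considers real-time traffic conditions.
# The fleet, traffic note, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

FLEET = [
    {"id": "VAN-02", "payload_kg": 800, "range_km": 350, "location": "Depot A"},
    {"id": "TRK-07", "payload_kg": 3500, "range_km": 600, "location": "Depot B"},
    {"id": "EV-11", "payload_kg": 600, "range_km": 220, "location": "Depot A"},
]

def recommend_vehicle(load_kg: int, distance_km: int, traffic_note: str) -> str:
    # Hard constraints first: payload and range must fit the trip.
    candidates = [v for v in FLEET if v["payload_kg"] >= load_kg and v["range_km"] >= distance_km]
    if not candidates:
        return "No vehicle in the fleet can handle this trip."
    prompt = (
        f"Trip: {load_kg} kg over {distance_km} km. Real-time traffic: {traffic_note}\n"
        f"Candidate vehicles: {candidates}\n"
        "Recommend one vehicle ID and give a one-sentence reason."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(recommend_vehicle(700, 180, "Accident on the ring road; expect 40-minute delays northbound."))
```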
Growth Drivers
The Rapid Technological Transformation in the Automotive Industry: The automotive industry has seen transformational shifts in the past decade, with automakers embracing innovative technologies to digitize operations. GenAI will augment this transformation.
Endeavor to Enhance the In-Cabin Experience: Automakers are constantly looking for ways to enhance the in-cabin experience for their customers. GenAI will enable OEMs to offer next-generation human-machine interfaces through GenAI-powered voice assistants.
Improve Efficiency and Profitability: GenAI can help automate redundant tasks and, coupled with a voice assistant, provide real-time status updates on active tasks, thereby speeding up decision-making. This reduces the operational workload and improves the bottom line.
Focus on Increasing the Quality of Customer Engagement: Companies intend to enhance customer engagement and satisfaction by leveraging new technologies. GenAI enables companies to increase the efficiency of customer support chatbots, facilitating 24/7 support in natural languages, as sketched below.
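The sketch below shows a minimal multi-turn dealership support chatbot of the kind the last driver describes. It is a hedged illustration under stated assumptions: the OpenAI SDK and model name stand in for any LLM provider, and the escalation rules in the system prompt are hypothetical.

```python
# Sketch: a 24/7 dealership support chatbot that keeps multi-turn conversation context.
# The OpenAI SDK and model name stand in for any LLM provider a dealer group might use.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a support assistant for an automotive dealership. "
    "Answer questions about test drives, service bookings, and financing in the customer's language. "
    "If a request needs a human (complaints, price negotiation), say so and offer a callback."
)

def chat_loop() -> None:
    history = [{"role": "system", "content": SYSTEM}]
    while True:
        user = input("Customer: ").strip()
        if not user or user.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user})
        response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(f"Assistant: {reply}")

if __name__ == "__main__":
    chat_loop()
```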
Growth Restraints
Data Privacy Concerns: Instances of sensitive company data leaking into the public domain through GenAI applications make companies uneasy and apprehensive about adopting this technology.
Issues with Reliability: GenAI tools have been found to hallucinate and produce factually incorrect information, which makes the technology appear unreliable in the short term. As the technology matures, accuracy will improve.
Regulatory Issues: Governments may view GenAI with suspicion over the ethics of its use by companies and private citizens and its potential negative impact on various facets of society. This could result in adverse regulations against the use of GenAI.
Practicality: To effectively leverage the power of GenAI, organizations need large datasets and significant computing power, which could be difficult to achieve in the short term.
Impact of the Top 3 Strategic Imperatives on GenAI in Automotive Market
Customer Value Chain Compression
Generative AI (GenAI) can automate customer interaction touchpoints that previously could not be automated efficiently.
GenAI can understand customer requirements and respond accordingly, almost creating a humanlike conversation when coupled with a voice assistant.
Disruptive Technologies
GenAI is a disruptive technology that can ease workloads and help employees across an organization achieve their goals faster and with greater accuracy and efficiency.
Industry Convergence
GenAI large language models (LLMs) offer numerous possible use cases for all industries, including automotive.
GenAI has the potential to accelerate innovations in next-generation technologies and products.
Scope of Analysis
Examine possible generative AI use cases in the automotive value chain.
Look at evolving use cases and analyze them to understand their impact on the future automotive value chain.
Break down the automotive value chain into subsegments and discuss potential applications.
Analyze generative AI's potential integration into existing automotive products and solutions.
Examine the challenges associated with generative AI in the workplace and use cases that highlight the concerns regarding this technology.
Derive growth opportunities and key takeaways stemming from the analysis.
Key Topics Covered:
Scope
Scope of Analysis
Why is it Increasingly Difficult to Grow?
The Strategic Imperative
Impact of the Top 3 Strategic Imperatives on GenAI in Automotive Market
Growth Environment: Transformation of GenAI in Automotive
Key Takeaways
Key Automotive Ecosystem Players Using GenAI
Key Automaker GenAI Use Case By Value Chain
GenAI Types and Impact on Automotive Use Case by 2030
Potential Applications of GenAI in Key Corporate Functions
Car Models with GenAI Voice Assistants
Price to Access GenAI Voice Assistants in Cars
GenAI in Automotive - Current Adoption vs Future Impact
Case Study - Valeo
Growth Generator in GenAI in Automotive
Growth Drivers
Growth Restraints
Overview of GenAI
Key Reasons for Organizations to Adopt GenAI
GenAI Vs Traditional AI
The Automotive GenAI Ecosystem - Key Stakeholders
Select GenAI Tools by Application
Most Popular GenAI Tools
IT Decision-Makers' Perception Towards GenAI
Challenges - Concerns Around GenAI in Organizations
Emerging AI Regulations by Key Regions
Overview of GenAI - Key Takeaways
GenAI in Supply Chain
Snapshot of Possible GenAI Applications in Automotive
Possible Use Cases of GenAI in Automotive Supply Chain
GenAI in Automotive Supply Chain - Certainty of Application
GenAI in Supply Chain - Key Takeaways
GenAI in Manufacturing
Possible Use Cases of GenAI in Automotive Product Design
Select GenAI Image Generator Tools
Case Study - Product Design-Toyota
Case Study - Ferrari
GenAI in Automotive Design - Certainty of Application
Possible Use Cases of GenAI in Automotive Production
GenAI in Automotive Production - Certainty of Application
GenAI Adoption by OEMs in Production
Case Study - Mercedes-Benz
Overview of GenAI in Manufacturing - Key Takeaways
GenAI in Automotive Retail
Possible Use Cases of GenAI in Automotive Retail
Case Study - Marketing: Lexus/Toyota
Automotive Retail - Sales Scenario Using GenAI
Case Study - Sales: Fullpath
Case Study - Sales: BMW + Accenture
Case Study - Customer Management: Ford
GenAI in Automotive Retail - Certainty of Application
Overview of GenAI in Retail - Key Takeaways
In-Vehicle GenAI
Possible Use Cases of In-Vehicle GenAI
OEMs with GenAI-Powered Voice Assistants
Case Study - Nio AI Voice Assistant
Case Study - Mercedes-Benz
Case Study - Linguistic Localization: KIA
Automotive Suppliers Focus on GenAI Voice Assistant
Case Study - Soundhound AI
Case Study - Continental & Google
Case Study - Cerence-Skoda
GenAI in the Vehicle Cabin - Certainty of Application
GenAI in the Vehicle Cabin - Key Takeaways
Beyond the Automotive Life Cycle
Possible Use Cases of GenAI in Autonomous Vehicles
Evolving GenAI Case Studies in Autonomous Driving
Case Study - Forvia Hella
Possible Use Cases of GenAI in Shared Mobility
Case Study - Turo
Possible Use Cases of GenAI in Logistics
GenAI Use Case Scenario in Logistics
GenAI Beyond Automotive Life Cycle - Key Takeaways
Growth Opportunity Universe
Growth Opportunity 1: Subscription Revenue Through Generative AI Powered Voice Assistants
Growth Opportunity 2: Usher in an Era of AI-driven Manufacturing
Growth Opportunity 3: Transform Customer Engagement
Appendix & Next Steps
Benefits and Impacts of Growth Opportunities
Next Steps
Companies Featured
Accenture
BMW
Cerence-Skoda
Continental
Ferrari
Ford
Forvia Hella
Fullpath
KIA
Lexus/Toyota
Mercedes-Benz
Nio
Soundhound AI
Toyota
Turo
Valeo
For more information about this report visit https://www.researchandmarkets.com/r/1h4uvw
About ResearchAndMarkets.com
ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.