Lyft is getting into the robotaxi game with a big Chinese company
Lyft plans to deploy thousands of robotaxis in Europe, starting in the UK and Germany in 2026.
This comes as the robotaxi race is heating up globally.
Lyft is teaming up with Chinese tech giant Baidu to put Baidu's robotaxis on European streets.
The San Francisco-based ride-hailing company will deploy Baidu's sixth-generation Apollo Go robotaxis in Europe starting next year, it said in a Monday press release.
According to the release, Lyft will roll out thousands of Baidu vehicles across Europe over the following years, starting with the United Kingdom and Germany in 2026, pending regulatory approval.
"By integrating Baidu's cutting-edge autonomous driving technology with Lyft's platform reach and operational expertise, we're excited to deliver safer, greener, and more efficient mobility solutions to more users," Robin Li, Baidu's CEO and cofounder, said in the release.
Lyft CEO David Risher added in the release that the partnership will bring the benefits of autonomous vehicles, such as safety, reliability, and privacy, to "millions of Europeans."
Baidu, a Chinese internet search engine giant, launched its Apollo Go electric autonomous vehicles in 2020. According to its website, Apollo Go provides autonomous ride-hailing services in 11 Chinese cities. And earlier this year, Apollo Go announced it would expand its robotaxis to Dubai and Abu Dhabi in 2026.
According to a July 2024 report by the Chinese state-affiliated news outlet Global Times, a 10-kilometer Apollo Go ride in Wuhan costs between 4 and 16 Chinese yuan, or about $0.60 to $2.20.
On July 31, Lyft announced its acquisition of Freenow, a ride-hailing service operating in nine European countries, in a move to expand into the European market.
The partnership also lands as Tesla pushes for robotaxi dominance. On July 31, Tesla CEO Elon Musk wrote on X that riders could hail a Tesla robotaxi in the San Francisco Bay Area and Austin.
However, a July report from HSBC analysts found that it could take years for driverless taxis to become profitable and that the market was "widely overestimated."