Have thoughts on e-scooter safety, parking in Austin? Here's how to weigh in

Yahoo · 18-03-2025

AUSTIN (KXAN) — The city of Austin is in the process of revisiting its rules surrounding electric scooters and e-bikes — and wants community input on how its micromobility program should look moving forward.
City leaders launched a shared e-scooter and e-bike survey to gather public input on device usage and availability, safety and parking in Austin. An open house will be held on March 24 from 4:30 p.m. to 6:30 p.m. at the Austin Public Library's Carver branch, located at 1161 Angelina Street in east Austin.
'We want to know how you use e-scooters and e-bikes, your thoughts on safety and what you think about parking the devices,' city officials wrote in an Austin Mobility newsletter blurb Monday. 'The goal is to ensure Austin's transportation system gives everyone safe and more convenient ways to get around town.'
Currently, Austin is home to two shared mobility providers: Bird and Lime. Bird operates a fleet of 3,000 scooters in town, while Lime's Austin fleet includes 3,700 scooters and 180 e-bikes.
Last March, the Austin Transportation and Public Works Department unveiled several changes to scooter use in Austin, including a citywide fleet reduction that cut the e-scooter cap from 8,700 devices to 6,700, and a downtown cap reduction from 4,500 to 2,250. Last spring, the department flagged concerns about serious injuries and safety issues linked to e-scooter use.
Despite the overall and downtown device caps, officials noted exceptions for larger-scale events, such as the South by Southwest Conference & Festivals and the Austin City Limits Music Festival. Data provided by Lime on Tuesday showed an 86% increase in ridership in Austin during this year's SXSW, with more than 139,000 trips logged by riders between March 7 and March 15.
E-SCOOTER NEWS: Austin's e-scooter limit sees decline in ridership, less clutter
'Austin's festivals drive some of Lime's highest ridership weeks ever, and we went all out to prepare for festival season this year with record-breaking results,' said Chris Betterton, Lime's senior operations manager, in the release. 'We credit the city of Austin for their close collaboration in creating a transportation plan that helped attendees and residents get where they were going safely and sustainably. We'll take all the lessons learned this year and apply them to make next year's festival season even better.'
Following the spring 2024 fleet size reduction, new findings in December revealed a slight downturn in citywide trips but improvements in minimizing device clutter on city sidewalks, curbs and rights of way, officials said.
'The new fleet cap has had minimal impact on the individual vendors while leading to less clutter on downtown sidewalks and increasing trips per device,' the December memo read in part.
Back in August 2024, an audit from the Austin Auditor's Office highlighted faults in the city's e-scooter crash data tracking system, noting the city had 'a lack of complete and reliable data' that, in turn, impacts the city's ability to make safety changes or recommendations. Those faults limited the city's ability to determine how many e-scooter crashes have happened, to identify possible trends and patterns linked to crashes, and to properly offer educational outreach or recommend rule changes.
RELATED: Austin audit finds faults in city's e-scooter crash data
The audit recommended the Austin Transportation and Public Works Department work to define terms like e-scooter, collision and crash in its protocol and collaborate with APD and ATCEMS to 'establish standardized coding for e-scooter crashes, with the goal to enhance safety related data.'
It also suggested the city's transportation department continue to meet with e-scooter vendors on a monthly basis to discuss continuing or growing problems, outline possible solutions for minimizing those issues and communicate with vendors about any city-level e-scooter operational changes. Those recommendations came with a suggested implementation deadline of March 2025.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


Related Articles

This Indian GenAI Startup Is Reshaping Dubbing and Lip Sync

Forbes

a day ago


Co-founders of NeuralGarage after their SXSW win in March 2025.

Ever watched a film that felt odd because you were watching the dubbed version? The lip movements often do not match what you hear, right? The Indian startup NeuralGarage offers an AI-powered solution that addresses this long-standing problem of "visual discord" in dubbing. In an exclusive interview, Mandar Natekar, co-founder and CEO of NeuralGarage, shares details on the technology, Visual Dub, which fixes lip-sync and facial expressions for dubbed content and even works when the script changes. Natekar explains how the technology synchronizes actors' lip movements and facial expressions with the dubbed audio, creating an authentic and immersive viewing experience and eliminating the visual awkwardness often found in traditional dubbing.

Earlier this year, the world's first movie with AI-powered visual dubbing, the Swedish sci-fi adventure film Watch the Skies, was released in theatres; the Los Angeles-based movie-making AI firm Flawless worked on the visual dub for the English-dubbed version. NeuralGarage's Visual Dub likewise adjusts facial expressions and lip movements for dubbed versions without any fresh shoots.

Asked how his technology enhances the experience of watching dubbed versions of world cinema, Natekar says, 'We've also developed our own voice cloning technology. Let us say there's a Tom Cruise film that has been dubbed in Hindi. Obviously, Tom Cruise's lines will get dubbed by a Hindi dubbing artist - but he does not sound like Tom Cruise. Apart from ensuring that the lip-sync matches the Hindi version, we can even make the Hindi dubbing artist sound like Tom Cruise.'

'With our lip-sync technology and our voice cloning technology, we can actually now make the dubbed content look and sound absolutely natural, as if it has been shot and filmed in the language of the audio itself.'

SXSW win

In March 2025, NeuralGarage made history by winning the SXSW Pitch Competition, becoming the first Indian startup to bag the award. Its Visual Dub technology won in the "Entertainment, Media, Sports & Content" category. Recalling the moment, Natekar says, 'SXSW is one of the most prestigious platforms when it comes to entertainment worldwide. This is a platform where people in the business join talent from across the world, including Hollywood. You get to meet people from Paramount, Universal, Warner Brothers - business executives, actors, directors… all of them come to the festival. This competition highlights some of the best startups in the world that could contribute to the entertainment industry. Winning meant we were judged by a jury of people in the business and people in the investor ecosystem - very big VCs were present - and that is intense validation for the technology we built, both in terms of the potential use cases and in terms of the potential business valuation.'

'Winning the award gives us a lot of credibility. Being in the US makes us more easily marketable - the entire entertainment industry is kind of located here. Now the SXSW award gives us instant credibility. We've been getting queries from some of the largest studios in the world and broadcast operations on how we can work together ever since the awards.'
Challenges of building NeuralGarage

Recalling his journey, Natekar says, 'As a co-founder of the company - and I have three other co-founders - there are challenges. I spent more than 22 years in the entertainment business in India before co-founding my own company. The startup world is totally different from corporate life. The startup world is completely DIY - you have to do everything yourself. It has been a very interesting adventure - unlike corporate life, where you work to fulfill somebody else's dream, here you have the chance to turn your own dreams into reality and create your own legacy. There are ups and downs, but they are part and parcel of life. Some days you wake up thinking you'll win the world. Some days you go to bed thinking, "Man, is it all worth it?" But then you wake up in the morning and restart.'

'It is all very interesting. It's been four years now since we started up. And in the last one year, since we've put our technology out, we've seen massive success and validation. We got selected by AWS and Google for their global accelerators. We got selected by TechCrunch to participate in TechCrunch Battlefield in SF last October. We also won the L'Oreal Big Bang Beauty Tech Innovation Competition, and then came this win at SXSW. Our ambition is to build software in India that can actually create a global brand. And we are on our way there.'

A few years ago, right in the middle of raising funds for his startup, Natekar faced major medical and personal hurdles. He declines to revisit that time or dwell on the hardships, but agrees to share what he learned from the period of struggle. 'I'll tell you my biggest learning - in your life, there are three very strong pillars for any successful person. The first one is obviously your own determination and thought process, while the second one is family. The third pillar is health. You have to ensure that all of these pillars are on a very, very strong foundation. You have to nurture all of them. If anything goes wrong in any one of these three, it can cause massive upheaval in your life.'

Suggestions for aspiring tech startup founders

'I tell people to always chase dreams. If you think that you have a compelling idea that can change the world, work on it. And there is no better time to start on anything you want to do than now. Generally, people procrastinate - "I'll build this after five years" - but these plans don't work. If you are passionate about something and have a compelling idea that you want to bring to the world, do it now. There is no better time than this moment. If you base your decision-making on goalposts, you will always be calculating,' Natekar signs off with his suggestions for aspiring tech startup founders.

(This conversation has been edited and condensed for clarity.)

Meta Says Its New AI Model Understands Physical Rules Like Gravity

CNET

4 days ago


A new generative AI model Meta released this week could change how machines understand the physical world, opening up opportunities for smarter robots and more, the company said. The new open-source model, called Video Joint Embedding Predictive Architecture 2, or V-JEPA 2, is designed to help artificial intelligence understand things like gravity and object permanence, Meta said.

"By sharing this work, we aim to give researchers and developers access to the best models and benchmarks to help accelerate research and progress," the company said in a blog post, "ultimately leading to better and more capable AI systems that will help enhance people's lives."

Current models that allow AI to interact with the physical world rely on labeled data or video to mimic reality, but this approach emphasizes the logic of the physical world, including how objects move and interact. The model could allow AI to understand concepts like the fact that a ball rolling off a table will fall. Meta said the model could be useful for devices like autonomous vehicles and robots by ensuring they don't need to be trained on every possible situation. The company called it a step toward AI that can adapt like humans can.

One struggle in the space of physical AI has been the need for significant amounts of training data, which takes time, money and resources. At SXSW earlier this year, experts said synthetic data -- training data created by AI -- could help prepare a more traditional learning model for unexpected situations. (In Austin, the example used was the emergence of bats from the city's famed Congress Avenue Bridge.) Meta said its new model simplifies the process and makes it more efficient for real-world applications because it doesn't rely on all of that training data.

The next steps for world models include training models that are capable of learning, reasoning and planning across different time and space scales, making them better at breaking down complicated tasks. Multimodal models, which can use other senses like audio and touch in addition to vision, will also help future AI models understand the real world.

