
Tired of subway delays? The MTA wants to fix that by strapping Google smartphones to New York trains
Rob Sarno has been with New York City's Metropolitan Transportation Authority (MTA) for 14 years. As assistant chief track officer, he assists with maintenance and emergency response, which last year also meant teaching an artificial intelligence system what a damaged rail sounds like.
For a few months starting in September, he helped run a pilot program between the MTA and Google Public Sector, the search giant's division that works with government agencies and educational institutions. The project involved retrofitting certain subway cars with Google Pixel smartphones to collect sounds and other data and feed it into Google Cloud, where the data was analyzed to spot patterns that could indicate track defects before they become a problem.
'By being able to detect early defects in the rails, it saves not just money but also time – for both crew members and riders,' Demetrius Crichlow, New York City Transit president, said in a statement released February 27.
New York is just one major city that has implemented AI in the hopes of improving its transit system. In 2023, infrastructure consulting firm Aecom completed a pilot program for the New Jersey Transit system that used the technology to analyze customer flow and crowd management, and in 2024, the Chicago Transit Authority (CTA) began using AI to enhance security by detecting guns. Also in 2024, Beijing introduced a facial recognition system to be used in place of transit tickets and cards to reduce lines during rush hours.
The pilot program between the MTA and Google — dubbed TrackInspect — is just the latest indication that companies are exploring whether the technology can make transit more efficient, although whether such an initiative will ever be deployed widely remains to be seen.
TrackInspect, which was announced last week, started as a proof of concept developed for the MTA at no cost by Google Public Sector in partnership with its Rapid Innovation Team, according to the transit agency. However, it's uncertain whether the project will expand into a permanent program, since it's unclear how much it would cost the MTA, which already needs billions of dollars to complete existing projects.
Google has partnered with other transportation agencies in the past. The tech giant has developed a chatbot for Chicago's CTA, launched direct data integration for Amtrak departure and arrival times and partnered with tech providers Passport and ParkMobile to connect street parking meters to Google Maps.
But the MTA's reach is massive; it's the country's largest public transit system with 472 subway stations and 237 local bus routes, according to MTA data. In 2024, the agency says there were more than 1 billion trips on the subway.
Yet service disruptions remain a problem for the 120-year-old transit system. There were 38,858 delays in September, 39,492 in October, 36,971 in November and 42,862 in December last year, according to data from the MTA. Chicago's transit system, by comparison, experienced only about 200 delays of 10 minutes or more each month in September, November and December. But only two of Chicago's subway lines run 24 hours a day, unlike in New York, where most train lines run at all hours every day or have other lines fill in the gaps.
The goal of a program like TrackInspect is to figure out how to cut down on service disruptions.
Between September 2024 and January 2025, six Google Pixel smartphones with 'standard, off-the-shelf plastic cases' were installed on four R46 subway cars — better known as the cars with the orange and yellow seats. The smartphones collected 335 million sensor readings, 1 million GPS locations and 1,200 hours of audio, according to the MTA.
The smartphones, which were located inside and underneath the subway cars, detected subtle sounds and vibrations using sensors and microphones. The smartphones located inside cars had their native microphones disabled and did not capture audio or customers' conversations, only vibrations, whereas the smartphones outside the cars had additional attached microphones, according to the MTA. New York City Transit inspectors would examine areas highlighted by the AI system, manually check for issues and then feed those findings into the model to train it, the MTA said.
The system highlighted 'areas that were above a certain threshold for decibels,' which could indicate a defect, according to Sarno. His role involved listening to clips ranging from five to 30 minutes and marking snippets that could signal an issue.
'Maybe a loose bolt, maybe a loose joint, maybe a battered rail,' he said.
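The screening step Sarno describes amounts to a simple rule: flag any stretch of audio whose loudness crosses a threshold, then let a human decide what the noise means. A minimal Python sketch of that first step, where the window size, threshold, and function name are illustrative assumptions rather than details of the MTA's or Google's actual pipeline:

```python
import math

def flag_loud_windows(samples, window_size, threshold_db):
    """Flag audio windows whose RMS level (in dBFS) exceeds a threshold."""
    flagged = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        window = samples[start:start + window_size]
        rms = math.sqrt(sum(s * s for s in window) / window_size)
        # Guard against log(0) for perfectly silent windows
        db = 20 * math.log10(rms) if rms > 0 else float("-inf")
        if db >= threshold_db:
            flagged.append((start, db))
    return flagged

# A loud burst embedded in quiet background noise: only the
# middle window crosses the -20 dB threshold and gets flagged.
signal = [0.001] * 100 + [0.5] * 100 + [0.001] * 100
print(flag_loud_windows(signal, window_size=100, threshold_db=-20))
```

In a real system the flagged windows would then be queued for exactly the kind of human review Sarno performed, rather than treated as confirmed defects.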
When asked why the devices were retrofitted on older models instead of newer ones, Sarno said the MTA typically uses older car models when making modifications in case there are any unwanted effects.
The MTA chose the A line because its cars go above and below ground. It also has areas with new construction, which provided a baseline for the MTA, according to Sarno. And there is no shortage of disruptions on the A line: Data from New York Open Data — an online portal where city agencies provide raw data to promote transparency — shows there were 2,252 delays in September, 2,368 in October, 2,643 in November and 2,572 in December. But not all delays were caused by mechanical or track problems; factors like crew availability, people on the track and construction played a much bigger role in setbacks.
After NYCT track inspectors examined the tracks in person, they compared their findings with Sarno's assessments and the system's predictions.
'That's how we were teaching the model,' Sarno said. If his estimate based on the audio captured by Google's phones matched the inspector's findings, that was considered a positive prediction, and the AI model would be taught accordingly. Sarno said that his own positive prediction success rate was about 80 percent.
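That 80 percent figure is what machine-learning practitioners call precision: of all the spots flagged from the audio, the fraction that turned out to be real defects on inspection. A hypothetical illustration (the segment names are made up for the example):

```python
def precision(predictions, confirmed):
    """Fraction of flagged locations confirmed by in-person inspection."""
    if not predictions:
        return 0.0
    hits = sum(1 for loc in predictions if loc in confirmed)
    return hits / len(predictions)

# Hypothetical example: 4 of 5 flagged track segments confirmed
flags = ["seg-01", "seg-07", "seg-12", "seg-19", "seg-23"]
found = {"seg-01", "seg-07", "seg-12", "seg-23"}
print(precision(flags, found))  # 0.8
```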
In addition to capturing and analyzing data for potential issues, the TrackInspect program included an AI system based on Google's Gemini model that inspectors could use 'to ask questions about maintenance history, protocols, and repair standards, with clear, conversational answers,' according to the MTA.
The TrackInspect system identified 92 percent of the defect locations found by the MTA's inspectors and is considered to have been a success, a Google Public Sector spokesperson told CNN, adding that other transit systems have already expressed interest in similar programs.
New York Open Data showed that certain types of delays, such as those related to braking issues, rail and roadbed problems and service delivery, decreased on the A line from September to December. But it's too soon to tell whether the pilot contributed to that change without further analysis, the MTA said.
The trial with Google may be over, but the MTA isn't finished yet. It's now trying to court other companies with technology that could help develop track improvement software.
Related Articles


WIRED
This Chatbot Tool Pays Users $50 a Month for Their Feedback on AI Models
Jun 13, 2025 7:00 AM

On Yupp, chatbot users earn cash by saying which of two responses they prefer, information that has great value to the AI companies running the models.

To show off how easy it is for users to earn money by using his new chatbot platform, Pankaj Gupta offers to cash out $1 worth of Yupp credits, sending it to me over Venmo or PayPal. I'm talking with Gupta in the WIRED office during a prelaunch demo of Yupp, which comes out of stealth mode today. Journalistic ethics forbid accepting gifts from sources, so I politely decline. He proceeds to send it over PayPal to his Stanford alumni email instead.

Gupta is the CEO of Yupp, which is free to use and available globally. The website looks similar to other generative AI tools like ChatGPT: there's a prompt box, a way to attach files, and a log of past conversations. The main difference is that every time users ask Yupp a question, they see two answers, generated by two different models and displayed side by side. Yupp routes prompts to a pair of LLMs, choosing from a pool of more than 500 models that includes products from leading US generative AI companies like OpenAI, Google, and Anthropic, as well as international releases, like models from Alibaba, DeepSeek, and Mistral.

After looking over the two answers, users pick the response they like best, then provide feedback explaining why. For their effort, they earn a digital scratch card with Yupp credits. "You're not being employed, but you can make a little bit of money," says Gupta. In my testing, the Yupp credits on the scratch cards typically ranged from zero to around 250, though they occasionally went higher. Every 1,000 credits can be exchanged for $1, and users can cash out a maximum of $10 a day and $50 a month.
Not sure where to start while testing this web app, I turned to the range of pre-written topics flickering beneath Yupp's prompt bar, which spanned from news topics, like David Hogg leaving the DNC, to ideas for image-creation prompts, like generating a crochet-looking surfer. (Yupp's models can generate text or images.) I eventually chose to have the bots explain different perspectives on the current Los Angeles protests, curious how they would pull from news reports and other sources to analyze a political issue. Yupp notified me that generating this answer would cost 50 of my 8,639 Yupp credits; users have to spend credits to make credits on Yupp.

It generated two answers: one from Perplexity's Sonar on the left side, and one from an 'AI agent' for news built by Yupp on the right. AI agents are buzzy right now; they're basically task-based AI programs that can perform a string of simple operations on your behalf when given a single prompt. The output based on Perplexity's model answered the question citing five online sources, including CBS News and a YouTube video uploaded by the White House titled 'Third-World Insurrection Riots on American Soil.' The answer generated by the news agent cited twice as many sources, including the socialist magazine Jacobin and MSNBC, and included more context about what Los Angeles mayor Karen Bass has been doing.

I clicked the button saying I preferred that generation and gave my feedback, which Yupp anonymizes before aggregating. A shiny card resembling a lottery scratcher popped up afterwards, and I used my mouse to scratch it off. I got a measly 68 credits for that feedback, not exactly a windfall. But since I spent 50 credits to run the prompt, it put me up by 18 credits.
After about an hour of messaging with the chatbot about different topics and giving my feedback on the models, my total accrued credits equaled about $4. The cash-out options include PayPal and Venmo, but also cryptocurrencies like Bitcoin and Ethereum. 'Crypto and stablecoin allow us to instantly reach anywhere in the world,' Gupta says.

While I didn't earn much money, the free outputs did include answers generated by newly released models that are often locked behind subscription paywalls. If someone wants to use a free chatbot and doesn't mind the friction of providing feedback as the web interface flips between models, Yupp could be worth trying out. During the demo, Gupta asked Yupp where the WIRED office was located. Both models spit out wrong answers initially, though subsequent tries got it right. Still, he sees the side-by-side outputs as potentially helpful for users who are concerned about AI-generated errors, which are still quite prevalent, and want another point of comparison.

"'Every AI for everyone' is kind of our tagline," says Gupta. "We have organized all the AI models we can find today." Yupp's website encourages developers to reach out if they want their language or image model added to the options. The company doesn't currently have any deals with AI model builders; it provides these responses by making API calls. Every time someone uses Yupp, they are participating in a head-to-head comparison of two chatbot models, and sometimes getting a reward for providing their feedback and picking a winning answer. Basically, it's a user survey disguised as a fun game. (The website has lots of emoji.)

Gupta sees the data trade-off here as more explicit than in past consumer apps like Twitter, which, he's quick to tell me, he joined as its 27th employee; one of that company's cofounders, Biz Stone, is now among his backers. 'This is a little bit of a departure from previous consumer companies,' he says.
'You provide feedback data, that data is going to be used in an anonymized way and sent to the model builders.' Which brings us to where the real money is: selling human feedback to AI companies that desperately want more data to fine-tune their models. 'Crowdsourced human evaluations is what we're doing here,' Gupta says. He estimates the amount of cash users can make will add up to enough for a few cups of coffee a month. But this kind of data labeling, often called reinforcement learning from human feedback in the AI industry, is extremely valuable for companies as they release iterative models and fine-tune the outputs. It's worth far more than the bougiest cup of coffee in all of San Francisco.

The main competitor to Yupp is a website called LMArena, which is quite popular with AI insiders for getting feedback on new models, and for bragging rights if a new release rises to the top of the pack. Whenever a powerful model is added to LMArena, it often stokes rumors about which major company is trying to test its new release in stealth. 'This is a two-sided product with network effects of consumers helping the model builders,' Gupta says. 'And model builders, hopefully, are improving the models and submitting them back to the consumers.'

He shows me a beta version of Yupp's leaderboard, which goes live today and includes an overall ranking of the models alongside more granular data. The rankings can be filtered by demographic information users share during the sign-up process, like their age, or by a particular prompt category, like health care-related questions.

Near the end of our conversation, Gupta brings up artificial general intelligence, the theory of superintelligent, human-like algorithms, as a technology he believes is imminent. 'These models are being built for human users at the end of the day, at least for the near future,' he says.
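Yupp hasn't said how its leaderboard is computed, but leaderboards built from head-to-head votes are commonly derived from Elo-style ratings, an approach LMArena popularized for chatbots, where each pairwise preference nudges the winner's score up and the loser's down. A minimal sketch of one such update (an assumption about the general technique, not Yupp's actual code):

```python
def elo_update(r_a, r_b, winner_a, k=32):
    """Update two models' ratings after one head-to-head comparison."""
    # Expected score of A given the current rating gap
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if winner_a else 0.0
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Equal ratings: the winner gains exactly k/2 points
print(elo_update(1000, 1000, winner_a=True))  # (1016.0, 984.0)
```

Upsets move ratings more than expected wins, which is why a new model beating established ones climbs the board quickly.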
It's a fairly common belief, and marketing point, among people working at AI companies, even as many researchers still question whether the underlying technology behind large language models will ever be able to produce AGI. Gupta wants Yupp users, who may be anxious about the future of humanity, to envision themselves as actively shaping these algorithms and improving their quality. 'It's better than free, because you are doing this great thing for AI's future,' he says. 'Now, some people would want to know that, and others just want the best answers.' And even more users might just want extra cash and be willing to spend a few hours giving feedback during their chatbot conversations. I mean, $50 is $50.


Android Authority
This forgotten Google app let me explore Tokyo like an open world game, and it's surreal
Calvin Wankhede / Android Authority

When I'm planning a trip to a new destination, I try to get a feel for the place before I leave, especially if I'm going abroad. It's not that I'm trying to mitigate culture shock; it's just helpful to know what the areas I'm visiting actually look like. I prefer knowing ahead of time how walkable the area around my hotel is and what kind of restaurants are nearby. Google Maps is of course my first choice for this task, and I've spent hours mindlessly tapping away. But there's a better way.

Nearly a decade ago, Google quietly released an app that lets you roam around the entire world and actually feel like you're standing in it. I'm talking about Google Earth VR, an app designed for the first generation of PC headsets that still works on modern hardware. It may not be the most accessible version of Google Earth, but it's a one-of-a-kind experience that has stuck with me and still offers first-person immersion like nothing else out there.

Google Earth, but you're the main character

What makes Google Earth VR so special? The app wastes no time demonstrating: the welcome tour opens with you suspended high above Arches National Park in Utah at the crack of dawn. Press a button and time accelerates until the sun rises fully and the red rocks beneath are bathed in a warm glow. You get a few seconds to gaze at the endless vista surrounding you, but it's not long before you're dropped in the middle of Tokyo, surrounded by tall buildings instead. Finally, the tour shuttles you through a series of notable landmarks around the world, each at a different time of day. By the end, you're familiarized with the controls and left to explore. This is where Google Earth VR shines: you can fly up to float above entire countries or shrink down to ground level and strain your neck to see the tops of the tallest buildings.
You have the option to rotate the world around you, change perspectives, or gradually drift through a city like a drone. The movement is smooth and you always feel like you're in control. Google Earth VR lets you stand in the middle of a dense city or fly through it like a drone.

The app really comes into its own in a city where Google has collected detailed 3D imagery, like Tokyo, New York, or Paris. Scale yourself down to street level and it feels like you're physically there, just without the crowds or traffic. But even the best photogrammetry looks like an AI-generated mess when you're within spitting distance of it: buildings turn mushy, cars look melted, and fine detail disappears. Luckily, the folks at Google employed a genius solution. Moving a controller up to your head switches you into Street View. There's no better way to explain it than to say it's like standing inside a Street View photo. You're at human height, free to look around naturally. Better yet, you can teleport a few feet at a time to the next Street View image or cross the street to see a different perspective. You can 'walk' through an entire city this way, ironically emulating an open world game.

Street View in VR: A game changer for travel

If you're unable to travel due to mobility limitations, Google Earth VR might be the closest alternative out there, and not in a gimmicky way. It offers a level of presence that flat screens simply can't match. From your own home, you can stand at the base of the Eiffel Tower, hover over the Golden Gate Bridge, or drift above the Sydney Opera House. The app includes a built-in list of famous landmarks that you can visit instantly. Even better, you're not stuck with whatever lighting conditions the real world had when the imagery was captured.
With a flick of your wrist, you can rotate the sun's position in the sky to change the time of day, casting long shadows from buildings or nailing the desolate look of a desert at midnight. Google Earth VR lets you visit real-world addresses, pop into Street View, and move around at human scale.

Even as a frequent traveler, I find Google Earth VR incredibly handy, because it lets me input any address, just like the Maps app on my phone, and quickly zero in on a particular location. If I want to see what the walk from my hotel to the subway looks like for an upcoming trip, I can simply fly down to ground level and enter Street View. I did exactly this before leaving for Malaysia last year, and walking around in Street View helped me realize that the city was far more car-centric than its Asian neighbors. It also showed me that one route to my hotel was much more accessible than another. With this information, I picked a different mode of transport that required some more walking but didn't force me to cross an eight-lane road.

Of course, you can't get a true feel for a neighborhood without live traffic or pedestrian activity; this sadly isn't Microsoft Flight Simulator with its moving cars. But you can still glean a lot about a place just by looking for clues in the environment. Are shops open during the day? Is the sidewalk well maintained? The list goes on. Even for the fundamentals of trip planning, Google Earth VR beats passively browsing maps on a phone. And though the app hasn't received any major updates over the years, it still pulls the latest Street View images from Google's servers. Any businesses or storefronts you see in the app, you will likely still encounter in the real world.
See the Earth before Google sunsets it forever

I first tried Google Earth VR years ago on an original Oculus Rift, back when true virtual reality was still a novelty and required drilling three infrared cameras into my wall. Even then, the sense of scale and freedom it offered was impressive. But I more or less forgot about it, until I picked up a Meta Quest 3 last year and decided to revisit my old Oculus game library.

Google Earth VR doesn't run natively on the Quest, though. You'll need a VR-capable PC with a decent GPU and either a USB-C cable or a good router for wireless streaming; your PC renders the app and streams the output to the headset. It may seem like a janky solution, but it's the only practical way for most people to experience this app in 2025.

Google Earth VR hasn't been updated in years, and I fear it's on borrowed time. Setup complexity and hardware cost aside, the app has never looked better than on the current crop of headsets. The improved visual fidelity makes it an almost surreal experience that I think everyone should try. And yet, the app sits on the verge of abandonment today. I was a bit surprised that it still pulls live 3D and Street View data from Google's servers, but that access could stop at any moment. So if you're even remotely curious and can still get your hands on the hardware, go see the world while it's still online.


Android Authority
Google Home's latest bug: Setting an alarm for this time is nearly impossible
Ryan Haines / Android Authority

TL;DR: A recent Google Home bug prevents smart speakers and displays from setting alarms for 12:30 a.m. Google Assistant-equipped Home devices like the Nest Hub and the Nest Audio recognize standard alarm command phrasing, like 'Okay Google, set an alarm for twelve thirty a.m.,' but they set the alarm for 12:30 p.m. instead. Saying 'zero zero thirty' aloud appears to be the only working voice input for setting an alarm for 12:30 a.m.

Google Home is no stranger to the occasional bug, from bricked Nest Hubs to broken Thermostat commands. To Google's credit, the company does roll out updates that fix the issues, but they are often embarrassing to have occurred in the first place. You can now add another bug to the list, with Google Home smart speakers and smart displays strangely unable to set one very specific alarm.

Reddit user ReddBroccoli points out that their Google Assistant-equipped Nest Hub fails to set an alarm for 12:30 a.m. Strangely, no matter which variation you try, the Nest Hub will set an alarm for 12:30 p.m. instead. The Reddit user posted a video showing their Google Home smart display's inability to set the alarm.

I tried the basic alarm command 'Hey Google, set an alarm for twelve thirty a.m.' on my Google Nest Audio, and sure enough, Google Assistant kept setting an alarm for 12:30 p.m. Even saying 'half past midnight' didn't work. The only phrasing that works for this particular time is saying 'zero zero thirty' out loud. You'd think the smart speaker or display had been set to accept time only in the 24-hour military format, but my Nest Audio replies, 'Okay, alarm set for 12:30 a.m.,' acknowledging the 12-hour system. The Reddit user mentions the command used to work, so something changed in the backend recently. We've contacted Google for comment on the issue and will keep you updated when we learn more.
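Midnight is a classic edge case in time parsing: in 12-hour notation, 12:30 a.m. corresponds to 00:30 in 24-hour time, and naive AM/PM handling that forgets to wrap the 12 o'clock hour produces exactly this kind of 12:30 p.m. mix-up. Whatever changed in Google's backend, the correct conversion is straightforward, as this Python sketch shows (purely illustrative; it has nothing to do with Google's actual code):

```python
from datetime import datetime

def to_24h(time_str):
    """Convert a 12-hour clock string like '12:30 AM' to 24-hour 'HH:MM'."""
    # %I is the 12-hour hour, %p the AM/PM marker; strptime handles
    # the midnight wrap-around (12 AM -> hour 0) correctly.
    return datetime.strptime(time_str, "%I:%M %p").strftime("%H:%M")

print(to_24h("12:30 AM"))  # 00:30 -- the 12 a.m. hour wraps to 00, not 12
print(to_24h("12:30 PM"))  # 12:30
```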
Until then, we recommend using your phone to set the alarm, as your Google Assistant-equipped smart home clearly isn't up to the task. Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info; it's your choice.