
What to know before getting into a driverless taxi
A Waymo drives past Washington Post tech columnist Geoffrey Fowler in San Francisco. (Photo by Amy Osborne for The Washington Post)
The idea of a car that drives itself might feel futuristic. But for an increasing number of Americans, that future is already here. Waymo and other companies have announced plans to expand in cities across the U.S.
And the Trump administration has signaled that it wants to pave the way for autonomous vehicle companies to expand. Last month, it loosened rules around driverless cars.
Martine Powers talks with Washington Post tech columnist Geoffrey Fowler and reporter Lisa Bonos about what it's like to live in a city full of driverless cars – and what you should know before getting in one.
Today's show was produced by Emma Talkoff. It was edited by Lucy Perkins and mixed by Sam Bair.
Subscribe to The Washington Post here.