AI travel startup Airial has built a tool that can plan your holiday in seconds

Mint · 4 days ago
Planning a holiday often means juggling flights, hotels, local transport and activities across multiple websites or paying a premium to a travel agency to do it for you. Now, a wave of AI travel startups is working to change that. With artificial intelligence gaining traction in the travel industry, companies are racing to build tools that can handle end-to-end trip planning.
One such player is Airial, a travel-tech startup founded by two former Meta engineers, Archit Karandikar and Sanjeev Shenoy, that promises to simplify the process. Its AI-powered platform creates personalised itineraries within seconds, covering everything from bookings to restaurant suggestions, all in one place.
Airial lets users input basic travel details, such as starting and ending locations, to generate a full itinerary. The platform includes flights, hotel bookings, local transport, restaurant suggestions, and tourist attractions. Users can either set their preferences up front or make changes after viewing the suggested plan.
The interface offers an overview of the entire trip, with clickable segments for daily plans. Users can explore each activity to view location, reviews, suggestions and alternatives. The tool also includes a map view that displays all the places scheduled for the day, helping users understand the distance and time required to travel between points.
It also accounts for transit time, wait periods at stations and the possibility of nearby day trips. In addition, the platform can answer queries about specific places and tailor recommendations to user preferences.
Social media and creator integration
One of Airial's newer features allows users to add content from creators. A user can link a blog, TikTok or Instagram Reel and add locations mentioned in the content to their itinerary. The tool can also surface relevant TikTok videos based on the user's destination and preferences.
Additionally, Airial supports trip sharing and collaborative planning, and includes cars and buses among its multi-city travel options. Users can view trips created by friends and make modifications, bringing a social element to the planning process.
Founded by former Meta engineers
Airial was founded by Archit Karandikar and Sanjeev Shenoy, who were college friends in India. Karandikar previously worked in engineering roles at Meta, Google, and Waymo, focusing on AI-based products. Shenoy also worked at Meta, on the Instagram Reels team.
The founders said their shared interest in travel led them to build a product that could function like a detailed digital travel agent.
'Most platforms just help you build a rough plan. We focus on logistics, connecting dozens of APIs and factoring in multiple parameters like transfer time, hotel proximity, and transit availability,' said Karandikar in an interview with TechCrunch.
The startup's AI model is based in part on research from a DeepMind paper called AlphaGeometry, which focuses on solving complex geometry problems. Airial combines this inference method with large language models (LLMs) to create personalised travel plans.
Airial has raised $3 million in seed funding led by Montage Ventures, with participation from South Park Commons, Peak XV (formerly Sequoia India) and angel investors from companies like Meta, Dropbox, and UiPath.
The company claims to have 'tens of thousands' of monthly users. For now, its focus is on user growth rather than monetisation. In the near future, Airial plans to launch iOS and Android apps and add vertical search options for hotels, activities and influencer-generated content.

Related Articles

What is Artificial Super-intelligence? Former Google CEO Eric Schmidt warns AI will soon outsmart humanity and we are not ready

Time of India · 2 hours ago

Former Google CEO Eric Schmidt warns that Artificial Super-intelligence (ASI) could surpass collective human intelligence within six years. Speaking on the Special Competitive Studies Project podcast, Schmidt said society is vastly unprepared for this shift. He called ASI 'underhyped,' citing how current AI systems already outperform humans in programming and reasoning through recursive self-improvement.

In a world consumed by conversations around AI ethics, job losses and automation, Schmidt is raising an alarm, not about what we already know, but about what we don't yet understand. On a recent episode of the Special Competitive Studies Project podcast, Schmidt declared that ASI, a term that's still absent in most public discourse, is rapidly approaching, and society is dangerously unprepared.

Speaking with conviction and urgency, Schmidt laid out a roadmap that reads more like science fiction than emerging reality. Within the next 12 months, he believes, most programming jobs could be replaced by AI. Not only that, AI systems will be able to outpace the brightest graduate-level mathematicians in structured reasoning tasks like advanced math and coding. At the core of this shift is what he calls recursive self-improvement: AI systems that write their own code using protocols like Lean, making them exponentially more efficient with each iteration. As Schmidt explained: 'Ten to twenty percent of the code in research labs like OpenAI and Anthropic is now being written by AI itself.'

Schmidt anticipates that within three to five years, the tech world will cross the threshold of Artificial General Intelligence (AGI), a system that can match human creativity and reasoning across disciplines. But it's what comes next that he finds truly staggering. He refers to ASI, or Artificial Super-intelligence, as a leap beyond individual human intellect, something that could soon exceed the collective intelligence of all humans. 'This occurs within six years, just based on scaling,' he said, citing a growing consensus among Silicon Valley's top thinkers, what he terms the 'San Francisco Consensus.'

Yet, unlike most headlines that exaggerate the risks of AI, Schmidt's stance is paradoxically sobering because it highlights how little attention this seismic shift is receiving. Despite ASI being potentially the most transformative force in human history, Schmidt believes it is severely under-discussed. 'People do not understand what happens when you have intelligence at this level, which is largely free,' he said. The worry, for Schmidt, isn't just about what AI can do, but about how unprepared our legal, ethical and governance systems are to accommodate it. 'There's no language for what happens with the arrival of this,' Schmidt warned. 'This is happening faster than our society, our democracy, our laws will interact.'

As AI continues its meteoric rise, the predictions made by Eric Schmidt pose a dual challenge. On one hand, humanity stands on the brink of a new technological renaissance; on the other, we risk spiraling into uncharted waters without a map. 'Super Intelligence isn't a question of if, but when,' Schmidt seems to say, and the fact that we're not talking about it enough may be the biggest threat of all. Whether or not society is ready, Artificial Super-intelligence is no longer a distant theory. According to one of tech's most influential figures, it's knocking at our door. And if we don't start preparing, we might not be the ones answering.

What is Artificial Super-intelligence? Former Google CEO Eric Schmidt warns AI will soon outsmart humanity and we are not ready

Economic Times · 2 hours ago


Technobabble: We need a whole new vocabulary to keep up with the evolution of AI

Mint · 7 hours ago

The artificial intelligence (AI) news flow does not stop, and it's becoming increasingly obscure and pompous. China's MiniMax just spiked efficiency and context length, but we are not gasping. Elon Musk says Grok will 'redefine human knowledge,' but is that a new algorithm or just hot air? Andrej Karpathy's 'Software 3.0' sounds clever but lacks real-world bite. Mira Murati bet $2 billion on 'custom models,' a term so vague it could mean anything. And only by testing Kimi AI's 'Researcher' did we get why it's slick and different.

Technology now sprints past our words. As machines get smarter, our language lags. Buzzwords, recycled slogans and podcast quips fill the air but clarify nothing. This isn't just messy, it's dangerous. Investors chase vague terms, policymakers regulate without definitions and the public confuses breakthroughs with sci-fi. We're in a tech revolution with a vocabulary stuck in the dial-up days. We face a generational shift in technology without a stable vocabulary to navigate it. This language gap is not a side issue. It is a core challenge that requires a new discipline: a fierce scepticism of hype and a deep commitment to the details.

The instinct to simplify is a trap. Once, a few minutes was enough to explain breakthrough apps like Google or Uber. Now, innovations in robotics or custom silicon resist such compression. Understanding OpenAI's strategy or Nvidia's product stack requires time, not sound bites. We must treat superficial simplicity as a warning sign.

Hot areas like AI 'agents' or 'reasoning layers' lack shared standards or benchmarks. Everyone wants to sell a 'reasoning model,' but no one agrees on what that means or how to measure it. Most corporate announcements are too polished to interrogate, and press releases are not proof of defensible innovation. Extraordinary claims need demos, user numbers and real-world metrics. When the answers are fuzzy, the claim is unproven.
In today's landscape, scepticism is not cynicism. It is discipline. This means we must get comfortable with complexity. Rather than glossing over acronyms, we must dig in. Modern tech is layered with convenient abstractions that make understanding easier, but often too easy. A robo-taxi marketed as 'full self-driving' or a model labelled 'serverless' demands that we look beneath the surface. We don't need to reinvent every wheel, but a good slogan should never be an excuse for missing what is critical. The only way to understand some tools is to use them. A new AI research assistant, for instance, only feels distinct after you use it, not when you read a review of what it can or cannot accomplish.

In this environment, looking to the past or gazing towards the distant future is a fool's errand. History proves everything and nothing. You can cherry-pick the dot-com bust or the advent of electricity to support any view. It's better to study what just happened than to force-fit it into a chart of inevitability. The experience of the past two years has shattered most comfortable assumptions about AI, compute and software design. The infographics about AI diffusion or compute intensity that go viral on the internet often come from people who study history more than they study the present. It's easier to quote a business guru than to parse a new AI framework, but we must do the hard thing: analyse present developments with an open mind even when the vocabulary doesn't yet exist.

This brings us to the new 'Nostradami' of artificial intelligence: a cottage industry of AI soothsaying. Over the past two years, a fresh crop of 'laws' has strutted across conference stages and op-eds, each presented as the long-awaited Rosetta Stone of AI.
We're told to obey the Scaling Law (just add more data), respect the Chinchilla Law (actually, add exactly 20 times more tokens) and reflect on the reanimated Solow Paradox (productivity still yawns, therefore chatbots are overrated). When forecasts miss the mark, pundits invoke Goodhart's Law (metrics have stopped mattering) or Amara's Law (overhype now, under-hype later). The Bitter Lesson tells us to buy GPUs (graphics processing units), not PhDs. Cunningham's Law says wrong answers attract better ones. Our favourite was when the Victorian-era Jevons Paradox was invoked to argue that a recent breakthrough wouldn't collapse GPU demand. We're not immune to this temptation and have our own Super-Moore Law; it has yet to go viral.

These laws and catchphrases obscure more than they reveal. The 'AI' of today bears little resemblance to what the phrase meant in the 1950s or even late 2022. The term 'transformer', the architecture that kicked off the modern AI boom, is a prime example. Its original 2017 equation exists now only in outline. The working internals of today's models, with flash attention, rotary embeddings and mixture-of-experts gating, have reshaped the original methods so thoroughly that the resulting equations resemble the original less than general relativity resembles Newton's laws. This linguistic mismatch will only worsen as robotics grafts cognition onto actuators and genomics borrows AI architecture for DNA editing. Our vocabulary, built for a slower era, struggles to keep up.

Beneath the noise, a paradox remains: staying genuinely current is both exceedingly difficult and easier than ever. It's difficult because terminology changes weekly and breakthroughs appear on preprint servers, not in peer-reviewed journals.
However, it's easier because we now have AI tools that can process vast amounts of information, summarise dense research and identify core insights with remarkable precision. Used well, these technologies can become the most effective way to understand technology itself. And that's how sensible investment in innovation begins: with a genuine grasp of what's being invested in.

The author is a Singapore-based innovation investor for GenInnov Pte Ltd.
