CEO Brian Chesky says Airbnb is going to become an AI-first app with agents that can book trips for you

Not only does Airbnb want to be the "everything app" — where users can book literally everything, from accommodations to experiences and services — it also wants to do the booking for you.
Brian Chesky, cofounder and CEO of Airbnb, laid out his vision for the travel app's AI-powered future during the company's second-quarter earnings call on Wednesday. Airbnb beat revenue expectations for the quarter and announced a $6 billion stock buyback, but said it expected slower growth in the third quarter. The stock was down more than 6% in after-hours trading.
"Over the next couple of years, I think what you're going to see is Airbnb becoming an AI-first application," Chesky said on the call with analysts.
He added that currently "almost none" of the top 50 apps in the App Store are AI apps, with the notable exception of OpenAI's ChatGPT. But he predicted that soon every one of them will be AI apps, either AI startups or pre-generative AI apps that successfully transform into "native AI" apps. That's the transformation that he says is underway at Airbnb.
Chesky said Airbnb's approach to utilizing AI has differed from that of some other travel companies in that it has not focused on using AI for travel planning and inspiration. Instead, the company has rolled out AI in customer service, with a custom agent built on 13 different models and trained on tens of thousands of conversations.
As a result of the AI customer service chatbot, he said Airbnb has reduced the number of hosts and guests who need to contact a human agent by 15%.
Chesky said the AI agent is going to become more personalized throughout the next year and that it will be able to take more actions on behalf of the user.
"It will not only tell you how to cancel your reservation, it will know which reservation you want to cancel. It can cancel it for you and it can be agentic as in it can start to search and help you plan and book your next trip," he added.
Airbnb declined to provide additional comment when reached by Business Insider.
In February, Chesky said on Airbnb's fourth-quarter earnings call that he thought it was still too early to use AI for trip planning, but that he believed AI would eventually have a "profound impact on travel."
Chesky also said in February that he wants to make Airbnb the Amazon of travel, or a one-stop shop for "all of your traveling and living needs."
Airbnb in May announced it was relaunching its Experiences business and launching Services, which allows users to book on-site professionals like photographers or massage therapists.
On the call Wednesday, Chesky said he was "very bullish" on Experiences and that the feedback so far has been positive.

Related Articles

Here Are GPT-5 Prompt Engineering Insights Including Crucial AI Prompting Tips And Techniques

Forbes

19 minutes ago


In today's column, I provide GPT-5 prompt engineering tips and techniques that will aid in getting the best outcomes when using this newly released generative AI. I'm sure that just about everyone by now knows that OpenAI finally released GPT-5, doing so after a prolonged period of immense and wildly fantastical speculation about what it would be like. Well, now we know what it is (see my in-depth review of GPT-5 at the link here). The bottom line is that GPT-5 is pretty much akin to all the other generative AI and large language models (LLMs) when it comes to doing prompting. The key is that if you want to ensure that GPT-5 works suitably for your needs, you must closely understand how GPT-5 differs from prior OpenAI products. GPT-5 has distinctive features and functionality that bring forth new considerations about composing your prompts. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Top-notch prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI.

Prompting Is Still Tried And True

The first place to begin when assessing GPT-5 from a prompt engineering perspective is that prompts are still prompts. Boom, drop the mic. I say that somewhat irreverently. Here's the deal. There was prior conjecture that perhaps GPT-5 would turn the world upside down when it came to using prompts. The floated ideas of how GPT-5 might conceivably function were astounding and nearly out of this world ('it will read your mind', 'it will know what you want before you even know', etc.). The truth is now known.
GPT-5 is essentially a step up from ChatGPT and GPT-4, but otherwise you do prompting just like you've done all along. There isn't a new kind of magical way to write prompts. You are still wise to compose prompts as you've been doing since the early days of contemporary generative AI. To clarify, I am emphasizing that you should astutely continue to write clearly worded prompts. Be direct. Don't be tricky. Write prompts that are long enough to articulate your question or task at hand. Be succinct if possible. Definitely don't be overly profuse or attempt to be complicated in whatever your request is. And so on. Those are all golden rules and remain perfectly intact when using GPT-5. I am confident that all the specialized prompt engineering techniques that I've previously covered will generally work appropriately with GPT-5. Some might require a tweak or minor refinement, but otherwise, they are prudent and ready to go (see my list at the link here).

Auto-Switching Can Be A Headache

We can next consider how to artfully accommodate GPT-5 by composing prompts that it will efficiently and effectively act on. The biggest aspect of GPT-5 that entails both good news and bad news is that OpenAI decided to include an auto-switcher. This is a doozy. It will require you to potentially rethink some of your prompting, since it is quite possible that GPT-5 won't make the right routing decisions on your behalf. Allow me a moment to explain the quandary. It used to be that you had to choose which of the various OpenAI products you wanted to use for a particular situation at hand. There had been an organic expansion of OpenAI's prior models, in the sense that there were GPT-4o, GPT-4o-mini, OpenAI o3, OpenAI o4-mini, GPT-4.1-nano, and so on. When you wanted to use OpenAI's AI capabilities, you had to select which of those available models to utilize. It all depended on what you were looking to do. Some were faster, some were slower.
Some were deeper at certain classes of problems, others were shallower. It was a smorgasbord that required you to pick the right one for your task at hand. The onus was on you to know which of the models was particularly applicable to whatever you were trying to do. It could be a veritable hit-and-miss process of selection and tryouts. GPT-5 has now uplifted those prior versions into new GPT-5 submodels, and the overarching GPT-5 model makes the choice of which submodel might be best for whatever problem or question you happen to ask. The good news is that, depending on how your prompts are worded, there is a solid chance that GPT-5 will select a submodel that will do a bang-up job of answering your prompt. The bad news is that the auto-switcher might choose a less appropriate submodel. Oops, your answer will not be as sound as if the more appropriate submodel had been chosen. Worse still, each time that you enter a prompt or start a new conversation, the auto-switcher might switch you to some other submodel, back and forth, doing so in a wanton fashion. It can make your head spin, since the answers can vary dramatically.

Craziness In Design

The average user probably won't realize that all these switcheroo mechanics are happening behind the scenes. I say that because GPT-5 doesn't overtly tell you that it is taking these actions. It just silently does so. I appreciate that the designers apparently assumed that no one would care or want to know what is going on under the hood. The problem is that those who are versed in using AI and are up to speed on prompting are being bamboozled by this hidden behavior. A savvy user can almost immediately sense that something is amiss. Frustratingly, GPT-5 won't let you directly control the auto-switching. You cannot tell the AI to use a particular submodel.
You cannot get a straight answer if you ask GPT-5 which submodel it intends to use on your prompt. It is perhaps like trying to get the key to Fort Knox. GPT-5 refuses to play ball. Users have complained vociferously that something needs to be done about this lack of candor regarding the model routing that is occurring. Sam Altman posted on X suggesting that OpenAI is going to make some changes on this front (see his X post of August 8, 2025). The thing is, we can applaud the desire for a seamless, unified experience, but it is similar to having an automatic transmission in a car. Some drivers are fine with an automatic transmission, but other, more seasoned drivers want to know what gear the car is in and be able to select the gear they think is most suitable for their needs.

Prompting GPT-5 For Routing

As the bearer of bad news, I should also add that the auto-switching comes with another said-to-be-handy internal mechanism that decides how much processing time will be devoted to your entered prompt. Again, you have no particular say in this. It could be that the prompt gets tons of useful processing time, or maybe the time is shortchanged. You can't especially control this, and the settings are not within your grasp (as an aside, if you are a developer using the API, you have more leeway in dealing with this; see the OpenAI GPT-5 System Card for the technical details). Let me show you what I've been doing about this exasperating situation. First, here is a mapping of the prior models to the GPT-5 submodels:

The GPT-5 submodels are considered successors and depart from the earlier models in various ways. That being said, they are still roughly on par as to the relative strengths and weaknesses that previously prevailed. I will show you what I've come up with to try and sway the GPT-5 auto-switcher.

Prompting With Aplomb

Suppose I have a prompt that I believe would have worked best on GPT-4o.
But I am using GPT-5, thus I am not using GPT-4o, plus OpenAI has indicated that it will sunset the prior models, so you might as well get used to GPT-5. Darned if you cannot simply tell GPT-5 to use gpt-5-main (i.e., realizing that gpt-5-main is now somewhat comparable to GPT-4o, per my chart above). The AI will either tell you it doesn't function that way, or it might imply that it will do as you ask, yet then do something else. Bow to the power of the grand auto-switcher. This eerily reminds me of The Matrix. Anyway, we need to somehow convince GPT-5 to do what we want, but we must do so with aplomb. Asking straightaway isn't a viable option. Swaying the AI is our best option at this ugly juncture. In the specific case of my wanting to use gpt-5-main, here is a prompt that I use and that seems to do the trick (much of the time):

It appears that by emphasizing the nature of what I want GPT-5 to do, it is possible to sway the direction in which the auto-switcher will route my next prompt. Not only will I possibly get the submodel that I think is the best choice for the prompt, but observe that I also made a big deal about the depth of reasoning that ought to take place. This potentially helps to kick the AI into giving an allotment of processing time that it, by enigmatic means, might otherwise have inadvertently shortcut (OpenAI refers to processing time as so-called "thinking time," an anthropomorphizing of AI that I find to be desperate and despairing). I am not saying this sway-related prompting guarantees a result. After trying a bunch of times, though, it seemed to be working as hoped. I came up with similar prompts for each of the other GPT-5 submodels. If there is enough interest expressed by readers, I will do a follow-up with those details. Be on the watch for that upcoming coverage. On a related note, I will also soon be covering the official GPT-5 Prompting Guide that OpenAI has posted, along with their Prompt Optimizer tool.
Those are aimed primarily at AI developers and not especially at day-to-day, ordinary prompting in GPT-5.

Watch Out That Writing Is Enhanced

On the writing side of things, GPT-5 has improvements in a myriad of writing aspects. The ability to generate poems is enhanced. Greater depth of writing, and the AI being able to craft more compelling stories and narratives, seems to be an added plus. My guess is that the everyday user won't discern much of a difference. A more seasoned user is bound to notice that the writing has gotten an upgrade. I suppose it is something like getting used to a third grader and now being conversational with a sixth grader. Or something like that. I use this prompt to get GPT-5 to be closer to the way it was in the GPT-4 series:

That seems to get me the kind of results that I used to see. It is not an ironclad method, but it generally works well. I realize that some people are going to scream loudly that I ought not to suggest that users revert to the GPT-4 writing style. We should all accept and relish the GPT-5 writing style. Are we going backwards by asking GPT-5 to speak like GPT-4? Maybe. I grasp the angst. It's up to you, and I'm not at all saying that everyone should use this prompting tip. Please use it at your personal discretion.

Lies And AI Hallucinations

OpenAI claims that GPT-5 is more honest than prior OpenAI models, and that it is less likely to hallucinate (hallucination is yet another misappropriated word used in the AI field, describing instances when the AI produces fictionalized responses that have no bearing in fact or truth). I suppose it might come as a shock to some people that AI has been and continues to lie to us; see my discussion at the link here. I would assume that many people have heard or even witnessed that AI can make things up, i.e., produce an AI hallucination.
The worry is that AI hallucinations are so convincing in their appearance of realism, and the AI has such an aura of confidence and rightness, that people are misled into believing false statements and, at times, embrace its crazy assertions. See more at the link here. A presumed upbeat consideration is that GPT-5 apparently reduces the lying and reduces the AI hallucinations. The downbeat news is that neither is zero. In other words, it is still going to lie and still going to hallucinate. This might happen on a less frequent basis, but it nonetheless remains a chancy concern. Here is my prompt to help try and further reduce the odds of GPT-5 lying to you:

Here is my prompt to help further reduce the odds of GPT-5 incurring a so-called hallucination:

My usual caveats apply, namely, these aren't surefire, but they seem to be useful. The crucial motto, as always, is that if you use generative AI, make sure to remain wary and alert. One other aspect is that you would be shrewd to use both of those prompts together, so that you can simultaneously try to strike down both the lying and the hallucinations. If you only use one of those prompts, the other, unresolved side can still arise. Try to squelch both. It's your way of steering out of the range of double trouble.

Personas Are Coming To The Fore

I've repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so. For example, you might tell the AI to pretend to be Abraham Lincoln. The AI will respond based on having pattern-matched on the writings of Lincoln and the writings about Lincoln. It is instructive and useful for students and learners.
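The persona mechanism just described is ordinary prompting, and the column's own persona prompt is not reproduced in this text. As a hedged illustration only, here is one way such a prompt could be assembled; the `persona_prompt` helper and its exact wording are my own assumptions, not the author's prompt:

```python
def persona_prompt(figure: str, task: str) -> str:
    """Build a simple persona-invoking prompt.

    The phrasing is illustrative; any clear instruction to adopt a
    persona, plus a reminder to stay in character, tends to suffice.
    """
    return (
        f"For this conversation, adopt the persona of {figure}. "
        f"Answer in the voice, vocabulary, and perspective of {figure}, "
        "staying in character unless I say 'drop persona'. "
        f"Here is my question: {task}"
    )

# Example usage:
print(persona_prompt("Abraham Lincoln",
                     "What does leadership demand in a crisis?"))
```

The helper merely keeps the persona boilerplate consistent across conversations; typing an equivalent sentence directly into the chat works just as well.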
I even showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals; see the link here. OpenAI has indicated they are selectively making available a set of four new preset personas, consisting of Cynic, Robot, Listener, and Nerd. Each persona acts according to its name; the AI shifts into a mode reflecting that type of personality. The good news is that I hope this spurs people to realize that personas are built-in functionality, easily activated via a simple prompt. It doesn't take much work to invoke a persona. Here is my overall prompt to get a persona going in GPT-5:

Use personas with due caution. I mention this because some people get rather lost in a conversation where the AI is pretending to be someone. It isn't real. You aren't somehow tapping into the soul of that actual person, dead or alive. Personas are pretenses, so keep a clear head accordingly.

Prompt Engineering Still Lives

I hope that these important prompting tips and insights will boost your results when using GPT-5. One last comment for now. You might know that some have fervently claimed that prompt engineering is a dying art and that no one will need to write prompts anymore. I've discussed in great depth the automated prompting tools that try to do the prompting for you (see my aforementioned list of prompt engineering strategies and tactics). They are good and getting better, but we are still immersed in the handwriting of prompts and will continue down this path for quite a while to come. GPT-5 abundantly reveals that to be the case. A final remark for now. It has been said that Mark Twain wryly commented, when a newspaper reported him as deceased, that the audacious claim was a bit exaggerated. That was smarmily tongue-in-cheek. I would absolutely say the same about prompt engineering. It's here. It isn't disappearing. Keep learning about prompting. You'll be glad that you spent the prized time doing so.
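As a closing illustration of the routing-sway technique described above, here is a minimal Python sketch. Two caveats: the successor mapping is my paraphrase of the submodel naming in OpenAI's GPT-5 System Card (an assumption, since the column's own chart is not reproduced here), and the steering sentences are illustrative stand-ins for the author's elided prompts, not his actual wording:

```python
# Rough prior-model-to-submodel successor mapping, paraphrased from
# OpenAI's GPT-5 System Card (an assumption, not the column's chart):
SUBMODEL_SUCCESSORS = {
    "GPT-4o": "gpt-5-main",
    "GPT-4o-mini": "gpt-5-main-mini",
    "OpenAI o3": "gpt-5-thinking",
    "OpenAI o4-mini": "gpt-5-thinking-mini",
    "GPT-4.1-nano": "gpt-5-thinking-nano",
}

def sway_prompt(task: str, want_deep_reasoning: bool) -> str:
    """Prepend steering text intended to nudge the auto-switcher.

    There is no documented guarantee this works; as the column notes,
    emphasizing the task's nature and the desired depth of reasoning
    merely seems to influence the routing much of the time.
    """
    if want_deep_reasoning:
        steer = ("This task requires careful, step-by-step reasoning. "
                 "Please think it through deeply before answering. ")
    else:
        steer = ("This is a quick, conversational request; "
                 "a fast, lightweight answer is fine. ")
    return steer + task

# Example: steer toward a deeper-reasoning submodel for a hard task.
print(sway_prompt("Plan a proof strategy for this theorem.", True))
```

Developers using the API have more direct control than chat users, since reasoning depth is an explicit request parameter there (per the System Card), which is why this prompt-level nudging matters mainly in the consumer app.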

Sam Altman says the AI talent war is a bet that a ‘medium-sized handful of people' will make superintelligence breakthroughs

Yahoo

an hour ago


Amid the cutthroat war for AI talent, tech giants are offering astronomical sums to lure a tiny pool of top engineers from rivals. OpenAI CEO Sam Altman expects the market for these geniuses to remain intense, but estimated there are 'many thousands of people' capable of making key discoveries in superintelligence who could conceivably be found. The amounts of money being offered to hire AI geniuses are mind-boggling, as tech giants like Meta, Microsoft, Google, and OpenAI fight over a tiny talent pool in their race to achieve the next breakthrough. And the cutthroat competition doesn't look like it will ease anytime soon. 'Definitely this is the most intense talent market I have seen in my career,' Altman told CNBC on Friday. 'But if you think about the economic value being created by these people and how much we're spending on compute, you know, maybe the market stays like this. I'm not totally sure what's gonna happen, but it is a crazy intense comp for a very small number of people right now.' Exactly how small is that group of people, and what do they know that others don't, CNBC's Andrew Ross Sorkin asked. 'The bet, the hope is they know how to discover the remaining ideas to get to superintelligence—that there are going to be a handful of algorithmic ideas and, you know, medium-sized handful of people who can figure them out,' Altman replied. That would help explain the astronomical amounts companies are willing to spend to poach AI talent, with one offer reportedly topping $1 billion. Altman said in June that Meta had been making 'giant offers to a lot of people on our team,' some totaling '$100 million signing bonuses and more than that [in] compensation per year.' Meta is also investing $14.3 billion in Scale and hired the startup's CEO, Alexandr Wang, for a superintelligence team.
While immense fortunes are being thrown at a handful of top engineers, Altman estimated the number of people smart enough to make superintelligence breakthroughs is actually much, much larger. 'I bet it's much bigger than people think, but you know some companies in the space have decided that they're going to go after a few shiny names,' he told CNBC. 'I think there's probably many thousands of people that we could find and probably tens of thousands or hundreds of thousands of people in the world that are capable of doing this kind of work.'

David Sacks says the doomsday scenario of AI wiping out jobs is 'overhyped'

Yahoo

3 hours ago


Artificial general intelligence hinges on the idea that AI can reach human levels of reasoning. White House AI and crypto czar David Sacks says we're not quite there yet. The investor said AI still needs human input in order to provide "meaningful business value." Sacks is throwing cold water on the AGI hype train. AI companies are racing to achieve AGI, or artificial general intelligence, commonly considered a form of AI that can reach human levels of reasoning. The continued advancement of AI has led some to believe that the technology will cause a large-scale wipeout of jobs, or even worse outcomes, such as human extinction. Sacks, a tech investor who has backed major companies such as Airbnb, Facebook, and Uber, wrote in an X post on Saturday that AI hasn't progressed as quickly as many have predicted; specifically, the idea that AI will "self-improve" and rapidly achieve "godlike superintelligence" has been blown out of proportion. "None of this is to gainsay the progress. We are seeing strong improvement in quality, usability, and price/performance across the top model companies. This is the stuff of great engineering and should be celebrated," Sacks wrote in his post. "It's just not the stuff of apocalyptic pronouncements. Oppenheimer has left the building." One of the doomsday scenarios Sacks rejected in his post is the fear that AI will lead to massive job losses. The investor said that has yet to pan out, since AI relies on a lot of human input for prompts and for verification. "This means that apocalyptic predictions of job loss are as overhyped as AGI itself," he said. "Instead, the truism that 'you're not going to lose your job to AI but to someone who uses AI better than you' is holding up well." Sacks isn't the only AGI naysayer. Google Brain cofounder Andrew Ng said at a June Y Combinator talk that "AGI has been overhyped" and that "there'll be a lot of things that humans can do that AI cannot."
Google CEO Sundar Pichai said in a Lex Fridman podcast that he likes to use the term AJI, or "artificial jagged intelligence," to describe the current phase of AI — one that is remarkably intelligent but can still make basic mistakes. Sacks did not respond to a request for comment. Read the original article on Business Insider
