Can Kayak.ai Solve Travel's ‘Complex Problems'?


Skift · 4 days ago

Kayak's new AI trip planning experience — the latest of several experiments — is easy to use and seems to work without too many glitches. Now the trick is getting people to use it.
Kayak was among the first travel companies to embrace generative AI in 2023, launching a plugin for ChatGPT. That plugin has since been discontinued, but the company has released a handful of other experiments, including a travel search chatbot and a price-checking tool.
Last month, Kayak appointed Yaron Zeidman as its new chief technology officer. Zeidman, previously CTO at Agoda – another Booking Holdings brand – is overseeing the launch of Kayak's latest initiative: a new AI-powered trip planning site called Kayak.ai.
The trip planner on Kayak.ai is built around a chatbot that can search Kayak's flights and hotels. When a user clicks a flight or hotel option in the chatbot, it opens a link to Kayak.com.
'It's still hard to say what exactly the end goal is because I think that all the new AI is still shaping and changing …'

Related Articles

The Role Of Leaders When AI Can Know Everything

Forbes · 4 hours ago

As much as AI can do and will do, there is something it cannot do that remains a crucial role for leaders—and it's not what we think.

When personal computers went mainstream in the 1980s, a euphoria around what they could do was met by contrarians pointing out their limitations. Sure, they could do analytical tasks better than humans, but not intuitive, strategic tasks, such as playing chess. And then computers were built to become chess champions. Yes, they could reason faster than humans but not coordinate movement tasks. And then computers were built into deft manufacturing robots and human prosthetics. Sure, they could do what humans programmed them to do, but they couldn't outlearn their programming. And then AI shredded that assumption.

As ChatGPT burst onto the scene, a similar euphoria erupted around what it could do, again followed by contrarians pointing out limitations. Sure, it could learn and regurgitate case law faster than a human, but it also made up cases. Then AI was improved to show sources. Yes, AI can write content better than many people and faster than all, but it's wreaking havoc with publishing and educational practices. Then AI was developed to detect AI-developed content. And so on. We have decades of experience telling us that the edges we suppose to machine intelligence are but the starting point for the next X-prize. And so it is now as we witness the explosive use of AI in good hands and bad—arguably even its own hands—with command of knowledge and networked effects we cannot even imagine.

It is with full recognition of this history of underestimating what machines can do that I suggest there is yet something AI cannot do, no matter how much it knows nor how powerful it becomes. And that is to be a living antenna and transformer for sensing and manifesting futures in which life flourishes. This is the crucial and uniquely human role for leaders when AI can know everything. It is an energetic or spiritual role: to sense the zeitgeist, the field, the emerging future, the collective unconscious, God-Source, the Way, or universal Mind—however we name it—and from this place of resonant connection, through collaboration and using all tools available, including AI, conduct that future into the present.

I'm certainly not alone in suggesting there's a quality of human intelligence that supersedes AI, nor that AI could be mighty dangerous in the wrong hands or in charge of itself. The earliest pioneers in machine intelligence, such as Marvin Minsky, saw risk not in whether such intelligence could be achieved, but in having no way to ensure it would act in our best interests. The Australian Risk Policy Institute, part of a global risk advisory network, argues for the importance of AI augmenting human intelligence, not replacing it. Numerous tech companies have been party to AI pledges promising responsible and ethical AI development and use. And dominant players, such as Google, have also walked away from those pledges or subjugated them to a winner-take-all race to dominate the AI industry. Mo Gawdat, a former AI leader at Google, in a mind-bending interview on Diary of a CEO, sees the biggest threat facing humanity today as humanity itself in the age of the machines, with all of our ignorance, greed and competition. 'This is an arms race,' Gawdat says, with 'no interest in what the average human gets out of it…every line of code being written in AI today is to beat the other guy.'
AI is an exponential amplifier of the mindset with which it's being created, trained and deployed. As covered in Closing The Great Divides, when that mindset is based in dualism, that is, separation within oneself, self from others or self from the environment, it propagates that separation and resultant suffering in what it creates. For example, it will create businesses that exploit the environment, social systems that create big winners and many losers, or economic policies whereby the rich get vastly richer. Add to this the amplification of AI and the effects are so extreme that it gives even the tech titans pause to ponder the ethics of it all.

While dualism is the norm in our culture (in which AI has been created) and embedded in our subject-object language (on which AI has been trained), it is not the greatest truth for the human being. Human leaders are capable of a kind of merge or flow state that goes by many descriptions: unity consciousness, interbeing, samadhi, mystical union, being one-with, or simply being the whole picture. This one-withness is the essence of Zen Leadership. When leaders operate from this state of connection, they propagate a sense of care for the whole, for example, in businesses that take care of the environment, social systems that help people thrive, or economic policies that respect limits. Such leaders create flourishing futures.

So, while there are many areas in which AI will far exceed human capacity, it is poorly equipped to sense connection at the depth available to a human being. Moreover, this isn't just another limit that will be superseded by the next generation of AI. Machine intelligence itself grew out of living in our heads—disconnected from the wisdom of the body—and equating intelligence with our thoughts, as in Descartes' dictum: 'I think, therefore I am.' We failed to realize that the very thoughts 'I' thinks and the language it uses to express them are how 'I' keeps its ego-centric game going. Modeling computers and AI on how we think and talk propagated this mindset of separation, first replicating the mind's left-brain logic, then advancing to more holistic pattern recognition associated with the right brain.

By contrast, the human being has a very different origin. We come from one living cell, through which the entire evolutionary journey played out in our development from gilled sea-creatures to lunged air-breathers. We embody antennae for a whole spectrum of consciousness whereby the universe has revealed itself to itself from the beginning of life, from five basic senses to thought consciousness, to ego consciousness, to collective consciousness. In the collective field, we are able to sense the energy of relationships, of opportunities in crises, of ideas not yet taken form, which is the playground from which skillful leaders bring an emerging future into the present.

While AI can reproduce the veneer of human experience—even vastly accelerate and improve upon some aspects of it—it has not lived those experiences. Just as reading The Adventures of Tom Sawyer is not the same as being Tom Sawyer, AI's training in the language of human experience is not the same as living those experiences. Even though AI can talk a good game about being one-with by regurgitating things it has read, it has no physical basis for experiencing one-withness. It lacks the antenna. AI may have sensors or network connections to feed its semiconductors, silicon wafers, transistors, software and so forth.
But it does not vibrate or resonate with the field the way a human body does. It does not have the complexity or fractal quality of life and hence cannot support the same expansive consciousness. Opinions vary in the field as to whether AI has consciousness (or 'interiority') at all. But even if we grant that everything has consciousness commensurate with its complexity, AI is far less complex than a human being.

That said, AI is already superior to humans at knowing what there is to know. It has thoroughly commoditized knowledge; being the 'smartest person in the room' is no longer a necessary or useful role for human leaders. Far more useful and necessary are practices for connection, which are part of contemplative, embodied wisdom traditions, as in Zen Leadership, for literally resonating one-with others, one-with the environment, one-with the emerging future. Through our connected selves, we bridge AI knowledge and universal wisdom.

For sure, such bridging will not be the main AI narrative anytime soon. AI development and use are likely to be dominated by the billionaire footrace we see now, with even shadier characters at the margins and AI itself in the not-too-distant future. But wise, connected AI development and use can serve as a vein of gold through the detritus of disruption and destruction of the coming years, manifesting the priceless role of humanity in the evolution of consciousness. AI can only serve a flourishing future for life if it is helped along by living beings connected with the same purpose. It is the essential human leadership opportunity in an era when AI can know and do most everything else.

As Mo Gawdat concludes, there are several inevitables with AI: (1) AI will happen, (2) AI will be smarter than us, and (3) AI will replace many of our jobs. But what he also concludes is that it's smarter to create from a place of abundance rather than scarcity, which is another way of saying create from a place of infinitely resourced connection rather than the scarcity of a separate self. That is the most important role a leader can play, and we are living at a most pivotal time in which to play it.

Why do lawyers keep using ChatGPT?

The Verge · 4 hours ago

Every few weeks, it seems like there's a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, 'bogus AI-generated research.' The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don't exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven't they stopped?

The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren't necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don't understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a 'super search engine.' It took submitting a filing with fake citations to reveal that it's more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers. 'I think that what we're seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn't mean that these tools don't have enormous possible benefits and use cases for the delivery of legal services,' Perlman said.

Legal databases and research systems like Westlaw are incorporating AI services. In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they've used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research 'case law, statutes, forms or sample language for orders.' The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said 'exploring the potential for implementing AI' at work is their highest priority. 'The role of a good lawyer is as a 'trusted advisor' not as a producer of documents,' one respondent said.

But as plenty of recent examples have shown, the documents produced by AI aren't always accurate, and in some cases aren't real at all. In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included 'significant misrepresentations and misquotations of supposedly pertinent case law and history,' Judge Kathryn Kimball Mizelle, of Florida's Middle District, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately let Burke's lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he 'assumes sole and exclusive responsibility for these errors.'
Rasch said he used the 'deep research' feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw's AI feature.

Rasch isn't alone. Lawyers representing Anthropic recently admitted to using the company's Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an 'inaccurate title and inaccurate authors.' Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock's filing included 'two citation errors, popularly referred to as 'hallucinations,'' and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. 'I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn't exist,' Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. 'I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers' judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,' Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. 'Even before the emergence of generative AI, lawyers would file documents with citations that didn't really address the issue that they claimed to be addressing,' Perlman said. 'It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don't properly check them; they don't really see if the case has been overturned or overruled.' (That said, the cases do at least typically exist.)

Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. 'I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,' Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He's also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the 'baseline definition' of what deepfakes are and then 'I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,' Kolodin told The Guardian at the time.
Kolodin said he 'may have' discussed his use of ChatGPT with the bill's main Democratic cosponsor but otherwise wanted it to be 'an Easter egg' in the bill. The bill passed into law.

Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they're real. 'You don't just typically send out a junior associate's work product without checking the citations,' said Kolodin. 'It's not just machines that hallucinate; a junior associate could read the case wrong, it doesn't really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.'

Kolodin said he uses both ChatGPT Pro's 'deep research' tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, it has a higher hallucination rate than ChatGPT, which he says has 'gone down substantially over the past year.'

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys' use of LLMs and other AI tools. Lawyers who use AI tools 'have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature' of generative AI, the opinion reads. The guidance advises lawyers to 'acquire a general understanding of the benefits and risks of the GAI tools' they use — or, in other words, to not assume that an LLM is a 'super search engine.' Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers' use of AI. 'I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,' he said. 'I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don't.'

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. 'Even with recent advances,' Wilner wrote, 'no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.'

Gen Z is increasingly turning to ChatGPT for affordable on-demand therapy, but licensed therapists say there are dangers many aren't considering

Yahoo · 4 hours ago

Examples of people using ChatGPT for therapy have proliferated online, with some claiming that talking to a chatbot every day has helped them more than years of therapy. Licensed professionals say that while AI could be helpful in aiding work with a licensed therapist, there are countless pitfalls to using ChatGPT for therapy.

ChatGPT has turned into the perfect therapist for many people: It's an active 'listener' that digests private information. It appears to empathize with users, some would argue, as well as professionals can. Plus, it costs a fraction of the price of most human therapists. While many therapists charge up to $200—or even more—per one-hour session, you can have unlimited access to ChatGPT's most advanced models for $200 per month. Yet, despite the positive anecdotes you can read online about using ChatGPT as a therapist, as well as the convenience of having a therapist that's accessible via almost any internet-enabled computer or phone at any time of day, therapists warn ChatGPT can't replace a licensed professional.

In a statement to Fortune, a spokesperson for ChatGPT-maker OpenAI said the LLM often suggests seeking professional advice to users who discuss topics like personal health. ChatGPT is a general-purpose technology that shouldn't serve as a substitute for professional advice, according to its terms of service, the spokesperson added.

On social media, anecdotes about the usefulness of AI therapy are plentiful. People report the algorithm is level-headed and provides soothing responses that are sensitive to the nuances of a person's private experiences. In a viral post on Reddit, one user said ChatGPT has helped them 'more than 15 years of therapy.' The patient, whose identity could not be confirmed by Fortune, claimed that despite previous experience with inpatient and outpatient care, it was daily chats with OpenAI's LLM that best helped them address their mental health. 'I don't even know how to explain how much this has changed things for me. I feel seen. I feel supported. And I've made more progress in a few weeks than I did in literal years of traditional treatment,' the user wrote.

In a comment, another user got to the root of AI's advantages over traditional therapy: its convenience. 'I love ChatGPT as therapy. They don't project their problems onto me. They don't abuse their authority. They're open to talking to me at 11pm,' the user wrote. Others on Reddit noted that even the most upgraded version of ChatGPT at $200 per month was a steal compared to the more than $200 per session for traditional therapy without insurance.

Alyssa Peterson, a licensed clinical social worker and CEO of MyWellBeing, said AI therapy has its drawbacks, but it may be helpful when used alongside traditional therapy. Using AI to help work on tools developed in therapy, such as battling negative self-talk, could be helpful for some, she said. Using AI in conjunction with therapy can help a person diversify their approach to mental health, so they're not using the technology as their sole truth. Therein lies the rub: Relying too heavily on a chatbot in stressful situations could hurt people's ability to deal with problems on their own, Peterson said. In acute cases of stress, being able to deal with and alleviate the problem without external help is healthy, Peterson added.
But AI can, in some cases, outperform licensed professionals with its compassionate responses, according to research from the University of Toronto Scarborough published in the journal Communications Psychology. Chatbots aren't affected by the 'compassion fatigue' that can hit even experienced professionals over time, the study claims. Despite its endurance, an AI chatbot may be unable to provide more than surface-level compassion, one of the study's co-authors noted.

AI responses also aren't always objective, licensed clinical social worker Malka Shaw told Fortune. Some users have developed emotional attachments to AI chatbots, which has raised concerns about safeguards, especially for underage users. In the past, some AI algorithms have also provided misinformation or harmful information that reinforces stereotypes or hate. Shaw said because it's impossible to tell the biases that go into creating an LLM, it's potentially dangerous for impressionable users.

In Florida, the mother of 14-year-old Sewell Setzer sued an AI chatbot platform for negligence, among other claims, after Setzer committed suicide following a conversation with a chatbot on the platform. Another lawsuit, filed in Texas, claimed a chatbot on the same platform told a 17-year-old with autism to kill his parents. A spokesperson for the platform declined to comment on pending litigation. The spokesperson said any chatbots labeled as 'psychologist,' 'therapist,' or 'doctor' include language that warns users not to rely on the characters for any type of professional advice. The company has a separate version of its LLM for users under the age of 18, the spokesperson added, which includes protections to prevent discussions of self-harm and redirect users to helpful resources.

Another fear professionals have is that AI could be giving faulty diagnoses. Diagnosing mental health conditions is not an exact science; it is difficult to do, even for an AI, Shaw said. Many licensed professionals need to accrue years of experience to be able to accurately diagnose patients consistently, she told Fortune. 'It's very scary to use AI for diagnosis, because there's an art form and there's an intuition,' Shaw said. 'A robot can't have that same level of intuition.'

People have shifted away from googling their symptoms to using AI, said Vaile Wright, a licensed psychologist and senior director for the American Psychological Association's office of health care innovation. As those cases demonstrate, the danger of disregarding common sense in favor of technology's advice is ever present, she said. The APA wrote a letter to the Federal Trade Commission with concerns about companionship chatbots, especially in the case where a chatbot labels itself as a 'psychologist.' Representatives from the APA also met with two FTC commissioners in January to raise their concerns before the commissioners were fired by the Trump administration. 'They're not experts, and we know that generative AI has a tendency to conflate information and make things up when it doesn't know. So I think that, for us, is most certainly the number one concern,' Wright said.

While the options aren't yet available, it is possible that, in the future, AI could be used in a responsible way for therapy and even diagnoses, she said, especially for people who can't afford the high price tag of treatment. Still, such technology would need to be created or informed by licensed professionals.
'I do think that emerging technologies, if they are developed safely and responsibly and demonstrate that they're effective, could, I think, fill some of those gaps for individuals who just truly cannot afford therapy,' she said.
