
Latest news with #languageModels

Booking Holdings Hedges Google Disruption, Wants to 'Be Close' to AI Hyperscalers

Skift • Business • 2 days ago

Booking Holdings CFO Ewout Steenbergen wants the company to "be close" to all of the major large language model developers to hedge its bets on the evolution of travel search. "We think it's really important to be close to that world to understand what is happening, be their partner, doing joint product development," Steenbergen said at the Bank of America Global Technology Conference Wednesday. "Because ultimately, those might become more leads-generating platforms, replacing traditional search …"

AI search optimization: Turning Search Engines to Answer Engines

Hospitality Net • Business • 2 days ago

There's a fundamental shift in how we think about getting information from the internet: search engines are being replaced by Answer Engines. (Did you know that this is just an excerpt from the complete and free newsletter that is available here? Sent out once a week, original viewpoints, insights and interesting things to read.)

For the past few decades, we've optimized content for search. Someone types in a query, and the search engine offers a list of links. Basically, search engines act like a sort of filtering mechanism for the data on the internet. But now, increasingly, people don't want options of what is out there. They want the answer, one synthesized response. The algorithm doesn't suggest where to look. It tells you what to know (well, hopefully people will apply some critical thinking to the LLM and ask where the data came from).

This changes how content needs to be built. In the age of search, content was created to attract clicks and make sure you're included in the filter. In the age of answers, content must be designed to provide answers. I am not an SEO expert, but the shift in mindset from search engine optimization to answer engine optimization is the key. FAQs are a good example of optimizing for this model, but maybe they need a lot more content in each answer, with more examples? Each answer you write might be the exact snippet a language model feeds to a user in response to a query (oh, how I don't want to be the webspam team of the various AI model makers right now).

This doesn't mean we throw away beautiful imagery or storytelling, especially in hospitality, where emotional resonance still drives conversions; reviews and great visual assets will remain important IMO. But we should ask: what part of this page answers something?

There's no perfect playbook yet. This isn't classic SEO. But it's becoming clear: we need to move from optimizing content to be found, to designing content so it becomes the answer. It's less about filtering the internet for links of possible answers and more about training the machines to use the content as the answer. And the earlier we start shifting our content mindset, the better prepared we'll be when those AI agents become the default front door to our websites.

Thanks for reading. Subscribe for free to receive new posts. About me: I'm a fractional CMO for large travel technology companies, helping turn them into industry leaders. I'm also the co-founder of a hotel news media outlet that is unsensational, factual and keeps hoteliers updated on the industry.
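The article stops short of concrete tactics, but one widely used way to make FAQ content easy for machines to lift as a standalone answer is structured data markup. The snippet below is a hedged illustration only: it builds a standard schema.org FAQPage JSON-LD block in Python, and the hotel questions and answers are invented placeholders rather than anything from the article.

```python
import json

# A minimal sketch of answer-engine-friendly FAQ markup using the standard
# schema.org FAQPage vocabulary. Each question/answer pair is self-contained,
# so a crawler or retrieval pipeline can quote an answer on its own.
# The questions and answers below are made-up examples.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does the hotel offer airport transfers?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Write answers as complete, standalone statements so they
                # still make sense when surfaced out of context.
                "text": "Yes. A shuttle runs every 30 minutes between 6am and "
                        "11pm, and booking 24 hours ahead is recommended.",
            },
        },
        {
            "@type": "Question",
            "name": "Is late checkout available?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Late checkout until 2pm can be requested at the front "
                        "desk and is subject to availability.",
            },
        },
    ],
}

# Embed the result in a <script type="application/ld+json"> tag on the FAQ page.
print(json.dumps(faq_jsonld, indent=2))
```

Whether any given answer engine consumes this particular markup is not guaranteed, but it is a low-cost way to make each answer an explicit, quotable unit.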

Researchers develop more efficient language model control method

Entrepreneur • Business • 4 days ago

A team of researchers has successfully developed a more efficient method to control the outputs of large language models (LLMs), addressing one of the key challenges in artificial intelligence text generation. This story originally appeared on Calendar. The breakthrough enables more effective guidance of LLMs to produce text that adheres to specific structures while maintaining accuracy.

The new approach focuses on controlling language model outputs to adhere to predetermined structures, such as programming languages, while eliminating errors that commonly plague AI-generated content. This advancement represents a significant step forward in making AI language tools more reliable for specialized applications.

Improving Structural Adherence in AI Text Generation

The research addresses a fundamental issue with large language models: their tendency to generate text that deviates from required formats or contains errors when tasked with producing structured content. By implementing more effective control mechanisms, the researchers have developed a system that maintains structural integrity throughout the generation process. For programming languages specifically, this advancement could reduce the frequency of syntax errors and logical flaws that often appear in code generated by AI systems. The method ensures that the language model adheres to the programming language's rules while generating functional code.

Technical Approach and Implementation

While specific technical details of the method were not fully outlined, the approach appears to involve guiding the language model's generation process more precisely than previous methods. Rather than simply prompting the model and hoping for correctly structured output, the new system actively steers the generation process to maintain compliance with predefined rules. This control mechanism works by (see the illustrative sketch below):

  • Monitoring the model's outputs in real time
  • Applying constraints that keep text generation within acceptable parameters
  • Correcting potential errors before they appear in the final output

Practical Applications

The improved control method opens up new possibilities for utilizing large language models in fields that require strict adherence to specific formats. Some potential applications include:

  • Software Development: Generating error-free code that adheres to the syntax rules of specific programming languages, making AI coding assistants more reliable for developers.
  • Data Formatting: Creating structured data outputs like JSON, XML or CSV files with perfect adherence to format specifications.
  • Technical Documentation: Producing documentation that follows industry-standard formats without introducing structural errors.
  • Scientific Research: Generating properly formatted research papers or reports that adhere to publication guidelines.

Future Research Directions

This advancement likely represents an early step in a broader effort to make large language models more controllable and reliable. Future research may expand on this work by:

  • Developing more sophisticated control mechanisms that can handle increasingly complex structural requirements
  • Reducing the computational overhead associated with implementing these controls, making them more accessible for wider use
  • Extending the approach to handle multiple types of structured outputs simultaneously
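The article does not give implementation details, so the following is only a toy sketch of the general pattern it describes: monitor candidate outputs, constrain generation so it stays within the required structure, and rule out errors before they reach the final text (often called constrained or grammar-guided decoding). The tiny vocabulary, the uniform stand-in "model" and the hand-written validity rule are all invented for illustration; this is not the researchers' actual method and no real LLM library is used.

```python
import random
from typing import Callable, List

# Toy illustration of the "monitor, constrain, correct before emitting" pattern.
# The uniform "model", tiny vocabulary and fixed target are invented placeholders.

VOCAB = ["{", '"key"', ":", '"value"', "}", "<eos>"]

def toy_model_scores(prefix: List[str]) -> List[float]:
    """Stand-in for an LLM's next-token scores (uniform, for illustration)."""
    return [1.0] * len(VOCAB)

def json_like_valid(prefix: List[str], token: str) -> bool:
    """Hand-written constraint: the output must spell out {"key": "value"}."""
    target = ["{", '"key"', ":", '"value"', "}", "<eos>"]
    step = len(prefix)
    return step < len(target) and token == target[step]

def constrained_generate(validator: Callable[[List[str], str], bool],
                         max_steps: int = 10) -> str:
    prefix: List[str] = []
    for _ in range(max_steps):
        scores = toy_model_scores(prefix)
        # Monitor: consider every candidate token the model proposes.
        # Constrain: keep only tokens that preserve the required structure.
        allowed = [(tok, s) for tok, s in zip(VOCAB, scores)
                   if validator(prefix, tok)]
        if not allowed:
            break  # no legal continuation; stop rather than emit an error
        tokens, weights = zip(*allowed)
        token = random.choices(tokens, weights=weights, k=1)[0]
        if token == "<eos>":
            break
        prefix.append(token)
    return " ".join(prefix)

print(constrained_generate(json_like_valid))  # -> { "key" : "value" }
```

In a real system the validity check would come from a formal grammar or schema (for example, a programming-language grammar or a JSON schema) and the scores would come from an actual model, but the masking step works the same way.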
The research highlights the growing focus on not just making AI language models more powerful, but also more precise and controllable. As these systems become increasingly integrated into professional workflows, the ability to guarantee structured, error-free outputs becomes critical. For industries that rely on structured data and formatted text, this development may signal a shift toward more practical and reliable AI assistance tools that can consistently follow rules while maintaining the creative and analytical capabilities that make large language models valuable.

Scammers can exploit your data from just 1 ChatGPT search

Fox News • Health • 28-05-2025

ChatGPT and other large language models (LLMs) have become amazing helpers for everyday tasks. Whether it's summarizing complex ideas, designing a birthday card or even planning your apartment's layout, you can get impressive results with just a simple prompt. But as helpful as these AI tools are, their convenience comes with hidden risks, especially when it comes to your personal privacy.

If you haven't tried an LLM like ChatGPT before, here's the gist: they're advanced language processors that chat with you through text. No special commands or coding needed; just type what you want to know or do, and they respond. For example, asking "Why is the conclave kept secret?" will get you a detailed explanation in seconds.

This simplicity is what makes LLMs so useful, but it also opens the door to risks. Instead of harmless questions, someone could ask for a detailed profile on a person, and the model might generate a surprisingly thorough report. While these tools have safeguards and often refuse certain requests, clever phrasing can sometimes bypass those limits. Unfortunately, it doesn't take much effort for someone to use ChatGPT to gather personal information about you. But don't worry, there are ways to protect yourself from this kind of digital snooping.

These AI tools don't just pull information out of thin air. They need to access real online sources to work. In other words, your data is already out there on the internet; AI tools just make it easier to find. And if you look at the sources, most of the information you wouldn't want shared online, like your address, relatives and so on, is made public by people-search sites. Other sources include social media, like LinkedIn and Facebook, as well as public databases. But none of them are as invasive as people-search sites.

Let's see what you can do to limit how much of your information is exposed online. To effectively safeguard your personal information from being exposed or misused, it's important to follow these steps and adopt key precautions. Although not all people-search sites are required to offer it, most of them do provide an option to request an opt-out. But that comes with a few challenges.

Where to start: Identifying people-search sites that expose your personal information

There are hundreds of people-search sites registered in the U.S. Going through each and every one is, realistically speaking, impossible. You'll need to narrow your search somehow.

Using AI tools: How to find and list data broker sites with your personal data

Use AI tools and ask them to run a deep search on yourself. It's not a perfect or complete solution; LLMs tend to shorten their responses to save resources. But it will give you a good starting point, and if you keep asking for more results, you should be able to put together a decent list of people-search sites that might have your profile.

Submitting opt-out requests: How to remove your information from people-search sites

Now, you'll have to go through each of these people-search sites and submit opt-out requests. These usually aren't complicated, but they're definitely time-consuming. The opt-out forms are typically located at the bottom of each site, in the footer. The naming can vary from "Do Not Sell My Info" to "Opt-Out" or something similar. Each people-search site is a little different. Opting out of every people-search site that exposes your personal information is a mammoth task. I've discussed it in more detail here. Alternatively, you can automate this process.
Data removal services are real time and energy savers when it comes to protecting your personal information online. The way these services work is simple. They send hundreds of data removal requests on your behalf to people-search sites you might not even know exist but are still exposing your data. And with some services, the process goes even further than that.

People-search sites aren't the only places exposing your personal information without your knowledge. In fact, they're just a small part of the larger data broker industry. There are marketing, health, financial, risk and many other types of data brokers trading your information. Your data is a commodity they use to make a profit, often without you even realizing it. Data removal services have taken on the challenge of fighting this threat to your privacy. They continuously scour the web, looking for your profiles. This way, you can just sign up and let them handle the work in the background. And here's the best part: they take about 10 minutes to set up, roughly the same time it takes to opt out of a single people-search site. And that's it. The removal process is entirely automated and requires little to no effort on your part. With this small initial effort, you may save yourself from privacy-related risks, including scams and even identity theft.

But what if your data is exposed on a people-search site not covered by any data removal service? Every removal service out there has limitations on the number of data brokers it supports. It's not about a lack of effort; it's mostly because brokers are generally unwilling to cooperate, to put it mildly. But there's a way to address this issue without going back to manual opt-outs. The top names in the data removal industry now offer custom removals. In simple terms, this means you can ask them to remove your personal information from websites not currently covered by their standard plans. The catch is that you'll need to do the research yourself and point out which sites are exposing your data. It's not as convenient as having everything done automatically, but it's a relatively minor inconvenience for the sake of your online privacy.

Being mindful of the information you provide to AI tools is the first and most crucial step in protecting your privacy. Don't share sensitive details such as your full name, home address, financial information, passwords or any other personal data that could be used to identify or harm you or others.

Protecting your AI accounts from unauthorized access helps keep your interactions and data safe. Always use strong, unique passwords and consider using a password manager to generate and store those complex passwords. Enable multifactor authentication whenever possible to add an extra layer of security. Regularly review your account permissions and remove access for any devices or applications you no longer use. Get more details about my best expert-reviewed password managers of 2025 here.

Adjusting your social media privacy settings can greatly reduce the amount of personal information available to data brokers. Make your profiles private, limit who can see your posts and be selective about accepting friend or follower requests. Periodically audit your privacy settings and remove any unnecessary third-party app connections to further minimize your exposure.

Protecting your devices with strong antivirus software adds an essential layer of security against digital threats. Antivirus programs defend against malware, phishing and identity theft.
Be sure to choose reputable software and regularly update it to stay protected against the latest threats. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

Using a dedicated email address for opt-outs and online sign-ups helps reduce spam and protects your primary email. This practice also makes it easier to track which sites and services have your contact information. If your alias email becomes compromised, you can quickly change it without disrupting your main accounts. See my review of the best secure and private email services here. Get a free scan to find out if your personal information is already out on the web.

Large language models like ChatGPT are transforming how we work, create and solve problems, but they also introduce new privacy and security risks that can't be ignored. As these tools become more powerful and accessible, it's up to each of us to take proactive steps to safeguard our personal information and understand where our data might be exposed. By staying alert and making use of available privacy tools, we can enjoy the benefits of AI while minimizing the risks.

Should OpenAI be held legally accountable when its tools are used to collect or expose private data without consent? Let us know your experience or questions by writing to us; your story could help someone else stay safe.

Grok's 'white genocide' meltdown nods to the real dangers of the AI arms race

CNN • 20-05-2025

It's been a full year since Google's AI overview tool went viral for encouraging people to eat glue and put rocks on pizza. At the time, the mood around the coverage seemed to be: Oh, that silly AI is just hallucinating again.

A year later, AI engineers have solved hallucination problems and brought the world closer to their utopian vision of a society whose rough edges are being smoothed out by advances in machine learning as humans across the planet are brought together to… Just kidding. It's much worse now.

The problems posed by large language models are as obvious as they were last year, and the year before that, and the year before that. But product designers, backed by aggressive investors, have been busy finding new ways to shove the technology into more spheres of our online experience, so we're finding all kinds of new pressure points — and rarely are they as fun or silly as Google's rocks-on-pizza glitch.

Take Grok, the xAI model that is becoming almost as conspiracy-theory-addled as its creator, Elon Musk. The bot last week devolved into a compulsive South African 'white genocide' conspiracy theorist, injecting a tirade about violence against Afrikaners into unrelated conversations, like a roommate who just took up CrossFit or an uncle wondering if you've heard the good word about Bitcoin.

xAI blamed Grok's unwanted rants on an unnamed 'rogue employee' tinkering with Grok's code in the extremely early morning hours. (As an aside in what is surely an unrelated matter, Musk was born and raised in South Africa and has argued that 'white genocide' was committed in the nation — it wasn't.)

Grok also cast doubt on the Department of Justice's conclusion that Jeffrey Epstein's death was a suicide by hanging, saying that the 'official reports lack transparency.' The Musk bot also dabbled in Holocaust denial last week, as Rolling Stone's Miles Klee reports. Grok said on X that it was 'skeptical' of the consensus estimate among historians that 6 million Jews were murdered by the Nazis because 'numbers can be manipulated for political narratives.'

Manipulated, you say? What, so someone with bad intentions could input their own views into a data set in order to advance a false narrative? Gee, Grok, that does seem like a real risk. (The irony here is that Musk, no fan of traditional media, has gone and made a machine that does the exact kind of bias-amplification and agenda-pushing he accuses journalists of doing.)

The Grok meltdown underscores some of the fundamental problems at the heart of AI development that tech companies have so far yada-yada-yada'd through anytime they're pressed on questions of safety. (Last week, CNBC published a report citing more than a dozen AI professionals who say the industry has already moved on from the research and safety-testing phases and is dead-set on pushing more AI products to market as soon as possible.)

Let's forget, for a moment, that so far every forced attempt to put AI chatbots into our existing tech has been a disaster, because even the baseline use cases for the tech are either very dull (like having a bot summarize your text messages, poorly) or extremely unreliable (like having a bot summarize your text messages, poorly).

First, there's the 'garbage in, garbage out' issue that skeptics have long warned about. Large language models like Grok and ChatGPT are trained on data vacuumed up indiscriminately from across the internet, with all its flaws and messy humanity baked in.
That's a problem because even when nice-seeming CEOs go on TV and tell you that their products are just trying to help humanity flourish, they're ignoring the fact that their products tend to amplify the biases of the engineers and designers that made them, and there are no internal mechanisms baked into the products to make sure they serve users, rather than their masters. (Human bias is a well-known problem that journalists have spent decades protecting against in news by building transparent processes around editing and fact-checking.)

But what happens when a bot is made without the best of intentions? What if someone wants to build a bot to promote a religious or political ideology, and that someone is more sophisticated than whoever that 'rogue employee' was who got under the hood at xAI last week?

'Sooner or later, powerful people are going to use LLMs to shape your ideas,' AI researcher Gary Marcus wrote in a Substack post about Grok last week. 'Should we be worried? Hell, yeah.'
