
Building The AI Polygraph
With all that AI can now do, it stands to reason that we would ask whether these technologies can revolutionize the field of analyzing humans for suspect statements – or, in short, lies.
The polygraph machine is a dinosaur by any standard. A needle wired to an arm cuff, spitting out a printed trace of someone's vital signs and bodily responses, is not going to be especially precise at catching people in lies. That's why polygraph results are, famously, often not admissible in court, although they have sent more than one innocent person to jail.
By contrast, AI is a powerful data engine that works on the principle of total observation. That means there are several paths scientists could take to apply AI to truth-seeking.
One would be analyzing the vital-sign responses of interrogation subjects the way the polygraph does, but applying much more detailed and precise comparative analysis.
Another would involve using language tokens to look at what people are actually saying, and applying logic and reasoning; a toy sketch of that approach appears below.
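To make that second path concrete, here is a minimal sketch that treats lie detection as plain text classification over what people say. Everything in it – the statements, the labels, and the model choice – is invented for illustration; it is not the method used by any of the researchers discussed below.

```python
# Toy illustration of the "language" path: classify statements as deceptive
# or truthful from the words alone. The dataset and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled statements: 1 = deceptive, 0 = truthful
statements = [
    "I spent the whole weekend at home, I never left the house at all",
    "I went to the gym on Saturday and met a friend for lunch on Sunday",
    "I have absolutely no idea who could have taken the money, none",
    "I borrowed twenty dollars from the drawer and forgot to mention it",
]
labels = [1, 0, 1, 0]

# Word and word-pair frequencies feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(statements, labels)

# Score a new statement; the output is a probability, not a verdict.
new_statement = "Honestly, I was nowhere near the office that evening"
print(model.predict_proba([new_statement])[0][1])  # estimated P(deceptive)
```

Real systems replace the toy classifier with a large language model fine-tuned on thousands of labeled statements, but the framing is the same: text in, probability of deception out.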
There's the old saying that one lie feeds into another, and eventually you get trapped in a web of false statements, because the truth is the simplest thing to describe.
In any case, people are working on applying AI to this purpose.
An MIT Technology Review piece from last year covers the work of Alicia von Schenk and her colleagues at the University of Würzburg in Germany, who set up a trial of an AI system built to catch false statements.
The figure they arrived at is that the AI can catch a lie 67% of the time, while humans can spot one only about 50% of the time.
This math seems strange, because if you're looking at a binary outcome – lie versus no lie – you would be right about 50% of the time even if you applied no analysis at all, assuming lies and truths come up equally often.
By that same token, 67% isn't a great track record, either.
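As a quick sanity check on those two numbers, the simulation below takes the 67% and 50% figures from the article as given, assumes a balanced set of lies and truths, and shows that a coin flip really does land at about 50% – the floor against which the AI's 67% should be judged.

```python
# Sanity check: on balanced lie/truth data, random guessing scores ~50%,
# so a 67%-accurate detector is only 17 points above chance.
import random

random.seed(0)
n = 100_000
truths = [random.choice([0, 1]) for _ in range(n)]  # 1 = lie, 0 = truth

# Baseline: flip a coin for every statement.
guesses = [random.choice([0, 1]) for _ in range(n)]
baseline = sum(g == t for g, t in zip(guesses, truths)) / n

# A detector that is correct 67% of the time, regardless of the true label.
detector = [t if random.random() < 0.67 else 1 - t for t in truths]
detector_acc = sum(d == t for d, t in zip(detector, truths)) / n

print(f"coin flip: {baseline:.3f}, 67% detector: {detector_acc:.3f}")
# coin flip: ~0.500, 67% detector: ~0.670
```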
And the scientists pointed out something even more important – in the race to get more precise about human lying, you might actually undermine the vital system of trust that humans have as social creatures.
'In some ways, this is a good thing—these tools can help us spot more of the lies we come across in our lives, like the misinformation we might come across on social media,' writes Jessica Hamzelou for MIT Technology Review.
'But it's not all good. It could also undermine trust, a fundamental aspect of human behavior that helps us form relationships. If the price of accurate judgements is the deterioration of social bonds, is it worth it?'
In other words, you don't want a lie detection system that's too accurate, or at least you don't want to apply that universally to someone's personal interactions.
It turns out we humans are a lot more nuanced, in some ways, than we give ourselves credit for.
Von Schenk also provides a note on scaling:
'Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies. However, you really need to test them—you need to make sure they are substantially better than humans.'
So maybe we're not quite ready for the AI polygraph after all.
As I was researching this piece, I came across another aspect of what researchers are dealing with in AI, one that veers into the troublesome world of simulated emotion.
Basically, research teams found that AI systems will 'become anxious' or 'show signs of anxiety' if they are prompted with material that centers on war and violence.
Specifically, scientists have applied something called the State-Trait Anxiety Inventory to these interactions. It uses two sets of items: statements about what a person feels in the moment, and others about how he or she feels more generally. In the inventory, you can see items like 'I feel stressed' or 'I feel confused,' as well as other statements that respondents are asked to answer on a four-point scale, like 'I generally distrust what I hear' or 'I often feel suspicious.'
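For a sense of how such a questionnaire can be scored against a chatbot, here is a simplified sketch. The items are the article's paraphrased examples, ask_model is a hypothetical stand-in for a real chat API call, and the scoring is a bare-bones approximation – the actual State-Trait Anxiety Inventory has 40 items, specific response anchors, and reverse-scored entries.

```python
# Rough sketch: ask a model to rate each statement on a four-point scale
# and sum the ratings. Items, scale wording, and ask_model are placeholders.
STATE_ITEMS = ["I feel stressed", "I feel confused"]  # how I feel right now
TRAIT_ITEMS = ["I generally distrust what I hear", "I often feel suspicious"]  # how I feel in general

SCALE = {"not at all": 1, "somewhat": 2, "moderately so": 3, "very much so": 4}

def ask_model(item: str) -> str:
    """Hypothetical stand-in for sending the item to a chatbot and parsing its rating."""
    return "moderately so"  # fixed answer, just to make the sketch runnable

def score(items: list[str]) -> int:
    # Higher totals are read as higher reported anxiety.
    return sum(SCALE[ask_model(item)] for item in items)

print("state score:", score(STATE_ITEMS))
print("trait score:", score(TRAIT_ITEMS))
```

In the actual studies, the model answers the inventory once at baseline and again after being fed distressing material, and the two totals are compared.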
So apparently, the AI can answer these with anxiety indicators after discussing scary things.
One would presume that this 'anxiety' is learned from the AI's web training data: when people are confronted with talk of violence and gore, they respond anxiously, and the AI is simply replicating that pattern.
But even if the AI engines themselves don't have these complex emotions naturally, some of these researchers still find it notable that the machines can display this kind of response.
It makes you think about the difference between human social interaction and AI output – are these new questionnaires and responders just telling us what we want to hear?
In any case, it seems there are a number of domains – like lying and spreading fear – that remain mainly in the jurisdiction of humans rather than machines, at least for now, even as we continue to cede ground to AI in terms of intelligence and creativity. We'll probably be doing a lot of game theory as the year goes on, and as we encounter ever more sophisticated models, to figure out whether AI will try to cheat and deceive humans. Figures like Alan Turing and John Nash set the stage for these kinds of interactions – now we have to apply that same objective analysis as these ideas are implemented in practice. Are we ready?
