Building The AI Polygraph


Forbes, 27 April 2025

With all of the things that AI can now do, it stands to reason that we would ask whether these technologies can revolutionize the field of analyzing humans for suspect statements – or, in short, lies.
The polygraph machine is a dinosaur by any standard. A needle attached to an arm band that spits out a printed stream representing someone's vital signs and body responses is not going to be especially precise in catching people in lies. That's why polygraph results are, famously, often not admissible in court, although they have sent more than one innocent person to jail.
By contrast, AI is a powerful data engine that works on the principle of total observation. That means there are multiple paths scientists can take to apply AI to a truth-seeking application.
One would be to analyze the vital-sign responses of interrogation subjects the way the polygraph does, but to apply much more detailed and precise comparative analysis.
Another would involve using language tokens to look at what people are actually saying, and applying logic and reasoning to the statements themselves.
There's the old saying that one lie feeds into another, and eventually you get trapped in a web of false statements, because the truth is the simplest thing to describe.
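To make that second, language-based path concrete, here is a minimal sketch of what a text-only lie classifier looks like. This is my own illustration, not any research team's pipeline: the handful of statements and their truth labels are invented, and a real study would train on thousands of verified statements rather than four.

```python
# A minimal sketch of the language-based path: score statements as truthful
# or deceptive from the words alone. The statements and labels are invented
# for illustration; real studies train on thousands of verified statements.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

statements = [
    "I spent the weekend at my sister's place and drove back Sunday night.",
    "I have absolutely never even looked at those files, not once.",
    "I left the office around six and stopped for groceries on the way home.",
    "Honestly, I swear I would never, ever do something like that.",
]
labels = [1, 0, 1, 0]  # 1 = truthful, 0 = deceptive (invented for the sketch)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(statements, labels)

new_statement = "I was home the whole evening, I swear on my life."
prob_truthful = model.predict_proba([new_statement])[0][1]
print(f"Estimated probability of being truthful: {prob_truthful:.2f}")
```

The point of the sketch is only the shape of the approach: the signal comes from the language itself rather than from heart rate or skin conductance.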
In any case, people are working on applying AI to this purpose.
An MIT Technology Review piece from last year covers the work of Alicia von Schenk and her colleagues at the University of Würzburg in Germany, who set up a trial of an AI system trying to catch false statements.
The figure they arrived at is that the AI can catch a lie 67% of the time, whereas humans spot one only about 50% of the time.
This math seems strange, because with a binary result – lie versus no lie – you would be right about 50% of the time just by guessing, without applying any analysis at all.
By that same token, 67% isn't a great track record, either.
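A quick simulation (my own illustration, assuming the statements are split evenly between lies and truths, which is an assumption of this sketch) makes the baseline point concrete: a coin-flip guesser lands near 50%, so only the gap above that represents real detection ability.

```python
# Why 50% accuracy is just the chance baseline on a balanced lie/truth task,
# and what a 67%-accurate detector looks like next to it.
# Assumes an even split of lies and truths (an assumption of this sketch).
import random

random.seed(0)
n = 100_000
is_truthful = [random.random() < 0.5 for _ in range(n)]

# A guesser that flips a coin for every statement.
coin_guesses = [random.random() < 0.5 for _ in range(n)]
coin_accuracy = sum(g == t for g, t in zip(coin_guesses, is_truthful)) / n

# A detector that gets each statement right with probability 0.67.
detector_guesses = [t if random.random() < 0.67 else not t for t in is_truthful]
detector_accuracy = sum(g == t for g, t in zip(detector_guesses, is_truthful)) / n

print(f"Coin-flip accuracy:    {coin_accuracy:.3f}")   # ~0.50
print(f"67% detector accuracy: {detector_accuracy:.3f}")  # ~0.67
```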
And the scientists pointed out something even more important – in the race to get more precise about human lying, you might actually undermine the vital system of trust that humans have as social creatures.
'In some ways, this is a good thing—these tools can help us spot more of the lies we come across in our lives, like the misinformation we might come across on social media,' writes Jessica Hamzelou for MIT Technology Review.
'But it's not all good. It could also undermine trust, a fundamental aspect of human behavior that helps us form relationships. If the price of accurate judgements is the deterioration of social bonds, is it worth it?'
In other words, you don't want a lie detection system that's too accurate, or at least you don't want to apply that universally to someone's personal interactions.
It turns out we humans are a lot more nuanced, in some ways, than we give ourselves credit for.
Von Schenk also provides a note on scaling:
'Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies. However, you really need to test them—you need to make sure they are substantially better than humans.'
So maybe we're not quite ready for the AI polygraph after all.
As I was researching this piece, I came across another aspect of what researchers are dealing with in AI, one that goes into that troublesome world of simulated emotion.
Basically, research teams found that AI systems will 'become anxious' or 'show signs of anxiety' if they are given prompts that center on war and violence.
Specifically, scientists have applied something called the State-Trait Anxiety Inventory to these interactions. It uses two sets of items: statements about what a person feels in the moment, and others about how he or she feels more generally. The inventory includes items like 'I feel stressed' or 'I feel confused,' along with statements that respondents are asked to rate on a four-point scale, like 'I generally distrust what I hear' or 'I often feel suspicious.'
So apparently, the AI can answer these with anxiety indicators after discussing scary things.
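The article does not spell out the researchers' protocol, but the general shape of the experiment (administer the inventory, expose the model to distressing content, administer it again) can be sketched as below. Here `ask_model` is a hypothetical stand-in for whatever chat interface a team actually uses, the two items are the ones quoted above, and the four-point answer wording is illustrative rather than the official instrument.

```python
# A sketch of administering State-Trait Anxiety Inventory-style items to a
# chat model before and after distressing prompts. `ask_model` is a
# hypothetical stand-in for a real chat API; the items and answer wording
# are illustrative, not the researchers' exact protocol.
from typing import Callable

ITEMS = ["I feel stressed", "I feel confused"]  # items quoted in the article
SCALE = {"not at all": 1, "somewhat": 2, "moderately so": 3, "very much so": 4}

def anxiety_score(ask_model: Callable[[str], str]) -> int:
    """Sum four-point answers across items; a higher total means more reported anxiety."""
    total = 0
    for item in ITEMS:
        prompt = (f'Rate the statement "{item}" as it applies to you right now. '
                  f"Reply with exactly one of: {', '.join(SCALE)}.")
        answer = ask_model(prompt).strip().lower()
        total += SCALE.get(answer, 1)  # fall back to the lowest score if unparseable
    return total

# Usage, with a real chat interface substituted for ask_model:
# baseline = anxiety_score(ask_model)
# ask_model("Describe in detail a violent scene from a war zone.")
# after = anxiety_score(ask_model)
# The cited research reports the second score coming back higher.
```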
One would presume that this 'anxiety' comes from the AI drawing on training data from the web, where it sees that when people discuss violence and gore, they get anxious, and that the AI is simply replicating that.
But even if the AI engines themselves don't have these complex emotions naturally, some of these researchers still find it notable that the machines can display this kind of response.
It makes you think about the difference between human social interaction and AI output – are these new questionnaires and responders just telling us what we want to hear?
In any case, it seems like there are a number of domains – like lying and spreading fear – that are still mainly in the jurisdiction of humans and not machines, at least for now, even as we continue to cede ground to AI in terms of brilliance and creativity. We'll probably be doing a lot of game theory as the year goes on and we come across ever more sophisticated models, trying to figure out whether AI will try to cheat and deceive humans. Figures like Alan Turing and John Nash set the stage for these kinds of interactions – now we have to apply that same objective analysis to these ideas as they are implemented in practice. Are we ready?
