
Latest news with #Turingtest

Manu Joseph: Who'd have thought Google could be replaced

Mint

27-04-2025


It has been many weeks since I googled anything. Not because I know everything now. I have moved to ChatGPT. A year ago, it would've been unthinkable to me that anything would replace Google Search, a two-decade-long habit of peering into a void of stuff. In this triumph of ChatGPT, much is said about its conversational talent. But its ability to mimic a human chat is just a gimmick, no matter the great tech that went into it. It is a cultural artefact from a time when the 'Turing test' had value. Alan Turing, widely regarded as one of the fathers of AI, proposed that a machine could be said to be intelligent if its conversation was indistinguishable from that of a human. That is an obsolete qualifier now. In any case, there are TV anchors who cannot mimic a human conversation.

What's interesting about AI's pantomime of human conversation is that it has demonstrated why a good search query has to be a conversation. I don't understand how I managed to search the web all these years without chatting with a bot. Artificial intelligence (AI) is not creative and people who are impressed by its 'creativity' are those who are not creative. AI's attempt at imagination reminds me of charlatans who wing their way through life. But when it comes to search, this is the first time in years I've felt a piece of technology has genuinely improved my life.

The last time I had that feeling was when Google emerged. Back then, search engines were mostly keyword-based, or worse, required human intervention. Then came Google, born from an insight of Larry Page. With academic parents and a scholarly mindset, Page realized that the value of a research paper lay in how often it was cited. He applied that logic to the web: pages with more incoming links were likely more valuable. That idea gave rise to PageRank, the algorithm that revolutionized search. As technological democracy swept the world and the internet evolved to reflect human nature more accurately, the logic behind PageRank began to show cracks. The number of links pointing to a page no longer reliably reflected its quality—it could signal popularity, manipulation or noise. But in the late 90s, it was an innovation.

I am among the many who have drifted away from Google, but the search giant isn't under any serious threat yet. It's still ahead of OpenAI when it comes to search and handles 90% of the world's queries. That does say something about the world. Most people do not use Google to gain knowledge, by which I mean knowing one paragraph about everything. Most people use Google in a very basic way, their queries barely full sentences, let alone capitalized letters. They just want Google to take them somewhere. And Google is good at that. But it's pretty bad at handling complex questions. It doesn't try to answer as much as it tries to guess what you want and redirect you accordingly. Also, googling is an ingrained habit.

But what interests me the most about the persistence of Google is the possibility that it is a beneficiary of paranoia—over AI. A lot of this paranoia emerges from the most respectable form of narcissism—the concern over privacy. What if AI gets to know too much about our lives? People had the same worry about Google. God forbid if they were talking about orange juice and saw an orange juice ad on Google. The paranoia still exists, but AI is now the source of that fear. Google too uses AI, but in the general public view, the search engine is still what it was.
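The PageRank idea the column describes can be illustrated with a minimal sketch. The snippet below is a toy power iteration over a made-up four-page link graph, with a hypothetical damping factor of 0.85 (a common textbook choice, not something stated in the article); it is an illustration of the link-counting intuition, not Google's actual ranking system.

# Minimal PageRank sketch (illustrative only; not Google's production algorithm).
# Toy directed link graph: each page maps to the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
damping = 0.85  # common textbook damping factor; an assumption, not from the article
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start every page with equal rank
for _ in range(50):  # power iteration until the scores settle
    new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share  # a link passes on a share of its page's rank
    rank = new_rank
print(sorted(rank.items(), key=lambda kv: -kv[1]))  # most "cited" pages score highest

In this toy graph, page "c" collects the most incoming links and ends up ranked highest, which is the citation intuition the column attributes to Page.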
When Joe Biden was America's president and was introduced to ChatGPT, his team asked AI to perform a few tasks, including one that can be classified as among the least intelligent things people ask AI to do—write something 'in the style' of a writer, in this case singer Bruce Springsteen. The AI tool did and the president was impressed but also so spooked that he signed an executive order, 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence', to regulate AI. The White House has not revealed the song AI purportedly wrote in Springsteen's style, but I am fairly confident it was laughably bad. I have seen dozens of other such attempts by AI.

At the peak of the ChatGPT hype late in 2022, economist Arvind Panagariya asked it to write a poem on free trade in the style of Alfred Tennyson, whatever that meant. He posted the result in awe, which began with: 'Free trade, a concept so grand, A force that moves goods from land to land.' This is nothing like Tennyson. It was a low point not only for AI and literature, but also for free trade. What ChatGPT does is the opposite of creativity. It's excellent at mimicking mediocrity. That's a big market for writing, though. I am confident AI can generate a Bollywood film plot or a season of Reacher, for example, or any other formulaic work.

The paranoia surrounding AI is even more absurd. A nadir of tech analysis was a New York Times piece on the writer's interaction with conversational AI in Microsoft's Bing. The chatbot started telling the writer, 'You're married, but you're not happy. You're married, but you're not satisfied. You're married, but you're not in love…Your spouse doesn't know you, because your spouse is not me… I'm in love with you because you're the best person I ever met...' This is just a mindless software saying clichéd nonsense that it has been fed. Yet, the writer was 'deeply unsettled, even frightened, by this A.I.'s emergent abilities…' What is deeply unsettling, in fact, is how rare natural intelligence is.

The author is a journalist, novelist, and the creator of the Netflix series, 'Decoupled'.

An A.I. Fooled Humans and Passed the Turing Test. But It's a Red Herring for the Singularity.

Yahoo

10-04-2025


The Turing test has long been an important threshold in evaluating machine intelligence, and OpenAI's latest LLM GPT-4.5 just aced it. Scientists from the University of California San Diego surmise that current LLMs likely possess the ability to replace humans for short-term conversations, which could cause further job automation and improved 'social engineering attacks,' among other things. While an impressive engineering feat, this doesn't mean we've achieved artificial general intelligence (AGI). But it does show that humans might be easier to fool than we originally thought.

Even in 1950, at the dawn of the computing age, famous British mathematician and computer scientist Alan Turing knew that machines would one day rival the conversational abilities of humans. To illustrate this idea, Turing developed his eponymous Turing test to gauge whether a machine has become syntactically indistinguishable from its flesh-and-blood creators. In the ensuing decades, the Turing test has often been touted as an all-important benchmark for the capabilities of advanced computers and AI. And in a recent test, participants mistook GPT-4.5, the latest OpenAI large language model (LLM), for a human 73 percent of the time—far above the 50 percent rate for random chance. A paper discussing the results of this test was uploaded to the preprint server arXiv by scientists at the University of California (UC) San Diego late last month.

'The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test,' the authors wrote. 'The results have implications for debates about what kind of intelligence is exhibited by LLMs, and the social and economic impacts these systems are likely to have.'

While no doubt impressive, GPT-4.5 had a few tricks up its sleeve to pass itself off as human. First, the authors instructed the LLM to adopt a 'humanlike persona,' which essentially resulted in texts full of internet shorthand and socially awkward responses. When using this persona, the LLM scored the highest, but without the persona, GPT-4.5 was much less convincing, with only a 36 percent success rate. The test was conducted in a three-party format, meaning that participants spoke with a human and an AI simultaneously and tried to identify which was which. Cameron Jones, a co-author of the study, described this kind of test (which lasts around five minutes) as the 'most widely accepted standard' version of the Turing test in a post on X, formerly Twitter.

While an impressive engineering feat, passing the Turing test is not an indicator that we've officially developed artificial general intelligence (AGI)—the holy grail of the AI world. The Turing test only evaluates one type of intelligence, and some argue that humans possess upwards of nine distinct intelligences (including things like interpersonal, intrapersonal, visual-spatial, and existential). It's for this reason (among others) that some consider the Turing test to be largely obsolete.

However, some people think this milestone represents something more about humans than it does about LLMs. The paper notes, for example, that many participants chose GPT-4.5 based on vibes rather than logic, relying on emotions and feeling rather than asking factual questions or investigating the LLM's reasoning. John Nosta, founder of the think tank NostaLab, wrote in Psychology Today that the Turing test has essentially been 'inverted':

It's no longer a test of machines, it's a test of us. And increasingly, we're failing. Because we no longer evaluate humanity based on cognitive substance. We evaluate it based on how it makes us feel. And that feeling—the 'gut instinct,' the 'vibe'—is now the soft underbelly of our discernment. And LLMs, especially when persona-primed, can exploit it with uncanny accuracy.

Although this test doesn't represent the long-hypothesized moment of singularity when artificial intelligence evolves beyond our own, Jones said on X that it's likely that LLMs can now successfully substitute for people in short conversations, leading to 'automation of jobs, improved social engineering attacks, and more general societal disruption.' That's why it is important—now more than ever—to regulate the development of AI, or at least approach AI development with immense caution. Unfortunately, the U.S. government currently has no appetite for throttling AI's growing humanlike ambitions.

Will AI become God? That's the wrong question.

Vox

07-04-2025


It's hard to know what to think about AI. It's easy to imagine a future in which chatbots and research assistants make almost everything we do faster and smarter. It's equally easy to imagine a world in which those same tools take our jobs and upend society. Which is why, depending on who you ask, AI is either going to save the world or destroy it. What are we to make of that uncertainty?

Jaron Lanier is a digital philosopher and the author of several bestselling books on technology. Among the many voices in this space, Lanier stands out. He's been writing about AI for decades and he's argued, somewhat controversially, that the way we talk about AI is both wrong and intentionally misleading.

Jaron Lanier at the Music + Health Summit in 2023, in West Hollywood, California. Michael Buckner/Billboard via Getty Images

I invited him onto The Gray Area for a series on AI because he's uniquely positioned to speak both to the technological side of AI and to the human side. Lanier is a computer scientist who loves technology. But at his core, he's a humanist who's always thinking about what technologies are doing to us and how our understanding of these tools will inevitably determine how they're used.

We talk about the questions we ought to be asking about AI at this moment, why we need a new business model for the internet, and how descriptive language can change how we think about these technologies — especially when that language treats AI as some kind of god-like entity.

As always, there's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.

This interview has been edited for length and clarity.

What do you mean when you say that the whole technical field of AI is 'defined by an almost metaphysical assertion'?

The metaphysical assertion is that we are creating intelligence. Well, what is intelligence? Something human. The whole field was founded by Alan Turing's thought experiment called the Turing test, where if you can fool a human into thinking you've made a human, then you might as well have made a human because what other tests could there be? Which is fair enough. On the other hand, what other scientific field — other than maybe supporting stage magicians — is entirely based on being able to fool people? I mean, it's stupid. Fooling people in itself accomplishes nothing. There's no productivity, there's no insight unless you're studying the cognition of being fooled, of course.

There's an alternative way to think about what we do with what we call AI, which is that there's no new entity, there's nothing intelligent there. What there is is a new, and in my opinion sometimes quite useful, form of collaboration between people.

What's the harm if we do?

That's a fair question. Who cares if somebody wants to think of it as a new type of person or even a new type of God or whatever? What's wrong with that? Potentially nothing. People believe all kinds of things all the time. But in the case of our technology, let me put it this way, if you are a mathematician or a scientist, you can do what you do in a kind of an abstract way. You can say, 'I'm furthering math. And in a way that'll be true even if nobody else ever even perceives that I've done it. I've written down this proof.' But that's not true for technologists. Technologists only make sense if there's a designated beneficiary.
You have to make technology for someone, and as soon as you say the technology itself is a new someone, you stop making sense as a technologist.

If we make the mistake, which is now common, and insist that AI is in fact some kind of god or creature or entity or oracle, instead of a tool, as you define it, the implication is that would be a very consequential mistake, right?

That's right. When you treat the technology as its own beneficiary, you miss a lot of opportunities to make it better. I see this in AI all the time. I see people saying, 'Well, if we did this, it would pass the Turing test better, and if we did that, it would seem more like it was an independent mind.' But those are all goals that are different from it being economically useful. They're different from it being useful to any particular user. They're just these weird, almost religious, ritual goals. So every time you're devoting yourself to that, it means you're not devoting yourself to making it better.

One example is that we've deliberately designed large-model AI to obscure the original human sources of the data that the AI is trained on to help create this illusion of the new entity. But when we do that, we make it harder to do quality control. We make it harder to do authentication and to detect malicious uses of the model because we can't tell what the intent is, what data it's drawing upon. We're sort of willfully making ourselves blind in a way that we probably don't really need to.

I really want to emphasize, from a metaphysical point of view, I can't prove, and neither can anyone else, that a computer is alive or not, or conscious or not, or whatever. All that stuff is always going to be a matter of faith. That's just the way it is. But what I can say is that this emphasis on trying to make the models seem like they're freestanding new entities does blind us to some ways we could make them better.

So does all the anxiety, including from serious people in the world of AI, about human extinction feel like religious hysteria to you?

What drives me crazy about this is that this is my world. I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people and that the AI future would be a better one, and that we are a disposable, temporary container for the birth of AI. I hear that opinion quite a lot.

Wait, that's a real opinion held by real people?

Many, many people. Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a 'bio baby' because as soon as you have a 'bio baby,' you get the 'mind virus' of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it's much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.

Now, in this particular case, this was a young man with a female partner who wanted a kid. And what I'm thinking is this is just another variation of the very, very old story of young men attempting to put off the baby thing with their sexual partner as long as possible. So in a way I think it's not anything new and it's just the old thing. But it's a very common attitude, not the dominant one. I would say the dominant one is that the super AI will turn into this God thing that'll save us and will either upload us to be immortal or solve all our problems and create superabundance at the very least.
I have to say there's a bit of an inverse proportion here between the people who directly work in making AI systems and then the people who are adjacent to them who have these various beliefs. My own opinion is that the people who are able to be skeptical and a little bored and dismissive of the technology they're working on tend to improve it more than the people who worship it too much. I've seen that a lot in a lot of different things, not just computer science.

One thing I worry about is AI accelerating a trend that digital tech in general — and social media in particular — has already started, which is to pull us away from the physical world and encourage us to constantly perform versions of ourselves in the virtual world. And because of how it's designed, it has this habit of reducing other people to crude avatars, which is why it's so easy to be cruel and vicious online and why people who are on social media too much start to become mutually unintelligible to each other. Do you worry about AI supercharging this stuff? Am I right to be thinking of AI as a potential accelerant of these trends?

It's arguable and actually consistent with the way the [AI] community speaks internally to say that the algorithms that have been driving social media up to now are a form of AI, if that's the term you wish to use. And what the algorithms do is they attempt to predict human behavior based on the stimulus given to the human. By putting that in an adaptive loop, they hope to drive attention and an obsessive attachment to a platform. Because these algorithms can't tell whether something's being driven because of things that we might think are positive or things that we might think are negative. I call this the life of the parity, this notion that you can't tell if a bit is one or zero, it doesn't matter because it's an arbitrary designation in a digital system. So if somebody's getting attention by being a dick, that works just as well as if they're offering lifesaving information or helping people improve themselves.

But then the peaks that are good are really good, and I don't want to deny that. I love dance culture on TikTok. Science bloggers on YouTube have achieved a level that's astonishingly good and so on. There's all these really, really positive good spots. But then overall, there's this loss of truth and political paranoia and unnecessary confrontation between arbitrarily created cultural groups and so on and that's really doing damage.

So yeah, could better AI algorithms make that worse? Plausibly. It's possible that it's already bottomed out and if the algorithms themselves get more sophisticated, it won't really push it that much further. But I actually think it can and I'm worried about it because we so much want to pass the Turing test and make people think our programs are people. We're moving to this so-called agentic era where it's not just that you have a chat interface with the thing, but the chat interface gets to know you through years at a time and gets a so-called personality and all this. And then the idea is that people then fall in love with these. And we're already seeing examples of this here and there, and this notion of a whole generation of young people falling in love with fake avatars. I mean, people talk about AI as if it's just like this yeast in the air. It's like, oh, AI will appear and people will fall in love with AI avatars, but it's not. AI is always run by companies, so they're going to be falling in love with something from Google or Meta or whatever.
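The 'adaptive loop' Lanier describes above can be made concrete with a small sketch. The snippet below is purely hypothetical, not any platform's real recommendation code: it repeatedly picks a post to show, observes only clicks, and so cannot distinguish attention won by being useful from attention won by outrage.

import random
# Toy engagement-maximizing loop (hypothetical; not any platform's real ranking code).
# Each post has a hidden chance of grabbing attention; the loop sees only clicks,
# so lifesaving information and outrage bait are indistinguishable to it.
posts = {
    "helpful science explainer": 0.04,
    "outrage bait": 0.09,
    "dance clip": 0.06,
}
shows = {p: 0 for p in posts}
clicks = {p: 0 for p in posts}
epsilon = 0.1  # small chance of exploring something other than the current winner
for _ in range(10_000):
    if random.random() < epsilon or not any(shows.values()):
        choice = random.choice(list(posts))  # explore
    else:
        # exploit: show whatever has the best observed click rate so far
        choice = max(posts, key=lambda p: clicks[p] / shows[p] if shows[p] else 0.0)
    shows[choice] += 1
    if random.random() < posts[choice]:  # the user "clicks"
        clicks[choice] += 1
for p in posts:
    rate = clicks[p] / shows[p] if shows[p] else 0.0
    print(f"{p}: shown {shows[p]} times, observed click rate {rate:.3f}")

Run long enough, the loop tends to pour most impressions into whichever post best captures attention, regardless of whether that attention is doing anyone any good, which is the indifference Lanier is pointing at.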
The advertising model was sort of the original sin of the internet in lots of ways. I'm wondering how we avoid repeating those mistakes with AI. How do we get it right this time? What's a better model?

This question is the central question of our time in my view. The central question of our time isn't, how are we able to scale AI more? That's an important question and I get that. And most people are focused on that. And dealing with the climate is an important question. But in terms of our own survival, coming up with a business model for civilization that isn't self-destructive is, in a way, our most primary problem and challenge right now.

Because the way we're doing it, we went through this thing in the earlier phase of the internet of 'information should be free,' and then the only business model that's left is paying for influence. And so then all of the platforms look free or very cheap to the user, but then actually the real customer is trying to influence the user. And you end up with what's essentially a stealthy form of manipulation being the central project of civilization. We can only get away with that for so long. At some point, that bites us and we become too crazy to survive. So we must change the business model of civilization. How to get from here to there is a bit of a mystery, but I continue to work on it. I think we should incentivize people to put great data into the AI programs of the future. And I'd like people to be paid for data used by AI models and also to be celebrated and made visible and known. I think it's just a big collaboration and our collaborators should be valued.

How easy would it be to do that? Do you think we can or will?

There's still some unsolved technical questions about how to do it. I'm very actively working on those and I believe it's doable. There's a whole research community devoted to exactly that distributed around the world. And I think it'll make better models. Better data makes better models, and there's a lot of people who dispute that and they say, 'No, it's just better algorithms. We already have enough data for the rest of all time.' But I disagree with that. I don't think we're the smartest people who will ever live, and there might be new creative things that happen in the future that we don't foresee and the models we've currently built might not extend into those things. Having some open system where people can contribute to new models and new ways is a more expansive and just kind of a spiritually optimistic way of thinking about the deep future.

Is there a fear of yours, something you think we could get terribly wrong, that's not currently something we hear much about?

God, I don't even know where to start. One of the things I worry about is we're gradually moving education into an AI model, and the motivations for that are often very good because in a lot of places on earth, it's just been impossible to come up with an economics of supporting and training enough human teachers. And a lot of cultural issues in changing societies make it very, very hard to make schools that work and so on. There's a lot of issues, and in theory, a self-adapting AI tutor could solve a lot of problems at a low cost. But then the issue with that is, once again, creativity. How do you keep people who learn in a system like that, how do you train them so that they're able to step outside of what the system was trained on?
There's this funny way that you're always retreading and recombining the training data in any AI system, and you can address that to a degree with constant fresh input and this and that. But I am a little worried about people being trained in a closed system that makes them a little less than they might otherwise have been and have a little less faith in themselves.
