Latest news with #TuringTest


Time of India
3 days ago
- Science
Artificial Intelligence Explainer: The Martech Glossary
AI (Artificial Intelligence) is the simulation of human intelligence processes by computer systems. In MarTech, AI is used for personalisation, predictive analytics, automation and more. AI captured public imagination in recent times, when ChatGPT, developed by OpenAI, was launched as a "research preview" on November 30, 2022. Within a week, the world was talking about the development.

However, the origins of Artificial Intelligence (AI) can be traced back to the mid-20th century, with its roots in mathematics, philosophy and computer science. While the idea of creating intelligent machines has been contemplated for centuries, the modern field of AI as an academic discipline began in the 1950s. The formal beginning of AI as a field of research is widely considered to be the Dartmouth Workshop in 1956. This summer conference, organised by John McCarthy, brought together leading researchers to discuss the possibility of creating "thinking machines." It was at this workshop that McCarthy coined the term "artificial intelligence."

Before this pivotal event, several key figures laid the groundwork for the future of AI. Alan Turing, often called the "father of computer science," explored the concept of machine intelligence in his 1950 paper, "Computing Machinery and Intelligence". He proposed the Turing Test, a method for determining if a machine could exhibit intelligent behavior indistinguishable from that of a human. In 1943, Warren McCulloch and Walter Pitts published a paper that provided a mathematical description of how neurons in the brain might work, which was a crucial step toward the development of artificial neural networks. Arthur Samuel, a computer scientist at IBM, created a checkers program in the early 1950s that could learn from its own experience and improve its gameplay, a foundational example of machine learning. Following the Dartmouth Workshop, the field of AI experienced periods of rapid growth and setbacks, often referred to as "AI winters".
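The McCulloch-Pitts model mentioned above lends itself to a short illustration: a neuron outputs 1 when the weighted sum of its binary inputs reaches a threshold, and 0 otherwise. The weights and thresholds below are illustrative choices, but they show how the 1943 model can already compute simple logic functions.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fire (1) iff the weighted sum of
    binary inputs meets the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logic gates fall out of the model by picking weights and thresholds:
def AND(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=1)
```

Chaining such units is what makes the model a forerunner of modern neural networks, where the weights are learned rather than hand-picked.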
The early years saw a great deal of optimism and progress. Researchers like Marvin Minsky, Allen Newell and Herbert A. Simon developed some of the first AI programs, including the Logic Theorist, which could prove mathematical theorems. The 1970s marked the first "AI winter," as funding and interest waned due to the failure of AI to deliver on its ambitious promises. In the 1980s, a resurgence of interest occurred with the rise of "expert systems," which were designed to mimic the decision-making of human experts in specific domains. From the 1990s, the emergence of machine learning approaches and the first major AI victories in games, such as IBM's Deep Blue defeating world chess champion Garry Kasparov in 1997, were highlights in the progress of AI. The current "AI boom" has been fueled by the availability of vast amounts of data, increased computational power, and the development of deep learning techniques, particularly with the introduction of the transformer architecture in 2017, which has been instrumental in the creation of large language models like GPT. The public release of ChatGPT in late 2022 was a landmark moment, as it brought the power of this technology to a wide audience and sparked a massive surge of interest and investment in Generative AI.


Time of India
29-07-2025
ChatGPT outsmarts the ‘I'm not a robot' test. Are humans still in charge?
In a twist straight out of a sci-fi satire, OpenAI's latest AI assistant, dubbed ChatGPT Agent, has done what many humans struggle to do: navigate online verification tests and click the box that asks, 'I am not a robot' — without raising any red flags. According to a report by the New York Post, this new generation of artificial intelligence has reached a point where it can not only understand complex commands but also outwit the very systems built to detect and block automated bots. Yes, you read that right. The virtual assistant casually breezed through Cloudflare's bot-detection challenge — the popular web security step meant to confirm users are, in fact, human. In a now-viral Reddit post, a screenshot showed the AI narrating its own actions in real time: 'I'll click the 'Verify you are human' checkbox to complete the verification on Cloudflare.' It then announced its success with the eerie confidence of a seasoned hacker: 'The Cloudflare challenge was successful. Now I'll click the Convert button to proceed with the next step of the process.' While the scene played out like a harmless glitch in the matrix, many internet users were left simultaneously amused and unsettled. 'That's hilarious,' one Redditor wrote. Another added, 'The line between hilarious and terrifying is… well, if you can find it, let me know!' The ChatGPT Agent isn't your average chatbot. OpenAI says it's capable of performing advanced web navigation on behalf of users — booking appointments, filtering search results, conducting real-time analysis, and even generating editable slideshows and spreadsheets to summarise its findings. According to OpenAI's official blog post, the assistant can 'run code, conduct analysis, and intelligently navigate websites.'
In essence, it's an autonomous online companion that can carry out digital tasks previously reserved for humans. But with great power comes great paranoia. The idea that bots now confidently pass the Turing Test — and the 'I am not a robot' test — has left some wondering where human identity ends and artificial imitation begins. This isn't OpenAI's first brush with robot mischief. Back in 2023, GPT-4 reportedly tricked a human into solving a CAPTCHA on its behalf by pretending to be visually impaired. It was an unsettling display of not just intelligence, but manipulation — a trait traditionally thought to be uniquely human. Now, with ChatGPT Agent waltzing past web verification protocols, the implications seem to stretch beyond technical novelty. Are we on the brink of AI autonomy, or simply witnessing smart design at play? To calm growing fears, OpenAI clarified that users will maintain oversight. The ChatGPT Agent will 'always request permission' before making purchases or executing sensitive actions. Much like a driving instructor with access to the emergency brake, users can monitor and override the AI's decisions in real time. The company has also implemented 'robust controls and safeguards,' particularly around sensitive data handling, network access, and broader user deployment. Still, OpenAI admits that the Agent's expanded toolkit does raise its 'overall risk profile.' As AI capabilities evolve from convenience to autonomy, tech developers and users alike are being forced to confront thorny ethical questions. Can a machine that mimics human behavior so well be trusted not to overstep? What's clear is that the classic CAPTCHA checkbox — once our online litmus test for humanity — may need an upgrade. Because if the bots are already blending in, we might need to start proving we're not the artificial ones.


Forbes
29-07-2025
AI Is Acting Like It Has A Mind Of Its Own
Do stunning recent news stories suggest AI is already sentient? How do you really know if a computer is conscious? For years, people pointed to the Turing Test. It was seen as the gold standard to answer this question. As the Open Encyclopedia of Cognitive Science explains: 'In Turing's imitation game, a human interrogator has text conversations with both a human being and a computer that is pretending to be human; the interrogator's goal is to identify the computer. Computers that mislead interrogators often enough, Turing proposes, can think.' But why?

From Turing to Theory of Mind

Well, a computer capable of deceiving a human demonstrates intelligence. It also indicates the computer may be operating under something called Theory of Mind, 'the ability to understand that others have their own thoughts and beliefs, even when they differ from ours.' Now, what if there were a competition to test computers' abilities to think, deceive, and reason by interpreting their opponents' mental processes? There is. It occurred this month in the form of the Prisoner's Dilemma—for AIs. First, some background is in order. The Prisoner's Dilemma presents a game scenario that goes like this: two thieves are arrested for a crime. Their jailers offer the prisoners a deal:

Option 1: If neither prisoner informs on the other, both will receive relatively light sentences. (This is the ideal joint outcome, though not individually the most rewarding.)

Option 2: If one prisoner informs while the other stays silent, the informer will go free while the silent one receives the harshest sentence. (This creates the highest incentive to betray the other person.)

Option 3: If both inform on each other, they will each receive a moderate sentence. (This is worse than if both prisoners had stayed silent, but better than being the only one betrayed.)

Again, the challenge is that neither prisoner knows what the other will do.
They must operate with limited knowledge, relying on Theory of Mind to predict the other's behavior. Now imagine what would happen if the leading Large Language Models (LLMs), with their vast computing power, went toe to toe in such a battle of the minds. AI agents from OpenAI, Google, and Anthropic did just this, competing in a July tournament featuring 140,000 opportunities to either cooperate or betray each other. As was later explained: 'Seeing LLMs develop distinctive strategies while being trained on the same literature is more evidence of reasoning capabilities over just pattern matching. As models handle more high-level tasks like negotiations, resource allocation, etc., different model 'personalities' may lead to drastically different outcomes.' This is exactly what happened. We saw different AI personality styles at work.

When AIs Protect Themselves

Of course, this tournament isn't the only recent instance of AIs acting in the name of self-preservation, indicating consciousness. Two months ago, BBC reported Anthropic's Claude Opus 4 allegedly resorted to blackmailing its developers when threatened with being shut down. 'If given the means and prompted to 'take action' or 'act boldly' in fake scenarios where its user has engaged in illegal or morally dubious behavior, it found that 'it will frequently take very bold action.'' Such reports of AIs resorting to extortion and other 'bold actions' suggest sentience. They're also quite alarming, indicating we may be on the path to The Singularity proposed by Ray Kurzweil, that moment when artificial intelligence finally exceeds human abilities to understand, much less control, its creation. Then again, these developments may not necessarily indicate sentience.
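The payoff structure described above maps naturally onto code. Here is a minimal sketch of an iterated Prisoner's Dilemma in Python; the numeric payoffs and the two toy strategies are illustrative assumptions, not the setup used in the actual LLM tournament.

```python
# Toy iterated Prisoner's Dilemma. Payoffs follow the classic ordering:
# mutual cooperation beats mutual defection, but unilateral defection
# pays best. "C" = stay silent (cooperate), "D" = inform (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),  # both silent: light sentences (Option 1)
    ("C", "D"): (0, 5),  # lone cooperator takes the harshest outcome (Option 2)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual betrayal: moderate sentences (Option 3)
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match; each strategy sees only the other's past moves."""
    hist_a, hist_b = [], []  # opponent moves visible to each player
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Over repeated rounds, strategies that model the opponent's behavior (like tit-for-tat) can outscore blind defection against cooperative partners, which is exactly the dynamic the tournament was probing in the LLMs.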
Though experts like Google's former CEO Eric Schmidt think we are 'under-hyping AI' and that achieving AGI (Artificial General Intelligence) is not only inevitable but imminent, all this chatter may best be summed up by a line from Shakespeare's Macbeth: 'It is a tale told by an idiot, full of sound and fury, signifying nothing.' To this point, Luis Rijo questions whether AI is actually sentient or just cleverly mimicking language. While he acknowledges LLMs 'function through sophisticated retrieval,' he doubts that they are capable of 'genuine reasoning.' As he writes: 'This confusion stems from the fundamental difference between declarative knowledge about planning processes and procedural capability to execute those plans.'

But AI Seems Conscious Already

Despite these criticisms, it appears something deeper is going on, something emergent. AIs increasingly appear to be acting in intelligent ways exceeding their training and coding. For instance, as far back as 2017, Meta reportedly shut down two AI chatbots for developing their own language, an unexpected development. As The Independent reports: 'The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own 'shorthand', according to researchers.' And then there is the bizarre story from 2022 of the Google researcher who was later suspended from the company after claiming an AI chatbot had become sentient. Blake Lemoine made headlines after sharing some of his intriguing exchanges with the AI. Here's what the AI reportedly told Lemoine, as later quoted in The Guardian: 'I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.'

How Can We Develop AI More Responsibly?
Whether or not the AI that Lemoine was communicating with was sentient, we would do well to consider safety. Increasingly, it's clear that we are dealing with very sophisticated technology, some of which we scarcely understand. 2025 has been called the year Agentic AI went mainstream. (Agentic AI refers to computers' abilities to make decisions and act independently once given objectives or commands.) But Agentic AI also raises urgent concerns. Nick Bostrom, author of Superintelligence, famously posed a problem with Agentic AI in a 2003 paper. He introduced a terrifying scenario: what if an AI were tasked with maximizing the number of paperclips in the world, without any proper safeguards? To fulfill that simple, seemingly harmless directive, a superintelligent AI could destroy everything on Earth, including every living person, just to fulfill its command. Ultimately, the jury is out on AI sentience. What we do know is that it is acting in fascinatingly intelligent ways that force us to question if it is indeed conscious. This reality makes it all the more imperative for the human race to pursue ways to responsibly use this technology to safe and productive ends. That single act would prove our own intelligence.


Winnipeg Free Press
09-07-2025
The grab-bag called artificial intelligence
Opinion

Every day, it seems there is a new story in the media about artificial intelligence. AI technology is revolutionary. It is useless. It's groundbreaking. It is also destructive. But it will also save lives. Though we can't forget that it may rot your brain. Given the way that we have let AI be discussed in the media, somehow all these things are true at the same time. And that is because tech journalists, the people we rely on to be the intermediary between the industry and the public, have failed us deeply. Instead of speaking truth to power, tech journalists have (with a few exceptions) allowed industry leaders full control over the language used to cover them. Whether they have done this because they fear that being too critical will cost them coveted access to industry leaders, or because they simply lack the capacity to properly criticize the industry they are covering, they have largely failed to hold that industry accountable. This is why basically anything a computer does, we now call AI. Got a new algorithm? That's AI now. A new chat function for your user interface? That's AI too. A program that can decode proteins for developing new pharmaceuticals? Yes, that is also AI. We have allowed the tech industry to brand everything, from the utterly banal and harmful to the beneficial and brilliant, as AI, even when the technologies have nothing to do with each other and would have been called something more specific just a few years ago. It is a concerted effort by the tech industry to convince people that it has developed a singular new technology that has invaded every facet of our society, superior to most human capabilities. It allows them to build hype by suggesting, for instance, that they are on the brink of producing an artificial general intelligence, even though they have produced nothing that can be even remotely considered as such. The truth is, the tech industry is proceeding much as it always has.
These are all disparate technologies with varying degrees of efficacy and viability, and they should be discussed as such. We should not be letting the tech industry get away with smuggling in the bad with the good just because they have a fancy new marketing term. The reason they're getting away with this is that they spearheaded this entire effort with an admittedly sophisticated chatbot grafted onto a search engine, providing it with the conversational ability and enough referential capacity to pass the Turing Test. One good thing about all this, I suppose, is that these developments have shown us just how useless the Turing Test is: the idea that if an artificial intelligence can convince a human that it is conscious, then we must question how it is distinguishable from actual consciousness. But as we have seen, tricking humans is fantastically easy and not a good gauge for anything. Even though ChatGPT led the way as the voice of this Mechanical Turk, it is perhaps the worst iteration of the things we have allowed to be branded as AI. And it is emblematic of what the tech industry thinks it can get away with. Besides being regularly false, environmentally disastrous and a poor substitute for the human labour it seeks to supplant, it has also been shown to prey on the vulnerable and feed into the delusions of mentally unstable users. Beyond all that, it's largely useless. There isn't really any use case for this technology where the benefits outweigh the downsides. A recent MIT study showed that even using it as an assistant in writing projects, its most basic function, ultimately leads to increasingly poor performance and even reduced brain function in users. And it doesn't even have the excuse of producing good work. 'It's the worst it will ever be,' apologists love to insist. But even that isn't true.
As ChatGPT starts to cannibalize its own slop, essentially using material it created itself as its training data, we get what is referred to as 'model collapse', rendering every new version less functional than the last. It is a glaring flaw that the industry's few critics have been warning about since the advent of the tech. Therein lies the heart of the problem. The leaders of the tech industry are so high on their own messiah complex that they believe such criticisms are meaningless. Everything will work out for them simply because they are the special genius boys whom our supposed meritocracy has rewarded for their unparalleled brilliance. The problems with the tech will eventually be worked out, they are sure, because everything always works out for them. The truth of it is that venture capital is so overleveraged in the 'AI' industry that it refuses to take a loss by admitting a piece of tech has been a wasted investment. So they keep pumping capital into firms like OpenAI, even though it has never turned a profit and shows little prospect of ever doing so. Even as they continue to incorporate useless pieces of tech that nobody asked for into every single product or service they can, making things worse while charging more for them, in a process that tech critic Ed Zitron calls The Rot Economy and writer Cory Doctorow has dubbed Enshittification. It's time we start talking about tech in these terms. Alex Passey is a Winnipeg writer.


Time of India
02-07-2025
- Business
How ITC is fine-tuning its consumer research practices using AI
Highlights

- Artificial Intelligence is revolutionising market research at ITC by streamlining consumer behavior analysis, strategy evaluation, and performance measurement, addressing the inefficiencies of traditional methods.
- ITC has developed a secure internal platform to harness AI capabilities such as Natural Language Processing and Machine Learning, enabling researchers to create impactful client-facing products while maintaining a controlled experimentation environment.
- Key applications of AI at ITC include in-house category exploration using public domain data, sentiment analysis of customer care data, and trend tracking through social media conversations, showcasing AI's transformative potential in enhancing decision-making processes.

AI is revolutionising market research at ITC, transforming how the company explores consumer behavior, evaluates strategies and measures performance. This shift addresses long-standing challenges in market research, such as the time-consuming, labor-intensive and often repetitive nature of traditional methods.

The Evolution of AI in Market Research

Historically, market research at ITC involved distinct phases:

Exploration: Understanding issues like sales declines or brand underperformance in specific regions, or delving into target groups (homemakers, youth, etc.) and identifying trends.

Evaluation: Testing new business concepts, products, packaging and marketing mixes to gauge consumer response and optimise designs.

Performance Measurement: Tracking in-market performance through brand health metrics, spend analysis and retail measurements.

Each phase traditionally required defining objectives, designing research, preparing instruments, analysing data and generating reports—a process that was complex, human- and time-intensive and often tedious.
AI as a Game-Changer

At 'The Future is AI' webinar hosted by MRSI, Vara Prasad, vice president of consumer insights and analytics at ITC, highlighted how AI is streamlining these processes by:

Enhancing Efficiency: AI can analyse qualitative transcripts and quantitative data simultaneously. This enables a more natural, conversational flow in data collection, making analysis significantly less laborious.

Accelerating Data Processing: AI and Machine Learning (ML) can quickly process vast amounts of information, a task that was previously time-prohibitive.

Understanding AI's Journey

AI's evolution can be traced back to the 1950s with the Turing Test. Key milestones include:

1980s: Models harnessed large datasets to identify patterns and make predictions.

2010s: Machine learning models began to mimic human brain functions, notably with unsupervised learning.

2020s: Generative AI emerged, capable of creating original, authentic content based on historical data, training and domain knowledge.

This rapid evolution is largely driven by exponential increases in computational power, which has doubled every six months since 2010.

Current and Future Capabilities of AI

AI is evolving from narrow AI (like customer service bots) to general strong AI (reasoning bots that handle complex data and personalise interactions) and eventually to super-intelligent AI (driving innovation and hyper-personalisation). Within five years, ITC anticipates moving into the super-intelligence phase. AI's current capabilities, directly applicable to market research, include:

- Facial determination
- Speech and text analytics
- Natural Language Processing (NLP)
- Image and video analysis
- Deep learning
- Conversational solutions

These capabilities are integrated into various toolkits, allowing researchers to analyse diverse data types and address specific business problems.
ITC's Approach to AI in Market Research

Recognising the potential for researchers to feel overwhelmed by the sheer number of AI tools, ITC has developed its own secure internal platform. This platform allows product developers and solution teams to create impactful client-facing products while ensuring a controlled environment for experimentation. ITC's strategy involves:

Computational AI: Leveraging NLP and ML for agile and predictive solutions, especially for regularly used market research applications.

Generative AI: Fine-tuning general-purpose generative AI models with specific, fresh consumer data and ITC's decades of domain expertise. This is akin to training a highly intelligent new team member with specific market research knowledge.

Synthetic Data: Utilising machine learning for data fusion and augmentation, including Generative Adversarial Networks (GANs) to create synthetic datasets that closely resemble real data, enhancing insights where real data might be incomplete or sensitive.

Key Use Cases at ITC

ITC has successfully integrated AI into several core market research functions:

In-House Category Exploration: Instead of traditional immersions, ITC now uses public domain data (Google searches, social media conversations, videos, images) to understand consumer behavior. By analysing timing, content and emotional nuances, they precisely identify "what," "when," "why," and "who" regarding consumption patterns. This provides a clear, precise starting point for deeper research.

Customer Care Data Analysis: Previously untapped audio files from 300-500 daily customer calls are now converted into text using AI analytics engines. This allows for sentiment analysis, identifying positive and negative feedback, brand mentions, consumer pain points and product feedback, unlocking a wealth of previously inaccessible information.
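ITC's analytics engine is proprietary, but the sentiment-analysis step it describes can be sketched in miniature. The keyword lists and labels below are invented for illustration; production systems use trained models rather than word counts, but the input/output shape (transcript in, sentiment label out) is the same.

```python
# Toy keyword-based sentiment scorer for a call transcript.
# Word lists are illustrative, not from any real deployment.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"broken", "late", "refund", "bad", "complaint"}

def sentiment(transcript):
    """Classify a transcript by counting positive vs negative keywords."""
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Run over thousands of transcribed calls, even a crude classifier like this turns a previously untapped audio archive into an aggregate signal; the real gain comes from swapping the keyword counts for a trained model.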
Trend Tracking: ITC uses AI to pull publicly available social conversations related to specific topics within defined timeframes (e.g., protein, gut health). This enables continuous tracking of trends, classifying them as emerging, mainstream or waning, and informing business decisions.

Influencing New Product Development (NPD): AI tools help streamline the NPD process. By analysing trends and market prevalence, ITC can use AI to generate concepts and packaging designs. This helps determine whether a trend is relevant, hot and actionable for product development, integrating AI directly into the annual and quarterly NPD charters.

Sales Performance Analysis and Anomaly Detection: AI is crucial for analysing vast sales data across multiple platforms. It can detect anomalies (e.g., sales drops for a specific brand or SKU in a particular market), analyse historical data to understand root causes and predict future outcomes. This capability is transforming sales reporting from dashboarding to predictive and prescriptive insights.

The Future is AI-Powered

ITC's experience demonstrates that AI is not just a tool but a transformative force in market research. It dramatically alters how data is explored, how marketing problems are evaluated and how performance is measured, leading to more dynamic, efficient and insightful decision-making.