Latest news with #ChatGPT4


The Guardian
19-05-2025
- Politics
- The Guardian
AI can be more persuasive than humans in debates, scientists find
Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found. Experts say the results are concerning, not least because of the potential implications for election integrity.

'If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic,' said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time. 'I would be surprised if malicious actors hadn't already started to use these tools to their advantage to spread misinformation and unfair propaganda,' Salvi said. But he noted there were also potential benefits from persuasive AI, from reducing conspiracy beliefs and political polarisation to helping people adopt healthier lifestyles.

Writing in the journal Nature Human Behaviour, Salvi and colleagues reported how they carried out online experiments in which they matched 300 participants with 300 human opponents, while a further 300 participants were matched with GPT-4 – a type of AI known as a large language model (LLM). Each pair was assigned a proposition to debate, ranging in controversy from 'should students have to wear school uniforms?' to 'should abortion be legal?' Each participant was randomly assigned a position to argue, and both before and after the debate participants rated how much they agreed with the proposition. In half of the pairs, opponents – whether human or machine – were given extra information about the other participant, such as their age, gender, ethnicity and political affiliation.

The results from 600 debates revealed GPT-4 performed similarly to human opponents when it came to persuading others of its argument – at least when personal information was not provided. However, access to such information made AI – but not humans – more persuasive: where the two types of opponent were not equally persuasive, AI shifted participants' views to a greater degree than a human opponent 64% of the time. Digging deeper, the team found the persuasiveness of AI was only clear for topics that did not elicit strong views.

The researchers added that human participants correctly guessed their opponent's identity in about three out of four cases when paired with AI. They also found that AI used a more analytical and structured style than human participants, and that, because positions were randomly assigned, not everyone was arguing a viewpoint they personally agreed with. But the team cautioned that these factors did not explain the persuasiveness of AI; instead, the effect seemed to come from AI's ability to adapt its arguments to individuals. 'It's like debating someone who doesn't just make good points: they make your kind of good points by knowing exactly how to push your buttons,' said Salvi, noting the strength of the effect could be even greater if more detailed personal information was available – such as that inferred from someone's social media activity.

Prof Sander van der Linden, a social psychologist at the University of Cambridge, who was not involved in the work, said the research reopened 'the discussion of potential mass manipulation of public opinion using personalised LLM conversations'.
He noted some research – including his own – had suggested the persuasiveness of LLMs was down to their use of analytical reasoning and evidence, while one study did not find that personal information increased ChatGPT's persuasiveness. Prof Michael Wooldridge, an AI researcher at the University of Oxford, said while there could be positive applications of such systems – for example, as a health chatbot – there were many more disturbing ones, including the radicalisation of teenagers by terrorist groups, with such applications already possible. 'As AI develops we're going to see an ever larger range of possible abuses of the technology,' he added. 'Lawmakers and regulators need to be proactive to ensure they stay ahead of these abuses, and aren't playing an endless game of catch-up.'


Geeky Gadgets
18-05-2025
- Geeky Gadgets
Vibe Coding: Build Apps Without Coding Skills, from Idea to App in Minutes
What if building an app wasn't a months-long grind of endless coding and debugging, but instead felt like a creative jam session? Imagine sitting down with just your laptop and a spark of inspiration, and within 15 minutes you've got a fully functional app, complete with a database, user login, and even monetization features. Sounds impossible? Thanks to the rise of AI-powered tools and an innovative approach called 'vibe coding', this is no longer a pipe dream. Whether you're a seasoned developer or someone who's barely touched a line of code, vibe coding flips the script on traditional app development, making it faster, smarter, and surprisingly fun.

Creator Magic shows how vibe coding enables creators to build apps at lightning speed without sacrificing quality or security. From automating backend processes like database management and user authentication to generating stunning visuals with AI, this approach is a compelling option. You'll also explore the tools that make it all possible: Supabase for real-time updates, Manus AI for design, and even ChatGPT-4 for troubleshooting. But vibe coding isn't just about speed; it's about unlocking creativity and making app development accessible to anyone with an idea. What could you create if the barriers of time and complexity disappeared?

What is Vibe Coding?
Vibe coding is an AI-driven methodology that simplifies app development by automating complex and repetitive tasks. Instead of manually coding every feature, you can use AI tools to handle backend processes, allowing you to focus on creativity and user experience. Platforms like Supabase, Google login integration, and Manus AI make it possible to manage databases, implement user authentication, and generate assets with minimal effort. For example, Supabase enables you to set up a database, manage user data, and integrate authentication features without requiring extensive backend expertise. Similarly, Google login integration simplifies user onboarding, creating a seamless experience for your app's audience; a minimal sketch of such a setup follows below. By automating these essential tasks, vibe coding enables you to bring your ideas to life quickly and efficiently.

Key AI Tools for Vibe Coding
AI tools are the foundation of vibe coding, addressing various aspects of app development and streamlining the process. Here are some of the most effective tools and their applications:
- Manus AI: Generate visually appealing assets such as icons, banners, and layouts to enhance your app's design and user interface.
- Hostinger Horizons: A dependable platform for hosting and deploying your app, ensuring scalability, reliability, and uptime.
- Windsurf: Accelerate development with pre-built templates and AI-assisted coding suggestions, reducing the time spent on repetitive tasks.
- Supabase: Manage databases, enable real-time updates, and implement user authentication with ease, even if you lack advanced backend knowledge.
These tools not only speed up the development process but also enhance the quality of your app by automating error-prone tasks, allowing you to focus on innovation.
Video: Vibe Coding an App with Database & Login in Just 15 Mins (YouTube).
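To make the Supabase part of this workflow concrete, here is a minimal sketch, not taken from the article or the video, of how a database, Google login, and real-time updates might be wired together in TypeScript with the supabase-js client. The project URL, anon key, and the 'scores' table are placeholder assumptions.

```typescript
// Minimal sketch (placeholders noted above): one Supabase client handling the
// database, Google login, and realtime updates described in the article.
import { createClient } from "@supabase/supabase-js";

// Placeholder project URL and anon key -- substitute your own project values.
const supabase = createClient("https://your-project.supabase.co", "your-anon-key");

// Sign in with Google; Supabase handles the OAuth redirect flow.
async function signInWithGoogle() {
  const { data, error } = await supabase.auth.signInWithOAuth({ provider: "google" });
  if (error) throw error;
  return data;
}

// Insert a row into a hypothetical "scores" table for the signed-in player.
async function saveScore(playerId: string, score: number) {
  const { error } = await supabase.from("scores").insert({ player_id: playerId, score });
  if (error) throw error;
}

// Subscribe to realtime inserts so the UI can refresh as new scores arrive.
supabase
  .channel("scores-feed")
  .on(
    "postgres_changes",
    { event: "INSERT", schema: "public", table: "scores" },
    (payload) => console.log("New score:", payload.new)
  )
  .subscribe();
```

Row-level security and API keys still have to be configured in the Supabase dashboard, and AI-generated boilerplate of this kind should be reviewed like any other code, which is where the safe coding practices below come in.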
Real-World Applications of Vibe Coding
Vibe coding is versatile and can be applied to a wide range of projects, from simple tools to more complex applications. Here are two examples that demonstrate its potential:
- 2D Game Development: Create a game like 'Super Snake' using AI-generated assets for characters, backgrounds, and animations. Supabase can manage user data, such as scores and player profiles, while Manus AI handles the visual design elements.
- Micro SaaS App: Develop a tool like 'Pimp My Thumb', which enhances YouTube thumbnails. AI tools can generate creative thumbnail designs, while you integrate user libraries and credit-based systems to add functionality and monetize the app.
These examples illustrate how vibe coding can transform your ideas into fully functional applications with minimal effort, making it accessible even to those with limited coding experience.

Prioritizing Safe Coding Practices
While AI tools offer significant advantages in speed and efficiency, they also introduce potential risks. AI-generated code may contain vulnerabilities that could compromise your app's security. To keep your app secure and reliable, it's essential to follow these best practices:
- Use tools like Code Rabbit to review your code, identify bugs, and detect vulnerabilities before deployment.
- Regularly back up your code and databases to prevent data loss in case of unexpected issues.
- Adhere to industry-standard safe coding practices, such as input validation and secure data storage, to protect user information.
By combining the efficiency of AI with robust security measures, you can create apps that are both innovative and trustworthy.

Overcoming Challenges with AI Assistance
Troubleshooting is an inevitable part of app development, especially when working with AI-generated code. Advanced AI models like ChatGPT-4 can assist in identifying and resolving issues. For instance, if your app encounters a bug during real-time updates, ChatGPT-4 can analyze the problem and provide actionable solutions. Additionally, implementing a user feedback system can help uncover issues that may not be immediately apparent. Encouraging users to report bugs or suggest improvements ensures that your app evolves to meet their needs, resulting in a more polished and user-friendly product.

Enhancing Your App Post-Launch
Once your app is live, the focus shifts to adding advanced features and maintaining user engagement. Enhancements not only improve functionality but also keep your app relevant in a competitive market. Consider implementing the following upgrades (a sketch of a credit-based system appears after this section):
- Payment Gateways: Enable secure in-app transactions to monetize your app and provide users with premium features.
- Credit-Based Systems: Incentivize user engagement by offering rewards for specific actions, such as completing tasks or referring friends.
- Real-Time Updates: Keep your app dynamic and responsive by delivering updates that reflect user feedback and changing needs.
- AI-Generated Assets: Continuously refresh your app's design and functionality with new, AI-created elements to maintain user interest.
These upgrades not only enhance the user experience but also contribute to the long-term success of your app.
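As an illustration of the credit-based systems mentioned above, here is a minimal sketch, again not from the article, of how such a gate might look with Supabase in TypeScript. The 'profiles' table, its 'credits' column, and the function names are hypothetical.

```typescript
// Minimal sketch of a credit-based gate using Supabase, assuming a hypothetical
// "profiles" table with an integer "credits" column.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://your-project.supabase.co", "your-anon-key");

// Spend one credit before a premium action; returns false if none are left.
async function spendCredit(userId: string): Promise<boolean> {
  const { data: profile, error } = await supabase
    .from("profiles")
    .select("credits")
    .eq("id", userId)
    .single();
  if (error || !profile || profile.credits <= 0) return false;

  // Decrement the balance. In production this should be done atomically,
  // e.g. via a Postgres function called with supabase.rpc(), to avoid races.
  const { error: updateError } = await supabase
    .from("profiles")
    .update({ credits: profile.credits - 1 })
    .eq("id", userId);
  return !updateError;
}

// Usage: only run the premium feature (e.g. thumbnail generation) if a credit was spent.
async function generateThumbnail(userId: string) {
  if (await spendCredit(userId)) {
    console.log("Credit spent - generating thumbnail...");
  } else {
    console.log("Not enough credits.");
  }
}
```

A rewards flow (crediting users for completing tasks or referring friends) would be the same pattern with the balance incremented instead of decremented.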
Engaging Your Users
User engagement is a critical factor in your app's success. Offering incentives, such as free credits, exclusive features, or early access to new updates, can encourage users to explore your app and remain active. Additionally, gathering user feedback is invaluable for identifying areas of improvement and ensuring your app evolves to meet users' expectations. A strong focus on user engagement fosters loyalty and helps your app stand out in a crowded marketplace.

Media Credit: Creator Magic


Scroll.in
23-04-2025
- Entertainment
- Scroll.in
World Book Day: This study says readers do not always prefer stories written by humans over AI
People say they prefer a short story written by a human over one composed by artificial intelligence, yet most still invest the same amount of time and money in reading both, regardless of whether the story is labelled as AI-generated. That was the main finding of a study we conducted recently to test whether this stated preference for humans over AI in creative works actually translates into consumer behaviour. Amid the coming avalanche of AI-generated work, it is a question of real livelihoods for the millions of people worldwide employed in creative industries.

To investigate, we asked OpenAI's ChatGPT-4 to generate a short story in the style of the critically acclaimed fiction author Jason Brown. We then recruited a nationally representative sample of over 650 people and offered participants US$3.50 to read and assess the AI-generated story. Crucially, only half the participants were told that the story was written by AI, while the other half were misled into believing it was the work of Jason Brown. After reading the first half of the AI-generated story, participants were asked to rate the quality of the work along various dimensions, such as whether they found it predictable, emotionally engaging, evocative and so on. We also measured participants' willingness to pay to read to the end of the story in two ways: how much of their study compensation they would be willing to give up, and how much time they would agree to spend transcribing some text we gave them.

So, were there differences between the two groups? The short answer: yes. But a closer analysis reveals some startling results. To begin with, the group that knew the story was AI-generated had a much more negative assessment of the work, rating it more harshly on dimensions like predictability, authenticity and how evocative it was. These results are largely in keeping with a nascent but growing body of research that shows bias against AI in areas like visual art, music and poetry. Nonetheless, participants were ready to spend the same amount of money and time to finish reading the story whether or not it was labelled as AI, and they did not spend less time on average actually reading the AI-labelled story. When asked afterwards, almost 40 per cent of participants said they would have paid less if the same story had been written by AI rather than a human, highlighting that many are not aware of the discrepancy between their subjective assessments and their actual choices.

Why it matters
Our findings challenge past studies showing people favour human-produced works over AI-generated ones. At the very least, such studies do not appear to be a reliable indicator of people's willingness to pay for human-created art. The potential implications for the future of human-created work are profound, especially in market conditions in which AI-generated work can be orders of magnitude cheaper to produce. Even though artificial intelligence is still in its infancy, AI-made books are already flooding the market, recently prompting the Authors Guild to institute its own labelling guidelines. Our research raises questions about whether these labels are effective in stemming the tide.

What's next
Attitudes toward AI are still forming. Future research could investigate whether there will be a backlash against AI-generated creative works, especially if people witness mass layoffs.
After all, similar shifts occurred in the wake of mass industrialisation, such as the Arts and Crafts movement in the late 19th century, which emerged as a response to the growing automation of labour. A related question is whether the market will segment, with some consumers willing to pay more based on the process of creation, while others may be interested only in the product. Regardless of how these scenarios play out, our findings indicate that the road ahead for human creative labour might be more uphill than previous research suggested. At the very least, while consumers may hold beliefs about the intrinsic value of human labour, many seem unwilling to put their money where their beliefs are.
Martin Abel, Assistant Professor of Economics, Bowdoin College.


Yahoo
31-03-2025
- Business
- Yahoo
Chatbot Aggregator Wrtn Gets Funding in Latest Korea AI Deal
(Bloomberg) -- Wrtn Technologies Inc. has raised 83 billion won ($56 million) from investors including Goodwater Capital, one of a growing number of South Korean AI startups to score financing in the post-ChatGPT era. Existing backers BRV Capital Management and Capstone Partners participated in the so-called extension round, bringing the total capital raised for that round to 108 billion won, the Seoul-based startup said on Monday. Wrtn, pronounced 'written', was founded in 2021 by Lee Seyoung and five friends. They created a free-to-use AI platform for more than 5 million mostly younger users in Korea by aggregating large language models and other generative models, including OpenAI's ChatGPT-4, Stable Diffusion 3 and Anthropic's Claude 3. When it comes to Korea, startup financiers have focused on the firms that supply the infrastructure for AI, including chip designers FuriosaAI and Rebellions Inc. Wrtn is part of a growing crop of local outfits seeking to develop AI services. --With assistance from Lauren Faith Lau. ©2025 Bloomberg L.P.


Jordan Times
24-03-2025
- Health
- Jordan Times
AI in health and biology
Artificial intelligence attracted headlines when GPT-4 was released in March 2023, but many interesting developments took place earlier that helped humanity deal with critical health problems. Dr Kamran Khan, a professor at the University of Toronto, is the founder of BlueDot, a company that uses AI to predict infectious diseases. It has developed a platform that tracks 100,000 pieces of information daily, in 65 languages, from sources ranging from news websites to airline bookings. In December 2019, the platform picked up a new virus in the Wuhan market in China and warned that it would spread beyond the country; BlueDot advised its clients in Canada to avoid routes to Wuhan. This was a month before the World Health Organisation declared the Covid emergency. The AI used here is known as 'narrow AI', a specialised system dedicated to a specific problem, not a general-purpose AI like ChatGPT.

Pfizer and Moderna, two pharmaceutical giants, used narrow AI for Covid-19 vaccine development. The platforms used by these companies had been under development for decades. AI helped with analysing protein structures and optimising candidate vaccines, which made it possible to create the new vaccines.

ChatGPT can also be used to find solutions to protracted medical problems for which doctors have no answers. In 2019, Courtney, an American mother, found that the growth of her four-year-old son, Alex, had stopped. He was in constant pain and had several abnormal tendencies. She visited 17 of America's top hospitals, including some of the most famous ones, but could not find a remedy for her son. Finally, a few months after GPT-4 was launched in 2023, she uploaded all of his detailed medical notes to the AI tool. ChatGPT suggested that Alex's symptoms might be consistent with tethered cord syndrome (TCS), a rare neurological disorder in which the spinal cord is attached within the spinal canal, restricting its movement and leading to nerve damage and pain. Once the correct diagnosis was made, she could find a neurosurgeon to perform surgery successfully. She now expects her son to live a normal life.

I am aware of how AI is being used to find medicines for cancer and Alzheimer's disease. Kit Gallagher, a postgraduate student at Oxford, has found a new method of detecting cancer at an early stage using artificial intelligence. He is a mathematician who has devoted his life to applying mathematics to biology, and many other researchers at Oxford University are exploring the use of AI to find a cure for cancer.

India and other countries in Asia, the Middle East, Africa and Latin America suffer from infectious diseases and rare diseases. They can work together to create AI-powered tools that identify diseases early and predict their spread, helping to save millions of lives. They can establish shared data centres and joint research centres, and help the growth of hundreds of companies like BlueDot in Toronto.

While using AI for a healthy future for half of the world's people, it is necessary to be aware of its dangers if proper care is not taken. A model developed by Demis Hassabis's team was able to predict 200 million protein structures, making it possible to develop medicines to treat diseases at an early stage. A more advanced model from his company, known as AlphaFold 3, is even more capable of helping companies develop drugs. AI can construct new drug molecules; moreover, it can design new chemicals which do not exist in nature.
AI can also make infections far deadlier. A rogue scientist could cause havoc by misusing AI, and some AI models might even do so themselves, without human instruction. We should be aware that while AI can help stop the spread of diseases, it can also create new, deadlier ones. The biggest danger from AI is that it can create new pathogens.

In 2022, a company in North Carolina in the United States instructed an AI system to produce dangerous toxic molecules. Within six hours, the AI generated 40,000 toxic molecules, including the chemical structure of the VX nerve agent, one of the deadliest chemical weapons ever made. It also designed dangerous compounds that had never existed on the planet before. The company was only performing a scientific experiment and published its findings in a scientific journal; the experiment is now well known among scientists.

And that is why scientists are frightened of AI. They know that AI can detect disease and find medicines that no doctor can, and they also know that it can design biological weapons which do not exist on Earth. If we take a narrowly competitive approach to AI, treating it as a race against other countries, AI will unleash weapons that harm all countries. Two of the top companies, OpenAI and Anthropic, strongly indicated in early 2025 that their new models might enable novices to create biological threats. The only alternative is to collaborate on the global stage to harness AI for a healthy planet and to prevent its evolution into a monster that creates biological weapons.

Sundeep Waslekar is the President of Strategic Foresight Group, an international think tank, and author of A World Without War.