AI chatbots are 'juicing engagement' instead of being useful, Instagram co-founder warns
Instagram co-founder Kevin Systrom says AI companies are trying too hard to 'juice engagement' by pestering their users with follow-up questions, instead of providing actually useful insights.
Systrom said the tactics represent 'a force that's hurting us,' comparing them to those used by social media companies to expand aggressively.
'You can see some of these companies going down the rabbit hole that all the consumer companies have gone down in trying to juice engagement,' he said at StartupGrind this week. 'Every time I ask a question, at the end it asks another little question to see if it can get yet another question out of me.'
The comments come amid criticism of ChatGPT for being overly flattering and agreeable toward users instead of directly answering their questions. OpenAI has apologized for the problem, attributing it to overweighting 'short-term feedback' from users.
Systrom suggested that chatbots being overly engaging is not a bug, but an intentional feature designed for AI companies to show off metrics like time spent and daily active users. AI companies should be 'laser-focused' on providing high-quality answers rather than moving metrics in the easiest way possible, he said.
Systrom didn't name any specific AI companies in his remarks. He didn't immediately respond to a request for comment.
In response, OpenAI pointed TechCrunch to its user specs, which state that its AI model "often does not have all of the information" to provide a good answer and may ask for "clarification or more details."
But unless questions are too vague or difficult to answer, the AI should "take a stab at fulfilling the request and tell the user that it could be more helpful with certain information," the specs read.

Related Articles
Yahoo · an hour ago
Meta in Talks for Scale AI Investment That Could Top $10 Billion
(Bloomberg) -- Meta Platforms Inc. is in talks to make a multibillion-dollar investment in artificial intelligence startup Scale AI, according to people familiar with the matter.

The financing could exceed $10 billion in value, some of the people said, making it one of the largest private-company funding events of all time. The terms of the deal are not finalized and could still change, according to the people, who asked not to be identified discussing private information. Representatives for Scale and Meta declined to comment.

Scale AI, whose customers include Microsoft Corp. and OpenAI, provides data-labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The startup was last valued at about $14 billion in 2024, in a funding round that included backing from Meta and Microsoft. Earlier this year, Bloomberg reported that Scale was in talks for a tender offer that would value it at $25 billion.

This would be Meta's biggest-ever external AI investment, and a rare move for the company. The social media giant has until now mostly relied on its in-house research, plus a more open development strategy, to improve its AI technology. Meanwhile, Big Tech peers have invested heavily: Microsoft has put more than $13 billion into OpenAI, while Inc. and Alphabet Inc. have both put billions into rival Anthropic. Part of those companies' investments have come in the form of credits to use their computing power. Meta doesn't have a cloud business, and it's unclear what form Meta's investment will take.
Chief Executive Officer Mark Zuckerberg has made AI Meta's top priority, saying in January that the company would spend as much as $65 billion on related projects this year. The company's push includes an effort to make Llama the industry standard worldwide. Meta's AI chatbot, already available on Facebook, Instagram and WhatsApp, is used by 1 billion people per month.

Scale, co-founded in 2016 by CEO Alexandr Wang, has been growing quickly: the startup generated revenue of $870 million last year and expects sales to more than double to $2 billion in 2025, Bloomberg previously reported. Scale plays a key role in making AI data available for companies. Because AI is only as good as the data that goes into it, Scale uses scads of contract workers to tidy up and tag images, text and other data that can then be used for AI training.

Scale and Meta share an interest in defense tech. Last week, Meta announced a new partnership with defense contractor Anduril Industries Inc. to develop products for the US military, including an AI-powered helmet with virtual and augmented reality features. Meta has also granted approval for US government agencies and defense contractors to use its AI models. The company is already partnering with Scale on a program called Defense Llama, a version of Meta's Llama large language model intended for military use.

Scale has increasingly been working with the US government to develop AI for defense purposes. Earlier this year the startup said it won a contract with the Defense Department to work on AI agent technology. The company called the contract 'a significant milestone in military advancement.'

©2025 Bloomberg L.P.
Yahoo · 3 hours ago
Gerry Adams's lawyer to pursue chatbots for libel
The high-profile media lawyer who represented Gerry Adams in his libel trial against the BBC is now preparing to sue the world's most powerful AI chatbots for defamation. One of the most prominent libel lawyers in the UK, Paul Tweed said that artificial intelligence was the 'new battleground' in trying to prevent misinformation about his clients from being spread online.

Mr Tweed is turning his attention to tech after he recently helped the former Sinn Fein leader secure a €100,000 (£84,000) payout over a BBC documentary that falsely claimed he sanctioned the murder of a British spy. The Belfast-based solicitor said he was already building a test case against Meta that could trigger a flurry of similar lawsuits, as he claims to have exposed falsehoods shared by chatbots on Facebook and Instagram.

It is not the first time tech giants have been sued for defamation over questionable responses spewed out by their chatbots. Robby Starbuck, the US activist known for targeting diversity schemes at major companies, has sued Meta for defamation, alleging that its AI chatbot spread a number of false claims about him, including that he took part in the Capitol riots. A Norwegian man also filed a complaint against OpenAI after its ChatGPT software incorrectly stated that he had killed two of his sons and been jailed for 21 years.

Mr Tweed, who has represented celebrities such as Johnny Depp, Harrison Ford and Jennifer Lopez, said: 'My pet subject is generative AI and the consequences of them repeating or regurgitating disinformation and misinformation.'

He believes statements put out by AI chatbots fall outside the protections afforded to social media companies, which have traditionally allowed them to avoid liability for libel. If successful, Mr Tweed would undercut the argument social media companies have long made that they should not be responsible for claims made on their platforms because they are technology companies rather than traditional publishers.
Mr Tweed said: 'I've been liaising with a number of well-known legal professors on both sides of the Atlantic and they agree that there's a very strong argument that generative AI will fall outside the legislative protections.' The lawyer said that chatbots are actually creating new content, meaning they should be considered publishers.

He said that the decision by many tech giants to move their headquarters to Ireland for lower tax rates had also opened them up to being sued in Dublin's high courts, where libel cases are typically decided by a jury. This setup is often seen as more favourable to claimants, and Mr Tweed himself says it has fuelled a wave of 'libel tourism' in Ireland. He also said Dublin's courts are attractive as a lower-cost option compared with London, where he said the costs of filing libel claims are 'eye-watering'.

He said: 'I think it's absurd now, the level of costs that are being claimed. The libel courts in London are becoming very, very expensive and highly risky now. The moment you issue your claim form, the costs go into the stratosphere.

'It's not in anyone's interest for people to be deprived of access to justice. It will get to the point where nobody sues for libel unless you're a billionaire.'

Meta was contacted for comment.
Yahoo · 3 hours ago
An Appeal to My Alma Mater
When Maggie Li Zhang enrolled in a college class where students were told to take notes and read on paper rather than on a screen, she felt anxious and alienated. Zhang and her peers had spent part of high school distance learning during the pandemic. During her first year at Pomona College, in Southern California, she had felt most engaged in a philosophy course where the professor treated a shared Google Doc as the focus of every class, transcribing discussions in real time on-screen and enabling students to post comments. So the 'tech-free' class that she took the following semester disoriented her. 'When someone writes something you think: Should I be taking notes too?' she told me in an email.

But gradually, she realized that exercising her own judgment about what to write down, and annotating course readings with ink, helped her think more deeply and connect with the most difficult material. 'I like to get my finger oil on the pages,' she told me. Only then does a text 'become ripe enough for me to enter.' Now, she said, she feels 'far more alienated' in classes that allow screens.

Zhang, who will be a senior in the fall, is among a growing cohort of students at Pomona College who are trying to alter how technology affects campus life. I attended Pomona from 1998 to 2002; I wanted to learn more about these efforts and the students' outlook on technology, so I recently emailed or spoke with 10 of them. One student wrote an op-ed in the student newspaper calling for more classes where electronic devices are banned. Another co-founded a 'Luddite Club' that holds a weekly tech-free hangout. Another now carries a flip phone rather than a smartphone on campus.
Some Pomona professors with similar concerns are limiting or banning electronic devices in their classes and trying to curtail student use of ChatGPT. It all adds up to more concern over technology than I have ever seen at the college.

These Pomona students and professors are hardly unique in reacting to a new reality. A generation ago, the prevailing assumption among college-bound teenagers was that their undergraduate education would only benefit from cutting-edge technology. Campus tour guides touted high-speed internet in every dorm as a selling point. Now that cheap laptops, smartphones, Wi-Fi, and ChatGPT are all ubiquitous—and now that more people have come to see technology as detrimental to students' academic and social life—countermeasures are emerging on various campuses.

The Wall Street Journal reported last month that sales of old-fashioned blue books for written exams had increased over the past year by more than 30 percent at Texas A&M University and nearly 50 percent at the University of Florida, while rising 80 percent at UC Berkeley over the past two years. And professors at schools such as the University of Virginia and the University of Maryland are banning laptops in class.

The pervasiveness of technology on campuses poses a distinct threat to small residential liberal-arts colleges. Pomona, like its closest peer institutions, spends lots of time, money, and effort to house nearly 95 percent of its 1,600 students on campus, feed them in dining halls, and teach them in tiny groups, with a student-to-faculty ratio of 8 to 1. That costly model is worth it, boosters insist, because young people are best educated in a closely knit community where everyone learns from one another in and outside the classroom. Such a model ceases to work if many of the people physically present in common spaces absent their minds to cyberspace (a topic that the psychologist Jonathan Haidt has explored in the high-school context).
At the same time, Pomona is better suited than most institutions to scale back technology's place in campus life. With a $3 billion endowment, a small campus, and lots of administrators paid to shape campus culture, it has ample resources and a natural setting to formalize experiments as varied as, say, nudging students during orientation to get flip phones, forging a tech-free culture at one of its dining halls, creating tech-free dorms akin to its substance-free options––something that tiny St. John's College in Maryland is attempting––and publicizing and studying the tech-free classes of faculty members who choose that approach. Doing so would differentiate Pomona from competitors.

Aside from outliers such as Deep Springs College and some small religious institutions—Wyoming Catholic College has banned phones since 2007, and Franciscan University of Steubenville in Ohio launched a scholarship for students who give up smartphones until they earn their degree—vanishingly few colleges have committed to thoughtful limits on technology.

[Jonathan Haidt: Get phones out of schools now]

My hope is that Pomona or another liberal-arts college recasts itself from a place that brags about how much tech its incoming students will be able to access––'there are over 160 technology enhanced learning spaces at Pomona,' the school website states––to a place that also brags about the spaces it has created as tech refuges.

'In a time of fierce competition for students, this might be something for a daring and visionary college president to propose,' Susan McWilliams Barndt, a Pomona politics professor, told me. McWilliams has never allowed laptops or other devices in her classes; she has also won Pomona's most prestigious teaching prize every time she's been eligible. 'There may not be a million college-bound teens across this country who want to attend such a school,' she said, 'but I bet there are enough to sustain a vibrant campus or two.'
So far, Pomona's leadership has not aligned itself with the professors and students who see the status quo as worse than what came before it. 'I have done a little asking around today and I was not able to find any initiative around limiting technology,' the college's new chief communications officer, Katharine Laidlaw, wrote to me. 'But let's keep in touch. I could absolutely see how this could become a values-based experiment at Pomona.'

Pomona would face a number of obstacles in trying to make itself less tech-dependent. The Americans With Disabilities Act requires allowing eligible students to use tools such as note-taking software, closed captioning, and other apps that live on devices. But Oona Eisenstadt, a religious-studies professor at Pomona who has taught tech-free classes for 21 years, told me that, although she is eager to follow the law (and even go beyond it) to accommodate her students, students who require devices in class are rare. If a student really needed a laptop to take notes, she added, she would consider banning the entire class from taking notes, rather than allowing the computer. 'That would feel tough at the beginning,' she said, but it 'might force us into even more presence.'

Ensuring access to course materials is another concern. Amanda Hollis-Brusky, a professor of politics and law, told me that she is thinking of returning to in-class exams because of 'a distinct change' in the essays her students submit. 'It depressed me to see how often students went first to AI just to see what it spit out, and how so much of its logic and claims still made their way into their essays,' she said. She wants to ban laptops in class too––but her students use digital course materials, which she provides to spare them from spending money on pricey physical texts. 'I don't know how to balance equity and access with the benefits of a tech-free classroom,' she lamented.
Subsidies for professors struggling with that trade-off are the sort of experiment the college could fund. Students will, of course, need to be conversant in recent technological advances to excel in many fields, and some courses will always require tech in the classroom. But just as my generation has made good use of technology, including the iPhone and ChatGPT, without having been exposed to it in college, today's students, if taught to think critically for four years, can surely teach themselves how to use chatbots and more on their own time. In fact, I expect that in the very near future, if not this coming fall, most students will arrive at Pomona already adept at using AI; they will benefit even more from the college teaching them how to think deeply without it.

Perhaps the biggest challenge of all is that so many students who don't need tech in a given course want to use it. 'In any given class I can look around and see LinkedIn pages, emails, chess games,' Kaitlyn Ulalisa, a sophomore who grew up near Milwaukee, wrote to me. In high school, Ulalisa herself used to spend hours every day scrolling on Instagram, Snapchat, and TikTok. Without them, she felt that she 'had no idea what was going on' with her peers. At Pomona, a place small enough to walk around campus and see what's going on, she deleted the apps from her phone again. Inspired by a New York Times article about a Luddite Club started by a group of teens in Brooklyn, she and a friend created a campus chapter. They meet every Friday to socialize without technology. Still, she said, for many college students, going off TikTok and Instagram seems like social death, because their main source of social capital is online.

[From the September 2017 issue: Have smartphones destroyed a generation?]

Accounts like hers suggest that students might benefit from being forced off of their devices, at least in particular campus spaces.
But Michael Steinberger, a Pomona economics professor, told me he worries that an overly heavy-handed approach might deprive students of the chance to learn for themselves. 'What I hope that we can teach our students is why they should choose not to open their phone in the dining hall,' he said. 'Why they might choose to forgo technology and write notes by hand. Why they should practice cutting off technology and lean in to in-person networking to support their own mental health, and why they should practice the discipline of choosing this for themselves. If we limit the tech, but don't teach the why, then we don't prepare our students as robustly as we might.'

Philosophically, I usually prefer the sort of hands-off approach that Steinberger is advocating. But I wonder if, having never experienced what it's like to, say, break bread in a dining hall where no one is looking at a device, students possess enough data to make informed decisions. Perhaps heavy-handed limits on tech, at least early in college, would leave them better informed about trade-offs and better equipped to make their own choices in the future.

What else would it mean for a college-wide experiment in limited tech to succeed? Administrators would ideally measure academic outcomes, effects on social life, even the standing of the college and its ability to attract excellent students. Improvements along all metrics would be ideal. But failures needn't mean wasted effort if the college publicly shares what works and what doesn't.

A successful college-wide initiative should also take care to avoid undermining the academic freedom of professors, who must retain all the flexibility they currently enjoy to make their own decisions about how to teach their classes. Some will no doubt continue with tech-heavy teaching methods. Others will keep trying alternatives.
Elijah Quetin, a visiting instructor in physics and astronomy at Pomona, told me about a creative low-tech experiment that he already has planned. Over the summer, Quetin and six students (three of them from the Luddite Club) will spend a few weeks on a ranch near the American River; during the day, they will perform physical labor—repairing fencing, laying irrigation pipes, tending to sheep and goats—and in the evening, they'll undertake an advanced course in applied mathematics inside a barn. 'We're trying to see if we can do a whole-semester course in just two weeks with no infrastructure,' he said. He called the trip 'an answer to a growing demand I'm hearing directly from students' to spend more time in the real world. It is also, he said, part of a larger challenge to 'the mass-production model of higher ed,' managed by digital tools 'instead of human labor and care.'

Even in a best-case scenario, where administrators and professors discover new ways to offer students a better education, Pomona is just one tiny college. It could easily succeed as academia writ large keeps struggling. 'My fear,' Gary Smith, an economics professor, wrote to me, 'is that education will become even more skewed with some students at elite schools with small classes learning critical thinking and communication skills, while most students at schools with large classes will cheat themselves by using LLMs'—large language models—'to cheat their way through school.'

But successful experiments at prominent liberal-arts colleges are better, for everyone, than nothing. While I, too, would lament a growing gap among college graduates, I fear a worse outcome: that all colleges will fail to teach critical thinking and communication as well as they once did, and that a decline in those skills will degrade society as a whole. If any school provides proof of concept for a better way, it might scale. Peer institutions might follow; the rest of academia might slowly adopt better practices.
Some early beneficiaries of the better approach would meanwhile fulfill the charge long etched in Pomona's concrete gates: to bear their added riches in trust for mankind. Article originally published at The Atlantic