
East Dunbartonshire council questioned over use of AI in school policy
Critics have hit out at the design of surveys that formed part of the process, as well as the council's use of artificial intelligence (AI) tools to analyse responses, with the local authority now facing formal complaints about the matter.
As part of work to develop a new policy around smartphones in schools, officials at East Dunbartonshire Council opened online surveys for teachers, parents, secondary school students and upper-primary school pupils.
Each survey, which did not collect names but did record information on the schools that young people attend, ran for around two weeks, with the council receiving a total of more than 11,000 responses across the four different groups.
In order to process the survey data 'efficiently and consistently', council officers used several AI tools to analyse the contents of open text boxes in which respondents were invited to add 'any additional information' that they wished to be considered as part of the review.
This material, including that produced by young children, was input to ChatGPT, Google Gemini and Microsoft Copilot, which were used to 'assist in reviewing and summarising the anonymous comments.' Officials say that this generated a 'breakdown of key messages' that was then provided to the project working group, but when asked to share the summary of survey responses they claimed that it 'is not available as yet.'
Asked to explain how the output of AI platforms was checked for accuracy, the council stated that cross-validation, human oversight, triangulation and bias-monitoring processes were all applied, with reviews by officials ensuring 'fidelity' to the more than 11,000 responses that were received. Officials stated that these 'safeguards' would ensure that 'the final summaries accurately reflect the breadth and nuance of stakeholder views gathered during the consultation.'
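The council has published no technical detail of how this checking worked, but the kind of cross-model 'triangulation' it describes can be illustrated in outline. The sketch below is purely hypothetical: call_model() is a stand-in for whichever vendor client is actually used, and the agreement rule is an assumption for illustration, not the council's documented method.

```python
from collections import Counter

# Purely illustrative sketch of 'triangulation' across AI summaries.
# call_model() is a hypothetical placeholder for a real vendor client
# (OpenAI, Google or Microsoft APIs); nothing here reflects the council's
# actual, unpublished pipeline.

def call_model(model: str, prompt: str) -> str:
    """Placeholder: a real implementation would call a vendor API here."""
    canned = {
        "model-a": "- screen time at night\n- filming without consent",
        "model-b": "- filming without consent\n- peer pressure",
    }
    return canned.get(model, "")

def summarise(comments: list[str], models: list[str]) -> dict[str, str]:
    """Ask each model independently for bullet-point themes in the same comments."""
    prompt = ("List the recurring themes in these survey comments as bullets:\n"
              + "\n".join(f"- {c}" for c in comments))
    return {m: call_model(m, prompt) for m in models}

def triangulate(summaries: dict[str, str], min_agree: int = 2) -> list[str]:
    """Keep only themes surfaced by at least `min_agree` models. A human
    reviewer would still need to read the raw responses to confirm fidelity."""
    counts = Counter(
        line.strip("- ").lower()
        for text in summaries.values()
        for line in text.splitlines() if line.strip()
    )
    return [theme for theme, n in counts.items() if n >= min_agree]

summaries = summarise(["Too much screen time at night", "Pupils filmed without consent"],
                      ["model-a", "model-b"])
print(triangulate(summaries))  # -> ['filming without consent']
```

Even under these generous assumptions, the sketch makes the parents' later complaint concrete: anything only one model surfaces, or anything outside the 'top-line' themes, drops out unless a human reads the originals.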
However, those taking part in the surveys were not informed that their information would be processed using AI platforms. The Information Commissioner's Office, which regulates areas such as data protection across the whole of the UK, told The Herald that it would expect organisations, including local authorities, to be 'transparent' about how data is being processed, including advising of the purpose of any AI tools to be used and explaining what the council intends to do with the outputs that are generated.
The council has told The Herald that the surveys closed on 13 or 14 May, that work on a new policy began on 19 May, and that a full draft policy had been produced and submitted to the legal department by 27 May – the same day on which the council had been approached about the issue.
However, material seen by The Herald shows officials advising parents that the policy had been written and submitted to the legal department by 20 May, just one day after the council claims to have begun drafting the document. An explanation has been requested from the council.
A comparison of the surveys issued to each group also confirms that a key question about a full-day ban on phones was not included in the parents' version of the survey, although it was present in the versions issued to teachers and pupils.
Parents were asked the extent to which they support either a ban on phone use during lessons, or a ban on use during lessons unless approved by a teacher.
However, the other versions of the survey also asked explicitly whether respondents support a ban on the use of phones during the whole school day.
The omission has provoked an angry response from some parents.
As a result of these and other concerns, formal complaints have now been submitted to East Dunbartonshire Council alleging that the 'flawed survey information and structure' is not fit for purpose, and that the views of parents have not been fully explored or fairly represented.
Commenting on behalf of the local Smartphone Free Childhood campaign group, one parent raised significant concerns about the council's approach:
'The fact that parents were the only group not asked about a full ban shocked us. But we were assured that the free text answers we gave would be properly looked at and considered.
'As a result, many parents left long, detailed and personal stories in response to this survey question.
'They shared heart-breaking stories of kids losing sleep at night after seeing things they shouldn't have. Other stories included girls and teachers being filmed without their consent - and kids being afraid to report the extent of what they're seeing in school because of peer pressure.
'There were long, careful responses outlining their concerns - where has this all gone?
'We have been told that an AI tool was used to summarise all this into five 'top-line' policy considerations. We're not sure if the rest was looked at?
'Not only is it not good enough - it's a betrayal of parents who have trusted the council to listen to their concerns.
'It's also not clear how they've shared and processed these highly personal responses from parents, children and teachers - some containing identifiable details - to an unknown 'AI platform' without our consent. We don't know who can access the data.'
The Herald contacted East Dunbartonshire Council asking whether the information in the open text boxes was checked for personal or identifying details before being submitted to AI systems. Officials were also asked to provide a copy of the council's current policy on AI use.
The response received from the council did not engage with these queries.
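The council did not say whether any such screening took place. For illustration only, a minimal pre-submission screen for obvious identifiers might look like the sketch below. This is an assumption, not the council's process: the two patterns catch emails and UK-style phone numbers and nothing more, so names, school references and other context clues would still need human review or a proper named-entity tool.

```python
import re

# Hedged, minimal sketch of screening free-text survey responses for obvious
# identifiers before they leave the organisation. These two patterns are
# illustrative assumptions, not a complete PII filter: names and school
# references would still slip through.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+44\s?|0)\d{3,4}\s?\d{3}\s?\d{3,4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Call me on 0141 555 0199 or email jo@example.com"))
# -> "Call me on [PHONE REDACTED] or email [EMAIL REDACTED]"
```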
We also asked why the council had given two different dates in response to questions about when its new draft policy was completed, and whether the council has provided false information as a consequence.
A spokesperson insisted that "the draft policy was formally submitted to Legal on 27 May for consideration" and asked to be provided with evidence suggesting otherwise so that they could investigate.
Finally, the council was asked to explain why the surveys for pupils and teachers included an explicit question about full bans on smartphones during the school day when the version for parents did not. Their spokesperson said:
"The pupil survey included a specific question on full day bans to gather targeted data from young people. The working group which consisted of Head Teachers, Depute Head Teachers, Quality Improvement Officers and an EIS representative, felt that the young people may be less likely to leave an additional comment in the open text box and so wanted to explicitly ask this question. Parents were intentionally given an open text box to avoid steering responses and to allow respondents to freely express their views. The open text box was used by parents to express their view on a full day ban which many did."

Related Articles


Reuters
AI 'vibe coding' startups burst onto scene with sky-high valuations
NEW YORK, NY, June 3 (Reuters) - Two years after the launch of ChatGPT, return on investment in generative AI has been elusive, but one area stands out: software development. So-called code generation or 'code-gen' startups are commanding sky-high valuations as corporate boardrooms look to use AI to aid, and sometimes to replace, expensive human software engineers.

Cursor, a San Francisco-based code generation startup whose tool can suggest and complete lines of code and write whole sections autonomously, raised $900 million at a $10 billion valuation in May from a who's who of tech investors, including Thrive Capital, Andreessen Horowitz and Accel. Windsurf, a Mountain View-based startup behind the popular AI coding tool Codeium, attracted the attention of ChatGPT maker OpenAI, which is now in talks to acquire the company for $3 billion, sources familiar with the matter told Reuters. Windsurf's tool is known for translating plain English commands into code, an approach sometimes called 'vibe coding' that allows people with no knowledge of computer languages to write software. OpenAI and Windsurf declined to comment on the acquisition.

'AI has automated all the repetitive, tedious work,' said Scott Wu, CEO of code-gen startup Cognition. 'The software engineer's role has already changed dramatically. It's not about memorizing esoteric syntax anymore.'

Founders of code-gen startups and their investors believe they are in a land grab, with a shrinking window to gain a critical mass of users and establish their AI coding tool as the industry standard. But because most are built on AI foundation models developed elsewhere, by the likes of OpenAI, Anthropic or DeepSeek, their costs per query are also growing, and none are yet profitable. They are also at risk of being disrupted by Google, Microsoft and OpenAI, which all announced new code-gen products in May; Anthropic is working on one as well, two sources familiar with the matter told Reuters.

The startups' rapid growth has come despite competing on big tech's home turf. Microsoft's GitHub Copilot, launched in 2021 and considered code-gen's dominant player, grew to over $500 million in revenue last year, according to a source familiar with the matter. Microsoft declined to comment on GitHub Copilot's revenue. On its earnings call in April, the company said the product has over 15 million users.

As AI reshapes the industry, many jobs, particularly entry-level coding positions that involve more basic, repetitive work, may be eliminated. SignalFire, a VC firm that tracks tech hiring, found that new hires with less than a year of experience fell 24% in 2024, a drop it attributes to tasks once assigned to entry-level software engineers now being fulfilled in part by AI.

Google's CEO said in April that 'well over 30%' of Google's code is now AI-generated, and Amazon CEO Andy Jassy said last year the company had saved 'the equivalent of 4,500 developer-years' by using AI. Google and Amazon declined to comment. In May, Microsoft CEO Satya Nadella said at a conference that approximately 20 to 30% of the company's code is now AI-generated. The same month, the company announced layoffs of 6,000 workers globally, with software developers making up over 40% of the roles cut in Microsoft's home state of Washington.

'We're focused on creating AI that empowers developers to be more productive, creative, and save time,' a Microsoft spokesperson said. 'This means some roles will change with the revolution of AI, but human intelligence remains at the center of the software development life cycle.'

MOUNTING LOSSES

Some 'vibe coding' platforms already boast substantial annualized revenues. Cursor, with just 60 employees, went from zero to $100 million in recurring revenue by January 2025, less than two years after its launch. Windsurf, founded in 2021, launched its code generation product in November 2024 and is already bringing in $50 million in annualized revenue, according to a source familiar with the company. But both startups operate with negative gross margins, meaning they spend more than they make, according to four investor sources familiar with their operations.

'The prices people are paying for coding assistants are going to get more expensive,' Quinn Slack, CEO of coding startup Sourcegraph, told Reuters. To make the higher cost an easier pill for customers to swallow, Sourcegraph now offers a drop-down menu that lets users choose which models they want to work with, from open-source models such as DeepSeek to the most advanced reasoning models from Anthropic and OpenAI, so they can opt for cheaper models for basic questions.

Both Cursor and Windsurf are led by recent MIT graduates in their twenties, and exemplify the gold-rush era of the AI startup scene. 'I haven't seen people working this hard since the first Internet boom,' said Martin Casado, a general partner at Andreessen Horowitz, an investor in Anysphere, the company behind Cursor.

What's less clear is whether the dozen or so code-gen companies will be able to hang on to their customers as big tech moves in. 'In many cases, it's less about who's got the best technology - it's about who is going to make the best use of that technology, and who's going to be able to sell their products better than others,' said Scott Raney, managing director at Redpoint Ventures, whose firm invested in Sourcegraph and Poolside, a software development startup that's building its own AI foundation model.

Most AI coding startups currently rely on the Claude model from Anthropic, which crossed $3 billion in annualized revenue in May, in part on the strength of fees paid by code-gen companies. But some startups are attempting to build their own models. In May, Windsurf announced its first in-house AI models, optimized for software engineering, in a bid to control the user experience. Cursor has also hired a team of researchers to pre-train its own large frontier-level models, which could spare the company from paying foundation model companies so much money, according to two sources familiar with the matter.

Startups looking to train their own AI coding models face an uphill battle: it can easily cost millions to buy or rent the computing capacity needed to train a large language model. Replit earlier dropped plans to train its own model. Poolside, which has raised more than $600 million to build a coding-specific model, has announced a partnership with Amazon Web Services and is testing with customers, but hasn't made any product generally available yet. Another code-gen startup, Magic Dev, which has raised nearly $500 million since 2023, told investors a frontier-level coding model was coming in summer 2024 but has yet to launch a product. Poolside declined to comment. Magic Dev did not respond to a request for comment.
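Sourcegraph's drop-down reflects a common cost-control pattern: route cheap models to simple queries and reserve expensive reasoning models for hard ones. The sketch below is a generic illustration of that idea, not Sourcegraph's actual implementation; the model names, prices and the length-based heuristic are assumptions for the example.

```python
# Generic sketch of cost-tiered model routing, as described in the article.
# Not Sourcegraph's implementation: model names, prices and the heuristic
# are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str
    usd_per_million_tokens: float

CHEAP = ModelTier("open-source-model", 0.50)    # e.g. a DeepSeek-class model
PREMIUM = ModelTier("reasoning-model", 15.00)   # e.g. a frontier reasoning model

def pick_model(query: str, user_override: str | None = None) -> ModelTier:
    """Honour the user's 'drop-down' choice if given; otherwise apply a crude
    heuristic that sends short, simple questions to the cheap tier."""
    if user_override == "premium":
        return PREMIUM
    if user_override == "cheap":
        return CHEAP
    looks_simple = len(query.split()) < 30 and "refactor" not in query.lower()
    return CHEAP if looks_simple else PREMIUM

print(pick_model("What does this regex do?").name)          # open-source-model
print(pick_model("Refactor this module to async IO").name)  # reasoning-model
```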


The Guardian
'Nobody wants a robot to read them a story!' The creatives and academics rejecting AI – at work and at home
The novelist Ewan Morrison was alarmed, though amused, to discover he had written a book called Nine Inches Pleases a Lady. Intrigued by the limits of generative artificial intelligence (AI), he had asked ChatGPT to give him the names of the 12 novels he had written. 'I've only written nine,' he says. 'Always eager to please, it decided to invent three.' The 'nine inches' from the fake title it hallucinated was stolen from a filthy Robert Burns poem. 'I just distrust these systems when it comes to truth,' says Morrison. He is yet to write Nine Inches – 'or its sequel, Eighteen Inches', he laughs. His actual latest book, For Emma, imagining AI brain-implant chips, is about the human costs of technology.

Morrison keeps an eye on the machines, such as OpenAI's ChatGPT, and their capabilities, but he refuses to use them in his own life and work. He is one of a growing number of people who are actively resisting: people who are terrified of the power of generative AI and its potential for harm and don't want to feed the beast; those who have just decided that it's a bit rubbish, and more trouble than it's worth; and those who simply prefer humans to robots.

Go online, and it's easy to find AI proponents who dismiss refuseniks as ignorant luddites – or worse, smug hipsters. I possibly fall into both camps, given that I have decidedly Amish interests (board games, gardening, animal husbandry) and write for the Guardian. Friends swear by ChatGPT for parenting advice, and I know someone who uses it all day for work in her consultancy business, but I haven't used it since playing around after it launched in 2022. Admittedly ChatGPT might have done a better job, but this piece was handcrafted using organic words from my artisanal writing studio. (OK, I mean bed.) I could have assumed my interviewees' thoughts from plundering their social media posts and research papers, as ChatGPT would have done, but it was far more enjoyable to pick up the phone and talk, human to human. Two of my interviewees were interrupted by their pets, and each made me laugh in some way (full disclosure: AI then transcribed the noise).

On X, where Morrison sometimes clashes with AI enthusiasts, a common insult is 'decel' (decelerationist), but it makes him laugh when people think he's the one who isn't keeping up. 'There's nothing [that stops] accelerationism more than failure to deliver on what you promised. Hitting a brick wall is a good way to decelerate,' he says. One recent study found that AI answered more than 60% of queries inaccurately.

Morrison was drawn into the argument by what he would now call 'alarmist fears about the potential for superintelligence and runaway AI. The more I've got into it, the more I realise that's a fiction that's been dangled before the investors of the world, so they'll invest billions – in fact, half a trillion – into this quest for artificial superintelligence. It's a fantasy, a product of venture capital gone nuts.'

There are also copyright violations – generative AI is trained on existing material – that threaten him as a writer, and his wife, screenwriter Emily Ballou. In the entertainment industry, he says, people are using 'AI algorithms to determine what projects get the go-ahead, and that means we're stuck remaking the past. The algorithms say "More of the same", because it's all they can do.' Morrison says he has a long list of complaints. 'They've been stacking up over the past few years.'

He is concerned about the job losses (Bill Gates recently predicted AI would lead to a two-day work week). Then there are 'tech addiction, the ecological impact, the damage to the education system – 92% of students are now using AI'. He worries about the way tech companies spy on us to make AI personalised, and is horrified at AI-enabled weapons being used in Ukraine. 'I find that ethically revolting.'

Others cite similar reasons for not using AI. April Doty, an audiobook narrator, is appalled at the environmental cost – the computational power required to perform an AI search and answer is huge. 'I'm infuriated that you can't turn off the AI overviews in Google search,' she says. 'Whenever you look anything up now you're basically torching the planet.' She has started to use other search engines. 'But, more and more, we're surrounded by it, and there's no off switch. That makes me angry.' Where she still can, she says, 'I'm opting out of using AI.'

In her own field, she is concerned about the number of books that are being 'read' by machines. Audible, the Amazon-owned audiobook provider, has just announced it will allow publishers to create audiobooks using its AI technology. 'I don't know anybody who wants a robot to read them a story, but I am concerned that it is going to ruin the experience to the point where people don't want to subscribe to audiobook platforms any more,' says Doty. She hasn't lost jobs to AI yet, but other colleagues have, and chances are it will happen. AI models can't 'narrate', she says. 'Narrators don't just read words; they sense and express the feelings beneath the words. AI can never do this job because it requires decades of experience in being a human being.'

Emily M Bender, professor of linguistics at the University of Washington and co-author of a new book, The AI Con, has many reasons why she doesn't want to use large language models (LLMs) such as ChatGPT. 'But maybe the first one is that I'm not interested in reading something that nobody wrote,' she says. 'I read because I want to understand how somebody sees something, and there's no "somebody" inside the synthetic text-extruding machines.' It's just a collage made from lots of different people's words, she says. Does she feel she is being 'left behind', as AI enthusiasts would say? 'No, not at all. My reaction to that is, "Where's everybody going?"' She laughs as if to say: nowhere good.

'When we turn to synthetic media rather than authentic media, we are losing out on human connection,' says Bender. 'That's both at a personal level – what we get out of connecting to other people – and in terms of strength of community.' She cites Chris Gilliard, the surveillance and privacy researcher. 'He made the very important point that you can see this as a technological move by the companies to isolate us from each other, and to set things up so that all of our interactions are mediated through their products. We don't need that, for us or our communities.'

Despite Bender's well-publicised position – she has long been a high-profile critic of LLMs – incredibly, she has seen students turn in AI-generated work. 'That's very sad.' She doesn't want to be policing, or even blaming, students. 'My job is to make sure students understand why it is that turning to a large language model is depriving themselves of a learning opportunity, in terms of what they would get out of doing the work.' Does she think people should boycott generative AI? 'Boycott suggests organised political action, and sure, why not?' she says. 'I also think that people are individually better off if they don't use them.'

Some people have so far held out, but are reluctantly realising they may end up using it. Tom, who works in IT for the government, doesn't use AI in his tech work, but found colleagues were using it in other ways. Promotion is partly decided by annual appraisals staff have to write, and he had asked a manager whose appraisal had impressed him how he'd done it, thinking he'd spent days on it. 'He said, "I just spent 10 minutes – I used ChatGPT,"' Tom recalls. 'He suggested I should do the same, which I don't agree with. I made that point, and he said, "Well, you're probably not going to get anywhere unless you do."' Using AI would feel like cheating, but Tom worries refusing to do so now puts him at a disadvantage. 'I almost feel like I have no choice but to use it at this point. I might have to put morals aside.'

Others, despite their misgivings, limit how they use it, and only for specific tasks. Steve Royle, professor of cell biology at the University of Warwick, uses ChatGPT for the 'grunt work' of writing computer code to analyse data. 'But that's really the limit. I don't want it to generate code from scratch. When you let it do that, you spend way more time debugging it afterwards. My view is, it's a waste of time if you let it try and do too much for you.' He also worries that if he becomes too reliant on AI, his coding skills will atrophy. 'The AI enthusiasts say, "Don't worry, eventually nobody will need to know anything." I don't subscribe to that.' Part of his job is to write research papers and grant proposals. 'I absolutely will not use it for generating any text,' says Royle. 'For me, in the process of writing, you formulate your ideas, and by rewriting and editing, it really crystallises what you want to say. Having a machine do that is not what it's about.'

Generative AI, says film-maker and writer Justine Bateman, 'is one of the worst ideas society has ever come up with'. She says she despises how it incapacitates us. 'They're trying to convince people they can't do the things they've been doing easily for years – to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies – to write that for you.' We will get to the point, she says with a grim laugh, 'that you will essentially become just a skin bag of organs and bones, nothing else. You won't know anything and you will be told repeatedly that you can't do it, which is the opposite of what life has to offer. Capitulating all kinds of decisions like where to go on vacation, what to wear today, who to date, what to eat. People are already doing this. You won't have to process grief, because you'll have uploaded photos and voice messages from your mother who just died, and then she can talk to you via AI video call every day. One of the ways it's going to destroy humans, long before there's a nuclear disaster, is going to be the emotional hollowing-out of people.'

She is not interested. 'It is the complete opposite direction of where I'm going as a film-maker and author. Generative AI is like a blender – you put in millions of examples of the type of thing you want and it will give you a Frankenstein spoonful of it.' It's theft, she says, and regurgitation. 'Nothing original will come out of it, by the nature of what it is. Anyone who uses generative AI, who thinks they're an artist, is stopping their creativity.'

Some studios, such as the animation company Studio Ghibli, have sworn off using AI, but others appear to be salivating at the prospect. In 2023, DreamWorks founder Jeffrey Katzenberg said AI would cut the costs of its animated films by 90%. Bateman thinks audiences will tire of AI-created content. 'Human beings will react to this in the way they react to junk food,' she says. Deliciously artificial to some, if not nourishing – but many of us will turn off.

Last year she set up an organisation, Credo 23, and a film festival to showcase films made without AI. She likens it to an 'organic stamp for films, that tells the audience no AI was used'. People, she says, will 'hunger for something raw, real and human'.

In everyday life, Bateman is trying 'to be in a parallel universe, where I'm trying to avoid [AI] as much as possible'. It's not that she is anti-tech, she stresses. 'I have a computer science degree, I love tech. I love salt, too, but I don't put it on everything.'

In fact, everyone I speak to is a technophile in some way. Doty describes herself as 'very tech-forward', but she adds that she values human connection, which AI is threatening. 'We keep moving like zombies towards a world that nobody really wants to live in.' Royle codes and runs servers, but also describes himself as a 'conscientious AI objector'. Bender specialises in computational linguistics and was named by Time as one of the top 100 people in AI in 2023. 'I am a technologist,' she says, 'but I believe that technology should be built by communities for their own purposes, rather than by large corporations for theirs.' She also adds, with a laugh: 'The Luddites were awesome! I would wear that badge with pride.' Morrison, too, says: 'I quite like the Luddites – people standing up to protect the jobs that keep their families and their communities alive.'


Geeky Gadgets
The Ultimate Guide to AI Tools: What's Worth Your Money?
What if the tools you rely on today could work smarter, faster, and more intuitively, saving you hours every week? With the explosion of AI tools flooding the market, the promise of greater productivity has never been more enticing. Yet the sheer number of options can feel overwhelming, and not every tool delivers on its claims. From privacy concerns to hidden limitations in free versions, finding the right AI solution often feels like navigating a maze. Whether you're a professional juggling tight deadlines or a creative looking to streamline your workflow, understanding which tools are truly worth your investment is no longer a luxury; it's a necessity.

In this comprehensive how-to by Rick Mulready, you'll uncover the strengths, weaknesses, and unique features of five leading AI tools: ChatGPT, Claude, Google Gemini, Perplexity, and Grok. What makes this guide stand out is its focus on helping you weigh value for money against your specific needs, whether that's privacy, versatility, or seamless integration with existing tools. You'll also gain insights into how these platforms handle tasks like brainstorming, coding, or managing large-scale projects. By the end, you'll not only know which tools deserve your attention but also feel empowered to make an informed decision that aligns with your goals. After all, choosing the right AI tool isn't just about saving time; it's about unlocking your full potential.

Top 5 AI Tools Compared

ChatGPT: A Versatile Powerhouse

ChatGPT, created by OpenAI, stands out as a highly versatile AI tool capable of handling a wide range of tasks. From brainstorming ideas and analyzing documents to reasoning and even generating images, ChatGPT offers robust functionality. The paid version significantly enhances its utility with a 1-million-token context window, making it particularly effective for extensive research and long-term projects. The free version, while functional, is more restrictive, offering a smaller context window and limiting users to 10 messages every 3 hours.

A notable drawback is ChatGPT's tendency to 'hallucinate', or produce inaccurate or fabricated information. Privacy is another critical factor: by default, user conversations are used to train OpenAI's models, though opting out is possible. For those concerned about privacy, a temporary chat option is available. If you frequently require advanced features or extended access, upgrading to the paid version is a practical choice.

Claude: Ethical AI with Privacy at Its Core

Anthropic's Claude emphasizes ethical design and privacy, making it a standout option for users who prioritize data security. Claude excels in nuanced conversations, content creation, and coding tasks. Unlike many competitors, it does not use user data for training by default, offering stronger privacy protections. Its context window, while substantial at 200,000 tokens, is smaller than some alternatives, and it lacks the ability to generate images.

The free version of Claude is limited in scope, which makes the paid plan more appealing for users who require advanced coding support or seamless integration with Google apps. If maintaining privacy and ethical AI practices are your top priorities, Claude is a compelling choice that aligns with these values.

Google Gemini: Multimodal AI for Complex Tasks

Google Gemini is a multimodal AI tool that integrates seamlessly with Google's ecosystem, making it a natural fit for users already relying on Google apps. Its standout feature is a 1-million-token context window, which is particularly useful for processing large documents. Additionally, Gemini supports multimodal capabilities, allowing it to handle and generate content across various formats, including text and images.

Despite its strengths, Gemini has notable limitations. It struggles with creative content generation and often delivers responses that lack personality. Accuracy issues and limited features in the free version further detract from its appeal. Privacy is governed by Google's policies, requiring users to opt out of data usage for training purposes. Gemini is best suited for users who frequently work with large documents or rely heavily on Google's suite of applications.

Perplexity: Quick, Citation-Supported Searches

Perplexity distinguishes itself with its focus on real-time information retrieval and search results backed by citations. This makes it an excellent choice for users who need quick, reliable answers supported by credible sources. The paid version unlocks access to advanced AI models, further enhancing its capabilities.

However, Perplexity has its limitations. It lacks depth in research responses and restricts free searches to just three per day. Privacy concerns also arise from its default query logging, which users must manually disable. Perplexity is particularly well suited to professionals or academics who require frequent, citation-supported searches to inform their work.

Grok: Real-Time Insights for X Users

Grok is uniquely tailored to users of X (formerly Twitter), offering real-time insights and a distinctive, edgy personality. It excels at providing up-to-date social media data, making it a valuable tool for users who rely on such information. However, its niche focus limits its broader applicability, and its accuracy can sometimes be inconsistent. Its limited integrations also restrict its utility for users seeking a more versatile AI tool. Privacy is another consideration, as users must opt out of data usage for training purposes; a private chat option is available for those who prioritize confidentiality. Grok is best suited to heavy X users who value real-time data integration and social media insights.

How to Choose the Right AI Tool

Selecting the right AI tool depends on your unique requirements and priorities. Each tool offers distinct advantages and limitations, making it essential to evaluate how their features align with your goals. Below is a breakdown of the key strengths of each tool to help you decide:

ChatGPT: A versatile option for brainstorming, research, and extended projects. Best for users who need advanced features and frequent access.

Claude: Prioritizes privacy and ethical AI design. Ideal for coding tasks and users who value strong data protections.

Google Gemini: Excels in handling large documents and integrates seamlessly with Google apps. Best for users working within the Google ecosystem.

Perplexity: Perfect for quick, citation-backed searches. Suited for professionals and academics needing reliable, real-time information.

Grok: Designed for X users who rely on real-time social media insights. Best for niche use cases involving social media data.

Making an Informed Decision

No single AI tool can meet every need, so understanding your priorities is crucial. Whether you value versatility, privacy, multimodal capabilities, or real-time insights, there is an AI tool designed to support your objectives. By carefully assessing your workflows and the features most relevant to your tasks, you can confidently choose the AI solution that best aligns with your goals.