Managing reputation in the age of synthetic content

Reuters | July 24, 2025
July 18, 2025 - The introduction and increasing use of generative artificial intelligence (AI) have had a profound impact across the legal industry, offering unprecedented opportunities for efficiency while introducing significant risks to law firm brands.
Tools such as ChatGPT, Grok, Gemini, and proprietary large language models (LLMs) can turn out client memos, legal alerts, and thought leadership articles at record speed. However, as law firms embrace these technologies, they face an urgent imperative: harnessing the power of generative AI without compromising the integrity of their brand.
Generative AI offers law firms a range of efficiencies, such as the ability to produce first drafts of documents, analyze vast amounts of data, and generate commentary on case law in minutes. These capabilities save significant time, particularly in fast-moving regulatory and litigation environments.
As an example, AI tools can quickly draft breaking-news client alerts on recently introduced legislation and summarize complex case law. This can free up an attorney's time for more strategic work and enhance a firm's responsiveness, positioning it as a leader in a competitive market. Yet speed can come at a cost. AI lacks a human's inherent understanding of legal nuance, jurisdictional differences, and, most importantly, ethical obligations. It can operate without accountability, increasing the risk of producing inaccurate, overly generalized, or fabricated content.
A notable example is a syndicated summer reading list published in the Chicago Sun-Times and Philadelphia Inquirer in 2025, which featured nonexistent books. As reported in The Washington Post ("Major newspapers ran a summer reading list. AI made up book titles.", May 20, 2025), the author admitted to using AI tools without human editing, leading to factual errors and reputational damage for both the media outlets and the author.
For law firms, similar missteps could erode client trust and tarnish a brand built over decades. The Wall Street Journal certainly could have been thinking of brand awareness when it issued its own list of 14 recommended books.
The adoption of generative AI introduces several risks that can undermine a law firm's reputation:
(1) Accuracy and misinformation: AI-generated content is prone to "hallucinations," information that is stated confidently but is factually incorrect. In the legal context, publishing flawed analyses, outdated citations, or incorrect interpretations of case law could have disastrous consequences, particularly if the firm's name is attached.
(2) Plagiarism and intellectual property infringement: Many AI models are trained on datasets scraped from the internet. This raises the risk that outputs may inadvertently replicate existing work without proper attribution, which could expose firms to copyright infringement claims or ethical violations, both of which carry significant reputational and legal consequences.
(3) Dilution of expertise and thought leadership: Overreliance on AI for client alerts, bylined articles, or white papers risks diluting a firm's unique voice and eroding its authority. We see too many headlines leaning on the word "navigating," do we not? Clients hire lawyers for their cultivated expertise, unique insight, and judgment, not for generic, machine-generated summaries that lack depth or context.
(4) Reputational fallout from internal use: Even internally circulated AI-generated content, such as draft memos or research notes, can create liabilities if leaked or misused. A poorly written or inaccurate draft that reaches the public eye, whether through a data breach or accidental disclosure, can damage a firm's credibility.
From a public relations perspective, generative AI requires a new framework for content governance. Law firms must move beyond enthusiasm for efficiency and embrace disciplined oversight.
Here are five essential steps:
•Establish a human touch policy: All AI-generated legal content should be reviewed, and ideally co-authored, by a licensed attorney. Firms must make clear that no AI-generated output reaches clients without human verification.
•Implement AI disclosures and transparency protocols: Whether in client alerts or public-facing articles, firms should consider disclosing when AI has been used in content creation. This transparency not only builds trust but also protects against future claims of misrepresentation or malpractice.
•Train attorneys and comms teams in AI literacy: Educate your lawyers, marketers, and PR professionals on both the capabilities and the limitations of generative AI. Understanding how these tools work, and where they can fail, is critical to mitigating brand risk.
•Audit for originality and attribution: Use plagiarism detection software such as Grammarly, along with internal checks, to ensure content is original and appropriately sourced (a minimal sketch of one such internal check follows this list). This applies even to internal research memos and pitch materials.
•Define your firm's brand voice and reinforce it: AI can write in any voice; the challenge is ensuring it consistently writes in yours. Set clear tone, style, and messaging guidelines so that AI-generated drafts align with your firm's brand identity.
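To make the internal checks mentioned in the originality step concrete, here is a minimal, hypothetical sketch in Python of what such a check could look like: it flags paragraphs of a draft that closely resemble previously published firm content using simple similarity scoring. The file names and the 0.85 threshold are illustrative assumptions, and a script like this would supplement, not replace, commercial plagiarism detection tools and attorney review.

# Minimal sketch of an internal originality check (illustrative only; the file
# paths and similarity threshold are assumptions, not any firm's real setup).
from difflib import SequenceMatcher
from pathlib import Path

SIMILARITY_THRESHOLD = 0.85  # flag paragraphs that are at least 85% similar

def paragraphs(text):
    # Split plain text into non-empty paragraphs.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def flag_overlaps(draft, published, threshold=SIMILARITY_THRESHOLD):
    # Compare every draft paragraph against every previously published paragraph
    # and yield the pairs whose similarity ratio meets the threshold.
    for dp in paragraphs(draft):
        for sp in paragraphs(published):
            ratio = SequenceMatcher(None, dp.lower(), sp.lower()).ratio()
            if ratio >= threshold:
                yield dp, sp, ratio

if __name__ == "__main__":
    draft = Path("draft_client_alert.txt").read_text(encoding="utf-8")
    corpus = Path("published_alerts.txt").read_text(encoding="utf-8")
    for dp, sp, ratio in flag_overlaps(draft, corpus):
        print(f"Possible overlap ({ratio:.0%}): '{dp[:60]}...' matches '{sp[:60]}...'")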
Used thoughtfully, generative AI can elevate a firm's ability to communicate, respond, and engage with clients and the media. But it must never replace human insight, editorial judgment, or the firm's hard-earned reputation.
Your brand is your most valuable asset, so guard it accordingly, even when the content comes from a machine. AI should serve as an enhancer, not a substitute, for the qualities that make your firm unique. Thoughtful implementation requires rigorous oversight, ethical considerations, and a steadfast commitment to authenticity.

Related Articles

Is AI raising a generation of ‘tech-reliant empty heads'?

Metro

It began with snide looks and remarks and ended up in full-on bullying, with 11-year-old Sophie* coming home from school one day in tears. When her mum asked what was wrong, she discovered that Sophie's friends had turned their backs on her, leaving the little girl feeling confused, bereft and isolated.

'I noticed the way they were talking to her on the weekend; just being cruel and asking her pointed questions about what she was wearing and why,' Sophie's mum, Ella*, tells Metro. 'When she went back to school on the Monday, one girl had got the whole group to stop talking to her. Sophie went to sit down at the table at lunch, and they all got up and moved.

'These are girls she'd grown up with. Later in the playground, they told her: "Sorry, we're not allowed to play with you" and walked off.'

While Ella and her husband did their best to support their daughter, Sophie was growing increasingly anxious and eventually turned to an unlikely source for advice.

'Sophie had seen me use ChatGPT to help write emails, so she started to have a go. Using my phone, she asked how to deal with bullying, how to get more friends, and how to make people like her.

'At first I was a bit alarmed because you can ask it anything and it will give you answers. I was worried about what she wanted to know. But it turned out Sophie found it a real comfort,' remembers Ella. 'She told me she could talk to it and it spoke back to her like a real human. She would explain what was going on and it would say things like: "I hope you're okay Sophie" and "this is horrible to hear." I had to explain to her that it's not real, that it has been taught to seem empathetic.'

Ella admits she was surprised that ChatGPT could prove a useful tool and was just grateful that her daughter had found an outlet for her anxiety. And while adults may be equally impressed and daunted by the unstoppable march of artificial intelligence, one in five under-12s are already using it at least once a month, according to the Alan Turing Institute. It means an increasing number of primary-age children are growing reliant on AI for everything from entertainment to emotional support.

However, although many parents like Ella might feel it's a help rather than a hindrance, a new report from Internet Matters, Me, Myself and AI, has found that children are often being fed inaccurate information and inappropriate content, and are even forming complicated relationships with chatbots. There are also fears over the long-term impact it will have on children's education, with kids – and parents – using it to help with homework.

One teacher from Hertfordshire, who asked to remain nameless, had to throw out one child's work as it had clearly been lifted straight from ChatGPT. 'It was a 500-word creative writing task and a few hadn't been written by the children. One of them I could just tell – from knowing the child's writing in class – it was obvious. They'd gone into chat and submitted it online via Google Classroom.

'It was a real shame. I think it can be useful but children need to be taught how to use it, so it's a source of inspiration, rather than providing a whole piece of writing.'

Fellow educator Karen Simpson is also concerned that her pupils have admitted using AI for help with homework, creative writing, project research and language and spelling. The primary and secondary tutor of more than 20 years tells Metro: 'I have experienced children asking AI tools to complete maths problems or write stories for them rather than attempting it themselves. They are using it to generate ideas for stories or even full pieces of writing, which means they miss out on practising sentence structure, vocabulary and spelling. And they use it to check or rewrite their work, which can prevent them from learning how to edit or improve their writing independently.'

'Children don't experience the process of making mistakes, thinking critically and building resilience,' adds Karen, from Inverness. 'These skills are essential at primary level. AI definitely has its place when used as a support tool for older learners, but for younger children it risks undermining the very skills they need for future success.'

Mark Knoop's son, Fred, uses ChatGPT for everyday tasks, and Mark admits he has been impressed by what he's seen. As a software engineer and the founder of edtech startup Flashily, which helps children learn to read, it's unsurprising he might be more open to the idea, but Mark firmly believes that artificial intelligence can open doors for young people when used with adult guidance.

He explains that after giving his son, then seven, his tablet to occupy him while he was at the barbers, the schoolboy used ChatGPT to code a video game. 'Fred has always been into computers and gaming, but with things like Roblox and Minecraft, there is a barrier because systems are so complicated. When I grew up with a BBC Micro, you could just type in commands and run it; it was very simple,' Mark tells Metro. 'Using ChatGPT, off his own back, Fred created the character, its armour and sword, and wrote a game that works. It is amazing to me and really encouraging.'

A scroll through Fred's search history shows how much he uses ChatGPT now: to find out about Japan and China, to research his favourite animal – pandas – or to identify poisonous plants. He also uses the voice function to save the time it would take to type prompts, and Mark has seen how the model has protected Fred from unsuitable content.

'For his computer game, he wanted a coconut to land on one character's head, in a comedy way, rather than a malicious one. But ChatGPT refused to generate the image, because it would be depicting injury. For me, ChatGPT is a learning aid for young children who have got lots of ideas and enthusiasm to get something working really quickly,' he adds.

Other parents aren't so sure, however. Abiola Omoade, from Cheltenham, regrets the day she bought a digital assistant, which she thought would provide music and entertainment but has instead hooked her primary-age sons' ever-increasing attention. 'I bought them a wall clock to help them learn to read the time. But they just ask Alexa,' the mother-of-three says with irritation.

Abiola encourages reading, is hot on schoolwork and likes her sons Daniel and David to have inquisitive minds. But she's noticed that instead of asking her questions, they now head straight for the AI assistant, bypassing other lines of conversation and occasionally getting incorrect answers.

'Alexa has meant they have regressed. My son Daniel, 9, plays Minecraft, and he will ask how to get out of fixes, which means it is limiting his problem-solving skills. And where they would once ask me a question, and it would turn into a conversation, now they go straight to Alexa, which bothers me as I know the answers aren't always right, and they lack nuance and diversity. AI is shutting down conversation and I worry about that.

'They ask Alexa everything, because it is so easy. But I worry the knowledge won't stick, and because it is so readily accessible it will affect their memory, as they aren't making an effort to learn new things. I fear that AI is going to create a generation of empty-heads who are overly reliant on tech.'

Tutor Karen adds that the concern is that AI often deprives children of important tools they need to learn from an early age. 'For younger children, the priority should be building strong, independent learning habits first. Primary school is a critical stage for developing foundational skills in reading, writing, and problem-solving. If children start relying on AI to generate ideas or answers, they may miss out on the deep thinking and practice required to build these skills.'

Meanwhile, AI trainer Dr Naomi Tyrell issues a stark warning. The advisor to the Welsh government, universities and charities cites a case in which an American teenager died by suicide shortly after an AI chatbot encouraged him to 'come home to me as soon as possible'.

'Cases like this are heartbreaking,' Dr Tyrell tells Metro. 'There are no safeguards and the tools need stronger age verification – just like social media. Ofcom warned about AI risks to young people in October 2024, and while the UK's Online Safety Act is now enforceable, there really needs to be more AI literacy education – for parents as well as children. We know children often learn things quicker than us and can circumvent protections that are put in place for them.'

And just like the advent of social media, the pace of change in AI will be so fast that legislation will struggle to keep up, Naomi warns. 'That means children are vulnerable unless we consciously and conscientiously safeguard them through education and oversight. I would not recommend that under-12s use AI tools unsupervised, unless it has been specially designed for children and has considered their safety in its design.

'We know what has happened with safeguarding children's use of social media – laws and policy have not kept up despite there being significant evidence of harm. Children's use of AI tools is the next big issue – it feels like a runaway train already, and it will have serious consequences for children.'

*Names have been changed

‘It's missing something': AGI, superintelligence and a race for the future

The Guardian

A significant step forward but not a leap over the finish line. That was how Sam Altman, chief executive of OpenAI, described the latest upgrade to ChatGPT this week.

The race Altman was referring to was artificial general intelligence (AGI), a theoretical state of AI where, by OpenAI's definition, a highly autonomous system is able to do a human's job. Describing the new GPT-5 model, which will power ChatGPT, as a 'significant step on the path to AGI', he nonetheless added a hefty caveat. '[It is] missing something quite important, many things quite important,' said Altman, such as the model's inability to 'continuously learn' even after its launch. In other words, these systems are impressive but they have yet to crack the autonomy that would allow them to do a full-time job.

OpenAI's competitors, also flush with billions of dollars to lavish on the same goal, are straining for the tape too. Last month, Mark Zuckerberg, chief executive of Facebook parent Meta, said development of superintelligence – another theoretical state of AI, where a system far exceeds human cognitive abilities – is 'now in sight'. Google's AI unit on Tuesday outlined its next step towards AGI by announcing an unreleased model that trains AIs to interact with a convincing simulation of the real world, while Anthropic, another company making significant advances, announced an upgrade to its Claude Opus 4 model.

So where does this leave the race to AGI and superintelligence? Benedict Evans, a tech analyst, says the race towards a theoretical state of AI is taking place against a backdrop of scientific uncertainty – despite the intellectual and financial investment in the quest. Describing AGI as a 'thought experiment as much as it is a technology', he says: 'We don't really have a theoretical model of why generative AI models work so well and what would have to happen for them to get to this state of AGI.'

He adds: 'It's like saying "we're building the Apollo programme but we don't actually know how gravity works or how far away the moon is, or how a rocket works, but if we keep on making the rocket bigger maybe we'll get there".

'To use the term of the moment, it's very vibes-based. All of these AI scientists are really just telling us what their personal vibes are on whether we'll reach this theoretical state – but they don't know. And that's what sensible experts say too.'

However, Aaron Rosenberg, a partner at venture capital firm Radical Ventures – whose investments include leading AI firm Cohere – and former head of strategy and operations at Google's AI unit DeepMind, says a more limited definition of AGI could be achieved around the end of the decade. 'If you define AGI more narrowly as at least 80th percentile human-level performance in 80% of economically relevant digital tasks, then I think that's within reach in the next five years,' he says.

Matt Murphy, a partner at VC firm Menlo Ventures, says the definition of AGI is a 'moving target'. He adds: 'I'd say the race will continue to play out for years to come, and that definition will keep evolving and the bar being raised.'

Even without AGI, the generative AI systems in circulation are making money. The New York Times reported this month that OpenAI's annual recurring revenue has reached $13bn (£10bn), up from $10bn earlier in the summer, and could pass $20bn by the year end. Meanwhile, OpenAI is reportedly in talks about a sale of shares held by current and former employees that would value it at about $500bn, exceeding the price tag for Elon Musk's SpaceX.

Some experts view statements about superintelligent systems as creating unrealistic expectations, while distracting from more immediate concerns, such as making sure that systems being deployed now are reliable, transparent and free of bias. 'The rush to claim "superintelligence" among the major tech companies reflects more about competitive positioning than actual technical breakthroughs,' says David Bader, director of the institute for data science at the New Jersey Institute of Technology.

'We need to distinguish between genuine advances and marketing narratives designed to attract talent and investment. From a technical standpoint, we're seeing impressive improvements in specific capabilities – better reasoning, more sophisticated planning, enhanced multimodal understanding.

'But superintelligence, properly defined, would represent systems that exceed human performance across virtually all cognitive domains. We're nowhere near that threshold.'

Nonetheless, the major US tech firms will keep trying to build systems that match or exceed human intelligence at most tasks. Google's parent Alphabet, Meta, Microsoft and Amazon alone will spend nearly $400bn this year on AI, according to the Wall Street Journal, comfortably more than EU members' defence spending.

Rosenberg acknowledges he is a former Google DeepMind employee but says the company has big advantages in data, hardware, infrastructure and an array of products to hone the technology, from search to maps and YouTube. But advantages can be slim. 'On the frontier, as soon as an innovation emerges, everyone else is quick to adopt it. It's hard to gain a huge gap right now,' he says.

It is also a global race, or rather a contest, that includes China. DeepSeek came from nowhere this year to announce the DeepSeek R1 model, boasting of 'powerful and intriguing reasoning behaviours' comparable with OpenAI's best work. Major companies looking to integrate AI into their operations have taken note. Saudi Aramco, the world's largest oil company, uses DeepSeek's AI technology in its main datacentre and said it was 'really making a big difference' to its IT systems and was making the company more efficient.

According to Artificial Analysis, a company that ranks AI models, six of the top 20 on its leaderboard – which ranks models according to a range of metrics including intelligence, price and speed – are Chinese. The six models are developed by DeepSeek, Zhipu AI, Alibaba and MiniMax. On the leaderboard for video generation models, six of the top 10 – including the current leader, ByteDance's Seedance – are also Chinese.

Microsoft's president, Brad Smith, whose company has barred use of DeepSeek, told a US Senate hearing in May that getting your AI model adopted globally was a key factor in determining which country wins the AI race. 'The number one factor that will define whether the US or China wins this race is whose technology is most broadly adopted in the rest of the world,' he said, adding that the lesson from Huawei and 5G was that whoever establishes leadership in a market is 'difficult to supplant'.

It means that, arguments over the feasibility of superintelligent systems aside, vast amounts of money and talent are being poured into this race in the world's two largest economies – and tech firms will keep running. 'If you look back five years ago to 2020, it was almost blasphemous to say AGI was on the horizon. It was crazy to say that. Now it seems increasingly consensus to say we are on that path,' says Rosenberg.

OpenAI will not disclose GPT-5's energy use. It could be higher than past models

The Guardian

In mid-2023, if a user asked OpenAI's ChatGPT for a recipe for artichoke pasta or instructions on how to make a ritual offering to the ancient Canaanite deity Moloch, its response might have taken – very roughly – 2 watt-hours, or about as much electricity as an incandescent bulb consumes in 2 minutes.

OpenAI released a model on Thursday that will underpin the popular chatbot – GPT-5. Ask that version of the AI for an artichoke recipe, and the same amount of pasta-related text could take several times – even 20 times – that amount of energy, experts say.

As it rolled out GPT-5, the company highlighted the model's breakthrough capabilities: its ability to create websites, answer PhD-level science questions, and reason through difficult problems. But experts who have spent the past years working to benchmark the energy and resource usage of AI models say those new powers come at a cost: a response from GPT-5 may take a significantly larger amount of energy than a response from previous versions of ChatGPT.

OpenAI, like most of its competitors, has released no official information on the power usage of its models since GPT-3, which came out in 2020. Sam Altman, its CEO, tossed out some numbers on ChatGPT's resource consumption on his blog this June. However, these figures – 0.34 watt-hours and 0.000085 gallons of water per query – do not refer to a specific model and have no supporting documentation.

'A more complex model like GPT-5 consumes more power both during training and during inference. It's also targeted at long thinking … I can safely say that it's going to consume a lot more power than GPT-4,' said Rakesh Kumar, a professor at the University of Illinois, currently working on the energy consumption of computation and AI models.

The day GPT-5 was released, researchers at the University of Rhode Island's AI lab found that the model can use up to 40 watt-hours of electricity to generate a medium-length response of about 1,000 tokens, which are the building blocks of text for an AI model and are approximately equivalent to words.

A dashboard they put up on Friday indicates GPT-5's average energy consumption for a medium-length response is just over 18 watt-hours, a figure that is higher than all other models they benchmark except for OpenAI's o3 reasoning model, released in April, and R1, made by the Chinese AI firm DeepSeek. This is 'significantly more energy than GPT-4o', the previous model from OpenAI, said Nidhal Jegham, a researcher in the group.

Eighteen watt-hours would correspond to burning that incandescent bulb for 18 minutes. Given recent reports that ChatGPT handles 2.5bn requests a day, the total consumption of GPT-5 could reach the daily electricity demand of 1.5m US homes.

As large as these numbers are, researchers in the field say they align with their broad expectations for GPT-5's energy consumption, given that GPT-5 is believed to be several times larger than OpenAI's previous models. OpenAI has not released the parameter counts – which determine a model's size – for any of its models since GPT-3, which had 175bn parameters. A disclosure this summer from the French AI company Mistral finds a 'strong correlation' between a model's size and its energy consumption, based on Mistral's study of its in-house systems.

'Based on the model size, the amount of resources [used by GPT-5] should be orders of magnitude higher than that for GPT-3,' said Shaolei Ren, a professor at the University of California, Riverside who studies the resource footprint of AI.

GPT-4 was widely believed to be 10 times the size of GPT-3. Jegham, Kumar, Ren and others say that GPT-5 is likely to be significantly larger than GPT-4. Leading AI companies like OpenAI believe that extremely large models may be necessary to achieve AGI, that is, an AI system capable of doing humans' jobs. Altman has argued strongly for this view, writing in February: 'It appears that you can spend arbitrary amounts of money and get continuous and predictable gains,' though he said GPT-5 did not surpass human intelligence.

In its benchmarking study in July, which looked at the power consumption, water usage and carbon emissions of Mistral's Le Chat bot, the startup found a one-to-one relationship between a model's size and its resource consumption, writing: 'A model 10 times bigger will generate impacts one order of magnitude larger than a smaller model for the same amount of generated tokens.'

Jegham, Kumar and Ren said that while GPT-5's scale is significant, there are probably other factors that will come into play in determining its resource consumption. GPT-5 is deployed on more efficient hardware than some previous models. It also appears to use a 'mixture-of-experts' architecture, which means that it is streamlined so that not all of its parameters are activated when responding to a query, a construction that will likely cut its energy consumption.

On the other hand, GPT-5 is also a reasoning model, and works in video and images as well as text, which likely makes its energy footprint far greater than text-only operations, both Ren and Kumar say – especially as the reasoning mode means that the model will compute for a longer time before responding to a query. 'If you use the reasoning mode, the amount of resources you spend for getting the same answer will likely be several times higher, five to 10,' said Ren.

In order to calculate an AI model's resource consumption, the group at the University of Rhode Island multiplied the average time that a model takes to respond to a query – be it for a pasta recipe or an offering to Moloch – by the model's average power draw during its operation.

Estimating a model's power draw was 'a lot of work', said Abdeltawab Hendawi, a professor of data science at the University of Rhode Island. The group struggled to find information on how different models are deployed within data centers. Their final paper contains estimates for which chips are used for a given model, and how different queries are parceled out between different chips in a datacenter.

Altman's June blog post confirmed their findings: the figure he gave for ChatGPT's energy consumption per query, 0.34 watt-hours, closely matches what the group found for GPT-4o.

Hendawi, Jegham and others in their group said that their findings underscored the need for more transparency from AI companies as they release ever-larger models. 'It's more critical than ever to address AI's true environmental cost,' said Marwan Abdelatti, a professor at URI. 'We call on OpenAI and other developers to use this moment to commit to full transparency by publicly disclosing GPT-5's environmental impact.'
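As a back-of-envelope check on the figures above, the short Python sketch below reproduces the arithmetic the article describes: per-query energy is average response time multiplied by average power draw, and the aggregate estimate scales the 18 watt-hour average by the reported 2.5bn daily requests. The 30-second response time, roughly 2,160-watt power draw and 30 kWh-per-day household figure are illustrative assumptions chosen to be consistent with the reported numbers, not measurements from OpenAI or the URI group.

# Back-of-envelope sketch of the energy arithmetic described in the article.
# Inputs marked "assumed" are illustrative, not measured values.

def energy_per_query_wh(avg_response_seconds, avg_power_draw_watts):
    # Energy (Wh) = average response time (hours) x average power draw (watts).
    return (avg_response_seconds / 3600.0) * avg_power_draw_watts

# Assumed example: a 30-second response served by hardware drawing ~2,160 W
# yields the 18 Wh average the URI dashboard reports for a medium-length reply.
per_query_wh = energy_per_query_wh(30, 2160)   # 18.0 Wh

# Scale by the reported 2.5bn requests per day.
daily_wh = 2.5e9 * per_query_wh                # 4.5e10 Wh, i.e. 45 GWh per day

# Compare with an assumed average US household using ~30 kWh per day.
homes_equivalent = (daily_wh / 1000) / 30
print(f"{per_query_wh:.1f} Wh per query, ~{homes_equivalent:,.0f} US homes per day")
# -> 18.0 Wh per query, ~1,500,000 US homes per day, matching the article's estimate.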
