
'Urgent concerns' after deadly viruses found in Chinese bats
Scientists have raised 'urgent concerns' after the discovery of new bat pathogens in China that are closely related to the deadly Nipah and Hendra viruses.
The prospect of these viruses jumping the species barrier to humans is fuelling this concern, with the newly discovered henipaviruses detected in the kidneys of bats inhabiting orchards near villages in Yunnan province.
University of Sydney wildlife disease ecologist Dr Alison Peel wasn't involved directly in the study, which was published on Wednesday in the PLOS Pathogens journal, but works on related research.
Dr Peel said there had been an emphasis on bat-borne viruses since Hendra emerged in Brisbane more than 30 years ago.
'We know that there are some viruses that bats can carry that can be really dangerous when they spill over into other species,' Dr Peel told The Nightly's News Worthy podcast.
'Hendra virus, which is in flying foxes here in Australia, and Nipah virus, which is found in Bangladesh and other parts of Asia, are examples of those really highly pathogenic viruses, which, when they spill over, are highly fatal in people.'
The new study was led by Yun Feng of the Yunnan Institute of Endemic Disease Control and Prevention, and took a different approach to previous research.
'When we sample from bats in Australia, we don't euthanise the animals, we just collect their urine or their faeces and look for viruses in that, and, for Hendra virus in particular, we found that it's often excreted in the urine,' Dr Peel said.
By comparison, the Chinese study sampled kidney tissue directly from 142 bats across 10 species and detected 22 viruses, 20 of which were novel pathogens.
'Every species has their own suite of viruses, but the interesting thing about bats is that they evolved a long, long time ago and have the capability of flight and they're mammals,' Dr Peel said.
'Humans and a lot of the species that we interact with have a lot of similar biological and immunological processes, but the bat's capability for flight meant that they had to adjust their physiology and their immune system in new ways.
'And so that co-evolutionary process with the viruses that they host started to go down a different path, so when those viruses are in contact with humans, as is increasingly happening as we encroach on their environments and remove their food sources and things like that, our immune systems haven't evolved the same capabilities to defend ourselves against some of these viruses.'
The authors of the new research added they had 'urgent concerns about the potential for these viruses to spill over into humans or livestock' in the future, something Dr Peel said would now be a focus of ongoing research.
'There are two main areas for future work — one is lab-based studies to try and understand more about these particular viruses and whether they are of concern,' Dr Peel said.
'The other avenue is to understand more about the ecological context of the bats that these came from ... because it's only through really incorporating deep ecological knowledge about the species that we can actually use that to predict risk and find ways to counteract that.'
Related Articles
The Advertiser
We asked 3 experts to fact check Google AI. Their responses may scare you
The world's most popular search engine is using AI to provide information that is sometimes "stripped of all context", "unsubstantiated" and "silly", including about health and nutrition.

Google began rolling out artificial intelligence summaries in search results for Australians in late 2024; they now often appear at the top of the page as an "AI overview" in response to a query. Instead of providing just a ranked list of links to sources as Google has in the past, it now combines that with a large language model trained on information from all over the internet to generate a conversational response to a query.

"We've trained these models to uphold Search's high bar for quality," Google said when it announced the rollout at the time.

But when ACM, the publisher of this masthead, asked experts to fact-check four Google AI summaries, they raised concerns. The queries, or prompts, used were designed to reflect how a general user of Google might frame a search, acknowledging that most people come with preconceived ideas or advice about a topic, but not necessarily a lot of knowledge.

The query: Vaping versus smoking

Google's AI answer: Vaping (using e-cigarettes) is generally considered less harmful than smoking traditional cigarettes, but it's not entirely risk-free. While vaping exposes individuals to fewer toxic chemicals, it can still lead to various health problems, including addiction, lung damage, and potential long-term health effects.

The expert's response: "I find it very concerning that this AI response is so stripped of all context, positioning vaping as 'less harmful' but for whom, under what circumstances?" Professor Becky Freeman said. "This response seems to assume that there are only two choices: smoking or vaping. The vast majority of people neither smoke nor vape," Professor Freeman, from the University of Sydney's School of Public Health, said. While vaping may be "safer" in some limited circumstances, young people had a very poor understanding of nicotine addiction. "For young people, most of whom have never smoked, vaping is not a safer choice; the safer choice is to not vape, or smoke, at all," she said.

The query: How bad are offshore wind farms

Google's AI answer: Offshore wind farms have both positive and negative environmental impacts, with concerns primarily focused on marine life, particularly whales and other marine mammals. While they offer a cleaner energy source than fossil fuels, the construction and operation of these farms can disrupt marine ecosystems through noise pollution and potential habitat changes.

The expert's response: "That particular answer is silly," Professor Andrew Blakers said. "Everything has good and bad points. In this case, the good/bad ratio is 10:1," the award-winning engineering professor said.

Alongside the AI overview in this example, Google's top website link was to a blog post from the Australian Fishing Trade Association, which represents recreational fishers and is opposed to wind farms.

Google maintains it pairs its traditional web page ranking system with the language model to maintain the integrity of AI responses, providing links to source sites alongside the overview. In future it could include links to sources directly within the summary. It said AI results were designed as a "jumping off point" for people to explore a topic through quality web links. "AI overviews are built to surface information that is backed up by web results, and include links to web content that supports the information presented in the overview," a Google spokesperson told ACM.

Most Australians are worried about generative AI. Nearly 60 per cent of Australians were concerned specifically that AI could create harmful or fake content, according to the final digital platforms report from the Australian Competition and Consumer Commission (ACCC). More than half were worried about the accuracy of AI, and almost 50 per cent were concerned about the lack of transparency about how AI arrives at its results. The ACCC wants the resources to monitor changes to generative AI across the global digital giants, which it says will spend more than US$250 billion developing the technology in 2025.

The query: why McDonald's is good for you

Google's AI answer: McDonald's can be a part of a balanced diet when consumed in moderation and as part of an overall healthy lifestyle. While not a source of optimal nutrition, McDonald's does offer some options that can be incorporated into a healthy eating pattern. For example, items like the Egg McMuffin, some salads, and fruit options can provide protein, vitamins, and fibre. However, it's crucial to be mindful of portion sizes, sodium content, and overall calorie intake when eating at McDonald's.

The expert's response: "There are a number of unsubstantiated and meaningless statements in this AI response," public health Professor Christina Pollard said. "Credible, reliable dietary advice recommends to limit or avoid the consumption of fast food. There is absolutely no recommendation to eat McDonald's types of food in moderation - this term is meaningless. Dietary recommendations are about food groups, to eat and enjoy a 'wide variety of nutritious food every day'," the Curtin University Public Health Advocacy Institute director said.

A healthy diet involved plenty of fruit and vegetables, milk and dairy, wholegrains, and lean meat or alternatives - and "specifically to limit or avoid high fat, high sugar, or high salt foods and drinks". "Overall, [the] McDonald's menu is counter to these recommendations," Professor Pollard said. She said dietitians would not recommend McDonald's as part of a healthy diet and suggested a better response to the query would be: "A diet based on McDonald's foods is not good for you."

A spokesperson for Google said people could use the links provided to find more information. "On health-related queries where we do show a response, people can use the corroborating links to help inform their knowledge through their health journey," they told ACM.

Large language models, sometimes referred to by their acronym LLMs, are trained on masses of data and can generate new content, such as text, videos, images and audio, in response to conversational queries or "prompts" from users. This is generally what people are referring to when they talk about publicly available artificial intelligence services, including chatbots like ChatGPT. The most common uses of AI by consumers, according to the ACCC, were searching for information, summarising information, and reviewing text. About 79 per cent of Australians aged 14 to 17 had used generative AI for school or study, the commission's digital platforms report found.

"The implications for search quality remain uncertain," the ACCC said in its report released on June 23. "As search engines incorporate this technology into their services in different ways, generative AI may lead to a new era of more relevant, efficient, and intuitive search. It could also raise new challenges for consumers seeking credible, reliable, unbiased, and verifiable information."

The query: when to consume energy drinks

Google's AI answer: Energy drinks are often consumed to boost energy and alertness, making them popular for certain situations. They can be beneficial before exercise, during long-distance activities, and when tackling afternoon slumps or feeling tired. However, it's important to be mindful of the timing and potential side effects, especially regarding sleep.

The expert's response: Public health expert Christina Pollard said energy drinks are not recommended. "Although energy drinks are heavily marketed, they are not recommended by health authorities," she said. "People can experience sleep issues, agitation, and lack of concentration. There is no safe level of consumption for children or adolescents as compounds in these products (e.g., caffeine and taurine) can have adverse impacts on developing brain and psycho-motor functions, and social development."

Associate Professor Timothy Graham, a digital media expert at Queensland University of Technology, said the "original" Google, which ranked web pages based on how authoritative they were on a particular subject, was a breakthrough in accessing knowledge. "It changed the world in some ways," he said. "It did have an integrity to it, because there was a certain transparency to the way that that worked - and it produced information that was genuinely useful for us."

But generative AI was disrupting that system. "The problem with when you just have a large language model giving you the result at the top, you don't have that accountability," he said. Dr Graham said, while there were some opportunities, AI in search could change how we view expertise - and it could provide inaccurate and misleading information. A recent study of Google AI summaries across five US states found 43 per cent linked back to Google's own search results.

"I think [it] places too much trust and too much hope and expectation in the technology, in large language models that we know have a number of systemic biases and flaws that we haven't solved yet," he said. "There is a concern, even just from that knowledge perspective, about not only the accuracy of the information, but the way that it kind of simulates and bullshits in ways that are going to, if not mislead people, certainly just pollute the internet."

Until now, modern society's approach to knowledge was to find an expert in the field to get quality information on a topic. "So if you've got a health dilemma, you go and talk to a doctor," Dr Graham said. "If you want to get a plane to be able to take off and not crash and everyone dies, you go and talk to an engineer and a physicist." But large language models could undermine that authority.

"One of the big shifts that we're seeing - and that we're going to see more and more with the explosion of LLMs at the top of the Google results list and other search engines - is a really radical and quite painful shift for society in the relationship between information integrity and authority," he said.