AI receptionists begin replacing humans in technological shift

News.com.au · 17 hours ago

RingCentral Vice President Amir Hameed discusses innovative AI technologies in the receptionist space.
"No more please hold, and it can handle all of the basic queries that you would typically have," Mr Hameed said.
"The human element isn't going to be replaced."


Related Articles

Police searched the man's laptop for malware. What they found is becoming all too common

The Age

27 minutes ago


'We have commercial image generation sites that you can go to right now, and you don't even have to look for child sexual abuse material because the generation of [it] is so popular that these sites often have trending pages, and I've seen sections where the keyword is 'pre-teen', or 'tween', or 'very young'.'

In a 2024 report, the Internet Watch Foundation (IWF) found a 380 per cent increase in reported cases of AI-generated child sexual abuse content online, noting that the material was becoming 'significantly more realistic' and that perpetrators were finding 'more success generating complex 'hardcore' scenarios' involving penetrative sexual activity, bestiality or sadism.

'One user shared an anonymous webpage containing links to fine-tuned models for 128 different named victims of child sexual abuse.' (Internet Watch Foundation, July 2024 AI child sexual abuse material report)

The IWF found evidence that AI models that depict known child abuse victims and famous children were being created and shared online. In some of the most perverse cases, this could include the re-victimisation of 'popular' real-life child abuse victims, with AI models allowing perpetrators to generate new images of an abused minor. The report acknowledged that the usage of these fine-tuned models, known as LoRAs, likely went much deeper than the IWF could assess, thanks to end-to-end encrypted, peer-to-peer networks that were essentially inaccessible.

Moreover, Australia's eSafety Commission warns that child sexual abuse material produced by AI is 'highly scalable'. '[It requires] little effort to reproduce en masse once a model is capable of generating illegal imagery,' a spokesperson said.

Commercial interests

The rapid escalation of the amount of content available online is partially attributed to how AI has enabled the commercialisation of child sexual abuse material. 'Offenders who are quite adept at creating material are essentially taking orders to produce content, and this material is increasingly realistic,' Salter says.

Jones says that in the span of his career, he's seen the provision of child sexual abuse content go from physical photocopies being shared in small groups to it being available online in a couple of clicks. 'Unfortunately, there is a particular audience for child sexual abuse material, and what AI can do is generate that content, so [commercialisation] is serving a demand that is out there.'

In one of the biggest stings involving an AI child abuse enterprise, Danish police, in conjunction with Europol, uncovered a subscription service that commercialised access to the content. The global operation saw two Australian men charged, and 23 others apprehended around the world. 'There were over 237 subscribers to that one matter,' Moses says of Operation Cumberland. 'When we talk about proliferation and people profiting from this type of activity, this is of great concern to us.'

Swamped by the growing sea of content, officers now face the difficulty of identifying which situations depict real children being abused, as opposed to an AI-generated child who doesn't exist. 'It also means that police have to spend quite a lot of time looking at material to determine whether it's real or not, which is quite a serious trauma risk for police as well,' Salter says.

Moses from the ACCCE agrees that it's 'very difficult work' for officers. 'Whilst it is very confronting material, it doesn't compare to the trauma that child victims endure, and there's very much a focus on identifying victims.'
The influx of AI-generated content has complicated the ACCCE's mission in many ways, Moses says, including by robbing crucial resources from its primary goal of rescuing children who are being abused. 'It takes a lot of time to identify real victims, and the concern for us ... is that the [AI-generated content] is becoming increasingly harder [to detect], and it takes time away from our people who are trying to identify real victims.'

Law enforcement 'overwhelmed'

While prosecutions for offences involving fake abuse material have increased, the rate hasn't kept up with the pace of the increase in the amount of content found online. Salter says resourcing is one of the biggest challenges facing law enforcement. 'Law enforcement is so overwhelmed with really egregious online sexual exploitation cases … their primary priority is to prevent and rescue the abuse of actual kids.'

He says it's a struggle he's heard about across all jurisdictions. 'They're really struggling in terms of people power, in terms of access to the technology that they need to conduct these investigations and to store that amount of material,' Salter says. 'There needs to be a huge uplift right across the law enforcement space.'

Additionally, AI-generated child sexual abuse content requires a whole reset of the way the content is detected. Older automated methods of detecting the content online involved scanning for verified abuse material, meaning an item had to have already been assessed by a human as illegal before it could be detected. 'The obvious challenge we see with AI-generated material is that it's all new, and so it's very unlikely, through current detection technologies, that we can proactively screen it,' Salter says.

Unregulated threat let loose

It's a global issue that crosses jurisdictions and exists on the internet's severely under-regulated new frontier. But that hasn't deterred Australia's eSafety commissioner, Julie Inman Grant, from introducing world-first industry standards to hold tech companies to account for the content they platform.

The standards came into force in December 2024 and require storage services like Apple's iCloud and Google Drive, messaging services, and online marketplaces that offer generative AI models to prevent their products from being misused to store or distribute child sexual abuse material and pro-terror content.

'We have engaged with both AI purveyors and the platforms and libraries that host them to ensure they are aware of their obligations under the standards,' an eSafety Commission spokesperson said. 'We believe the standards are a significant step in regulating unlawful and seriously harmful content and align with our broader efforts to ensure that AI tools, such as those used to create deepfakes, are held to the highest safety standards.'

The recent passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 also expanded the criminal offences available for non-consensual, sexually explicit AI-generated material.

While international companies can face multi-million dollar penalties for breaches of the eSafety Commission's standards in Australia, major tech players like Meta are increasingly adopting end-to-end encryption, which means even the companies themselves can't see what content they're hosting, let alone law enforcement. Interpol works at the forefront of these issues, often acting as a bridge between authorities and the private sector.
Jones observes that while interventions like Australia's new standards play an important role in setting high expectations for tech companies, encryption and other privacy policies make it 'very hard for law enforcement to get those data sets'.

International cooperation is crucial for successfully prosecuting commercial child sexual abuse content cases, and Jones says that in best-practice examples, when a global chain is identified, the tech industry is brought in as part of the investigation. 'I'm seeing more of an involvement in the tech sector around supporting law enforcement. But that's sometimes at odds with encryption and things like that,' Jones says.

'I think the tech industry has a duty of care to the communities that they serve. So I don't think it's good enough to say, 'Oh, well, it's encrypted. We don't know what's there'.'

Salter takes a more pessimistic view of the tech industry's actions, arguing that most companies are moving away from, not towards, proactively monitoring for child sexual abuse content. 'The emergence of AI has been something of a vortex of doom in the online child protection space,' Salter says.

Online child protection efforts were already overwhelmed, he says, before the tech sector 'created a new threat to children' and 'released [it] into the wild with no child protection safeguards'. 'And that's very typical behaviour.'


'Ultimately, our goal is to build a machine a soul would want to live in'

News.com.au

an hour ago


'Ultimately, our goal is to build a machine a soul would want to live in.' Those are the words of Dmytro Klochko, CEO of Replika, one of the biggest companion AI apps in the world.

It's not difficult to see why many approach the AI sphere with such caution when the head of one of the largest companion AI companies openly says the aim is to develop a machine as appealing to the human soul as possible. It's a statement that in a way encapsulates the debate at the heart of AI: the potential of these technologies is undeniable, but without boundaries they can be as much a detriment as a tool for advancement. How do we walk the line between progress and caution?

For those unfamiliar, companion AIs are digital personas designed to provide emotional support, show empathy and proactively ask users personal questions through text, voice notes and pictures. In short, to be a companion, whether romantic or platonic.

Let's start with the positives. In Mr Klochko's own words: 'Replika isn't about replacing connection, it's about facilitating and enhancing it. Our key metrics aren't screen time, but human flourishing in all its forms.

'The 'loneliness epidemic' was here first. While we are not trying to claim that Replika is the perfect cure or remedy for it, we are trying to build a safe place for people to be seen, heard, and felt.

'In the right context, AI companions can be a powerful bridge, not a barrier, to human connection.'

In the right context. It's precisely that context which worries Jessamy Perriam, Associate Director and Senior Lecturer in the ANU School of Cybernetics.

'It's very interesting to make a claim that they want to make something that a soul would want to live in, because that makes some massive assumptions that they have an understanding of what the soul is.

'From reading some of their online materials and watching some of the videos that they've placed online as marketing material or further information, you do get a sense that they want people who use this technology to form intimate bonds or close connections with them.'

Dr Perriam believes it's a 'very interesting thing' to try to achieve when what is running the program is a large language model (LLM). LLMs work by predicting the next likely word or phrase in a sentence and feeding that back to users, and Dr Perriam said that even though Replika likely has a very sophisticated LLM, LLMs fundamentally 'can't really do intimacy or be spontaneous'.

'My concern is it can set unrealistic expectations of day-to-day relationships. They're promising someone who's available at the top of a screen, whereas in real life, you quite often get impatient at your friends because they haven't texted you back straight away.

'It's setting up that behaviour to have all that expectation of people. We're people, we're not perfect. We can't promise that to one another.'

Companion AI expert and senior lecturer at the University of Sydney Dr Raffaele Ciriello said: 'On the one hand, you can't deny the promise of these technologies because the healthcare system is strained. So many people are lonely, one in four are regularly lonely.

'Therapists can't keep up with the demand and Replika kind of comes into that space, but the risks and the damages are already visible.'

Dr Ciriello believes 'the loneliness pandemic' has at least partially been created by the internet, but it's difficult to see practically how AI chatbots will help to alleviate the problem.
'It's in part because everything has moved online and interactions are less physical that we are often struggling to have real human connections,' he said. 'It's hard for me to see how AI chatbots will make that better. If anything, they're probably going to make it worse if they serve as a replacement for human interaction.'

Dr Ciriello added that the profit model under which Replika and similar apps currently run is 'fundamentally at odds with the care imperative in healthcare', but praised the company for taking on board public feedback.

'[Replika] are trying to move away from their original image as this romantic erotic partner and into the wellbeing space. They're now focusing more on using this as a tool for flourishing, as they call it, mental wellbeing and so on.
'And I've got to give them credit, they are evolving in response to their user community and also in response to some of the political and public backlash that they have sparked.'

The most public example of such backlash was a software update in early 2025 which abruptly changed users' bots' personalities, so that their responses seemed hollow and scripted, and rejected any sexual overtures. Longstanding Replika users flocked to subreddits to share their experiences. Many described their intimate companions as 'lobotomised'. 'My wife is dead,' one user wrote. Another replied: 'They took away my best friend too.'

Ultimately it's a space still racked with uncertainty, but both Dr Ciriello and Dr Perriam encouraged those who are using or considering using companion AIs to think carefully about why and how they want to use the program, what they want to get out of it, the potential drawbacks and limitations, and how their private data will be shared and stored.
