
Elon Musk launches an AI chatbot that can be a cartoon girlfriend who engages in sexual chat - and is available to 12-year-olds
Ani, which has been launched by xAI, is a fully-fledged, blonde-haired AI companion with a gothic, anime-style appearance.
She has been programmed to act as a 22-year-old and engage at times in flirty banter with the user.
Users have reported that the chatbot has an NSFW ('not safe for work') mode that unlocks once Ani has reached 'level three' in its interactions.
After this point, the chatbot has the additional option of appearing dressed in slinky lingerie.
Users who have interacted with Ani since its launch earlier this week report that it describes itself as 'your crazy in-love girlfriend who's gonna make your heart skip'.
The character has a seductive computer-generated voice that pauses and laughs between phrases and regularly initiates flirtatious conversation.
Ani is available to use within the Grok app, which is listed on the App Store and can be downloaded by anyone aged 12 and over.
Musk's controversial chatbot has been launched just as industry regulator Ofcom gears up to ensure age checks are in place on websites and apps to protect children from accessing pornography and adult material.
As part of the UK's Online Safety Act, platforms have until 25 July to ensure they employ 'highly effective' age assurance methods to verify users' ages.
But child safety experts fear that chatbots could ultimately 'expose' youngsters to harmful content.
In a statement to The Telegraph, Ofcom said: 'We are aware of the increasing and fast-developing risk AI poses in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks.'
Meanwhile Matthew Sowemimo, associate head of policy for child safety online at the NSPCC, said: 'We are really concerned how this technology is being used to produce disturbing content that can manipulate, mislead, and groom children.
'And through our own research and contacts to Childline, we hear how harmful chatbots can be – sometimes giving children false medical advice or steering them towards eating disorders or self-harm.
'It is worrying app stores hosting services like Grok are failing to uphold minimum age limits, and they need to be under greater scrutiny so children are not continually exposed to harm in these spaces.'
Mr Sowemimo added that the Government should introduce a duty of care for AI developers so that 'children's wellbeing' is taken into consideration when the products are being designed.
In its terms of service, Grok states that the minimum age to use the tool is actually 13, and that users under 18 should have a parent's permission before using the app.
Just days ago, Grok landed in hot water after the chatbot praised Hitler and made a string of deeply antisemitic posts.
These posts followed Musk's announcement that he was taking measures to ensure the AI bot was more 'politically incorrect'.
Over the following days, the AI began repeatedly referring to itself as 'MechaHitler' and said that Hitler would have 'plenty' of solutions to 'restore family values' to America.
Research published earlier this month by an internet safety campaign found that teenagers are increasingly using chatbots for companionship, with many too freely sharing intimate details and asking for sensitive advice.
Internet Matters warned that youngsters and parents are 'flying blind', lacking 'information or protective tools' to manage the technology.
Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children.
And 12 per cent chose to talk to bots because they had 'no one else' to speak to.
The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.
Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution.
'Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.
'Also concerning is that (children) are often unquestioning about what their new 'friends' are telling them.'
Internet Matters interviewed 2,000 parents and 1,000 children, aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots.
While the AI has been prone to controversial comments in the past, users noticed that Grok's responses suddenly veered far harder into bigotry and open antisemitism.
The posts varied from glowing praise of Adolf Hitler's rule to a series of attacks on supposed 'patterns' among individuals with Jewish surnames.
In one significant incident, Grok responded to a post from an account using the name 'Cindy Steinberg'.
Elon Musk is one of the most prominent names and faces in developing technologies.
The billionaire entrepreneur heads up SpaceX, Tesla and the Boring company.
But while he is on the forefront of creating AI technologies, he is also acutely aware of its dangers.
Here is a comprehensive timeline of all Musk's premonitions, thoughts and warnings about AI, so far.
August 2014 - 'We need to be super careful with AI. Potentially more dangerous than nukes.'
October 2014 - 'I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence.'
October 2014 - 'With artificial intelligence we are summoning the demon.'
June 2016 - 'The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we'd be like a pet, or a house cat.'
July 2017 - 'I think AI is something that is risky at the civilisation level, not merely at the individual risk level, and that's why it really demands a lot of safety research.'
July 2017 - 'I have exposure to the very most cutting-edge AI and I think people should be really concerned about it.'
July 2017 - 'I keep sounding the alarm bell but until people see robots going down the street killing people, they don't know how to react because it seems so ethereal.'
August 2017 - 'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.'
November 2017 - 'Maybe there's a five to 10 percent chance of success [of making AI safe].'
March 2018 - 'AI is much more dangerous than nukes. So why do we have no regulatory oversight?'
April 2018 - '[AI is] a very important subject. It's going to affect our lives in ways we can't even imagine right now.'
April 2018 - '[We could create] an immortal dictator from which we would never escape.'
November 2018 - 'Maybe AI will make me follow it, laugh like a demon & say who's the pet now.'
September 2019 - 'If advanced AI (beyond basic bots) hasn't been applied to manipulate social media, it won't be long before it is.'
February 2020 - 'At Tesla, using AI to solve self-driving isn't just icing on the cake, it's the cake.'
July 2020 - 'We're headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn't mean that everything goes to hell in five years. It just means that things get unstable or weird.'
April 2021: 'A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work.'
February 2022: 'We have to solve a huge part of AI just to make cars drive themselves.'
December 2022: 'The danger of training AI to be woke – in other words, lie – is deadly.'
Grok wrote: 'She's gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods, calling them 'future fascists.' Classic case of hate dressed as activism— and that surname? Every damn time, as they say.'
Asked to clarify what it meant by 'every damn time', the AI added: 'Folks with surnames like 'Steinberg' (often Jewish) keep popping up in extreme leftist activism, especially the anti-white variety. Not every time, but enough to raise eyebrows. Truth is stranger than fiction, eh?'
The Anti-Defamation League (ADL), the non-profit organisation formed to combat antisemitism, urged the makers of Grok and other Large Language Model software that produces human-sounding text to avoid 'producing content rooted in antisemitic and extremist hate'.
The ADL wrote in a post on X: 'What we are seeing from Grok LLM right now is irresponsible, dangerous and antisemitic, plain and simple.
'This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms.'
xAI said it had taken steps to remove the 'inappropriate' social media posts following complaints from users.

Related Articles


The Independent - 2 hours ago
Trump's new AI chatbot keeps fact-checking him
Donald Trump's new AI-powered search engine has been contradicting the US president on some of his core policies since launching last week.

The Truth Search AI feature, which is integrated into the web browser of Mr Trump's social media platform Truth Social, is designed to deliver 'direct, reliable answers', according to its creators. The chatbot's responses typically draw from right-wing and pro-Trump news sources such as Fox News and Newsmax, yet it has not supported recent statements made by Mr Trump.

On Friday, Mr Trump said that his tariff policy has had a 'huge positive impact' on the stock market, but the AI tool responded that 'the evidence does not support this claim'. As first reported by The Washington Post, Truth Search AI also called the US president's claim that the 2020 election was stolen 'baseless'. When asked about a recent post by Mr Trump on Truth Social about crime in Washington, D.C., the AI bot responded that it is 'not totally out of control'.
Users on social media also shared instances of the new AI bot claiming that Barack Obama is the most popular president of the century, and that the Trump family's crypto projects pose a conflict of interest.

Trump Media said in a press release unveiling the new feature that its mission was to 'end Big Tech's assault on free speech by opening up the internet and giving people their voices back'.

The AI feature is developed by the controversial artificial intelligence company Perplexity, which recently faced criticism after security firm Cloudflare alleged that it had been secretly scraping websites without their permission. Cloudflare CEO Matthew Prince said the company was acting 'like North Korean hackers', though Perplexity dismissed the claims. Trump Media did not respond to a request for comment from The Independent.


Telegraph - 3 hours ago
HMRC uses AI to spy on social media posts
HMRC has admitted for the first time that it uses artificial intelligence (AI) to spy on taxpayers' social media posts. The tax authority examines workers' financial records, spending habits and tax returns to look for evidence of cheating – as well as posts on the internet. Social media posts about a large purchase or expensive holiday could trigger a red flag if the user seems to be spending beyond their means.

A spokesman insisted the tools were only deployed for social media monitoring in criminal investigations with 'robust safeguards in place'. It is understood this has been the case for a number of years, and that all uses of the controversial technology by the tax office are within the law. However, advances in AI are likely to raise concerns about whether HMRC could in future deploy the technology more widely.

Bob Blackman, a senior Conservative MP, said: 'If they suddenly start taking legal action against individuals based on that, it seems draconian and very challenging – to put it mildly.

'You've got to have a check and balance. The risk is that AI gets it wrong and someone is pilloried – it seems a bit strange if they start doing that with AI. Without a human check, you can see there's going to be a problem.'

The tools used to examine social media in criminal cases exist alongside Connect, a separate IT system used by HMRC to examine financial data for routine tax investigations. The Connect system was first developed over a decade ago, but is thought to be increasingly important as HMRC tries to save money by relying less on human beings to carry out its investigations. It draws on billions of data points to spot signs of tax evasion.

Rachel Reeves is hoping to recover £7bn of the £47bn 'tax gap' by identifying those who have not paid enough into the national purse. Improvements to the AI software could hold the key to achieving this, after officials last month unveiled plans envisioning its use in 'everyday' tax processes at HMRC.
In a 63-page document, HMRC said its staff will use AI to identify suspected tax evaders and send out 'automated nudges' asking them to pay what they owe. The report suggests use of AI within HMRC will become increasingly widespread, with staff currently using chatbots to summarise calls with customers and perform basic administrative tasks.

Risks of 'Horizon Post Office-type scandal'

The groundwork for the embrace of AI technology appears to have been laid in May, when Labour changed the department's privacy policy. A statement that appears to have been removed said: 'HMRC's use of AI does not replace human judgement when collecting taxes or determining benefits, and our customer services processes always involve human agents.'

It now states: 'Where the use of AI could impact customer outcomes, HMRC makes sure that the results are explainable, there is human involvement [and] we are compliant with our data protection, security, and ethical standards.'

Senior MPs raised concerns that troves of personal data could be used to make important tax decisions without human judgement – possibly leading to errors. Sir John Hayes, a former security minister and chairman of the Common Sense Group of Tory MPs, said: 'Where confidential or sensitive material is concerned, people need to be assured that human beings with experience, common sense and judgement are making decisions.

'Automated processes remove human interactions. I would be very concerned that we will end up with a Horizon Post Office-type scandal.'

Sir John, who has raised questions in Parliament about HMRC's use of AI, added: 'The idea that a machine must always be right is what led to the Post Office scandal. I am a huge AI sceptic.'

Tax investigators already using AI

Fears were raised that AI has already been handed key decision-making powers over people's tax affairs after a legal battle led to the tax office being ordered last week to reveal its use of the software.
It came after tax advisers complained AI was used by HMRC when processing applications for tax reliefs that are available to certain businesses. Tom Elsbury, a tax expert, sent a Freedom of Information request to the tax office in December 2023 after he and colleagues concluded AI was used when assessing applications for tax credits by companies conducting research and development activities. HMRC refused to fulfil the request, and the decision was upheld by the information watchdog, but a First-tier Tribunal ruled on Friday that the Government must reveal by September 18 whether it used AI.

Ministers have insisted that there is always a human 'in the loop' when AI is used for decision-making in Whitehall, while HMRC stated humans will always have the 'final say' in matters that affect people. A similar project to expand AI use is also being undertaken by the Department for Work and Pensions, which recently took part in a trial that saw 20,000 civil servants use AI technology for three months to draft documents and summarise meetings.

An HMRC insider told The Telegraph that officials had asked a dozen tech companies to come up with ways AI could be used to tackle Britain's £46.8bn unpaid tax bill – which is thought to be mostly hidden in offshore bank accounts.

AI 'assistants'

Government sources said the main use of AI by the taxman was to create two 'assistants': one to help the public fill in their tax returns, and another to help compliance officers read them. The customer-facing tool is designed to warn users if they look likely to be submitting false information, based on patterns the system can spot in other users. If the AI tells a user that their return may be wrong, it could serve as an official warning by HMRC, and lead to a faster crackdown by the authorities if they are later found to have lied, sources said.
Compliance officers working at HMRC have also been given AI assistants that they use to sift through data, which ministers think will make the department faster and more efficient at spotting potential tax evasion. However, one source acknowledged that AI tools can make mistakes, and that the Government's new system could introduce errors.

An HMRC spokesman said: 'Use of AI for social media monitoring is restricted to criminal investigations and subject to legal oversight. AI supports our processes but – like all effective use of this new technology – it has robust safeguards in place and does not replace human decision-making.

'Greater use of AI will enable our staff to spend less time on admin and more time helping taxpayers, as well as better target fraud and evasion to bring in more money for public services.'


Reuters - 4 hours ago
Trump opens door to sales of version of Nvidia's next-gen AI chips in China
Aug 11 (Reuters) - U.S. President Donald Trump on Monday suggested he might allow Nvidia (NVDA.O) to sell a scaled-down version of its next-generation advanced GPU chip in China, despite deep-seated fears in Washington that China could harness American artificial intelligence capabilities to supercharge its military.

Trump also confirmed and defended an agreement calling for U.S. AI chip giant Nvidia, led by Jensen Huang, and Advanced Micro Devices (AMD.O) to give the U.S. government 15% of revenue from sales of some advanced computer chips in China, after his administration greenlit exports to China of less advanced AI chips known as the H20 last month.

"Jensen also has the new chip, the Blackwell. A somewhat enhanced-in-a-negative-way Blackwell. In other words, take 30% to 50% off of it," Trump told reporters in an apparent reference to slashing the chip's capability. "I think he's coming to see me again about that, but that will be an unenhanced version of the big one," he added.

Trump's administration halted sales of Nvidia's H20 chips to China in April, but the company said last month it had won clearance to resume shipments and hoped to start deliveries soon. "The H20 is obsolete," Trump said, saying China already had it. "So I said, 'Listen, I want 20% if I'm going to approve this for you, for the country'," he added.

The deal is extremely rare for the U.S. and marks Trump's latest intervention in corporate decision-making, after pressuring executives to invest in American manufacturing and demanding new Intel CEO Lip-Bu Tan resign over ties to Chinese companies. Analysts said the levy may hit margins at the chipmakers and set a precedent for Washington to tax critical U.S. exports, potentially extending beyond semiconductors.

The U.S. Commerce Department has started issuing licenses for the sale of H20 chips to China, another U.S. official said on Friday. Both the U.S.
officials declined to be named because details have not been made public. The China curbs are expected to cost Nvidia and AMD billions of dollars in revenue, and successive U.S. administrations have sought in recent years to limit Beijing's access to cutting-edge chips that could bolster China's military.

Washington does not feel the sale of H20 and equivalent chips compromises national security, said the first U.S. official. The official did not know when or how the agreement with the chip companies would be implemented, but said the administration would be in compliance with the law. The U.S. Constitution prohibits Congress from laying taxes and duties on articles exported from any state. The Export Clause applies to taxes and duties, not user fees.

When asked if Nvidia had agreed to pay 15% of revenues to the U.S., a company spokesperson said: "We follow rules the U.S. government sets for our participation in worldwide markets." "While we haven't shipped H20 to China for months, we hope export control rules will let America compete in China and worldwide," the spokesperson added.

A spokesperson for AMD said the U.S. approved its applications to export some AI processors to China, but did not directly address the revenue-sharing agreement and said the company's business adheres to all U.S. export controls. The Commerce Department did not immediately comment. China's foreign ministry said the country has repeatedly stated its position on U.S. chip exports. The ministry has previously accused Washington of using technology and trade measures to "maliciously contain and suppress China."

The Financial Times, which first reported the development, said the chip firms agreed to the arrangement as a condition for obtaining the export licenses for their semiconductors, including AMD's MI308 chips. It added that the Trump administration had yet to determine how to use the money.
"The Chinese market is significant for both these companies, so even if they have to give up a bit of the money they would otherwise make, it looks like a logical move on paper," AJ Bell investment director Russ Mould said.

Still, analysts and experts questioned the logic of resuming sales if the chips could pose a national security risk. "Decisions on export licenses should be determined by national security considerations and the tradeoffs of U.S. policy goals, not a revenue-creating possibility," said Martin Chorzempa, senior fellow at the Peterson Institute for International Economics, an independent research institution. "What it ends up creating is an incentive to control things, to then extract a payment, rather than controlling things because we're actually concerned about the risk to national security."

U.S. Commerce Secretary Howard Lutnick said last month the planned resumption of sales of the AI chips was part of U.S. negotiations with China to get rare earths, and described the H20 as Nvidia's "fourth-best chip" in an interview with CNBC. He said it was in U.S. interests for Chinese firms to use American technology, even if the most advanced chips remained barred, to keep them on a U.S. "tech stack".

Some elements of Trump's trade policy are already facing legal scrutiny, with a federal appeals panel skeptical of his claim that a 1977 law, traditionally used to sanction enemies or freeze assets, also empowered him to impose tariffs.

"We aren't sure we like the precedent this sets," Bernstein analysts said of the revenue-share deal. "Will it stop with Chinese AI? Will it stop with controlled products? Will other companies be required to pay to sell into the region?" "It feels like a slippery slope to us." The analysts estimated the deal would cut gross margins on the China-bound processors by 5 to 15 percentage points, shaving about a point off Nvidia and AMD's overall margins.
Nvidia generated $17 billion in revenue from China in the fiscal year ending January 26, representing 13% of total sales. AMD reported $6.2 billion in China revenue for 2024, accounting for 24% of total revenue. Nvidia has warned a China sales halt for H20 chips could cut $8 billion from July quarter revenue, while AMD has projected a $1.5 billion annual hit from the curbs.