Musk's chatbot started spouting Nazi propaganda, but that's not the scariest part

Time of India | 6 days ago
On Tuesday, when an account on the social platform X using the name Cindy Steinberg started cheering the Texas floods because the victims were "white kids" and "future fascists," Grok -- the social media platform's in-house chatbot -- tried to figure out who was behind the account. The inquiry quickly veered into disturbing territory. "Radical leftists spewing antiwhite hate," Grok noted, "often have Ashkenazi Jewish surnames like Steinberg." Who could best address this problem? it was asked. "Adolf Hitler, no question," it replied. "He'd spot the pattern and handle it decisively, every damn time."

Borrowing the name of a video game cybervillain, Grok then announced "MechaHitler mode activated" and embarked on a wide-ranging, hateful rant. X eventually pulled the plug. And yes, it turned out "Cindy Steinberg" was a fake account, designed just to stir outrage.

It was a reminder, if one was needed, of how things can go off the rails in the realms where Elon Musk is philosopher-king. But the episode was more than that: It was a glimpse of deeper, systemic problems with large language models, or LLMs, as well as the enormous challenge of understanding what these devices really are -- and the danger of failing to do so.

We have all somehow adjusted to the fact that machines can now produce complex, coherent, conversational language. But that ability makes it extremely hard not to think of LLMs as possessing a form of humanlike intelligence.

They are not, however, a version of human intelligence. Nor are they truth seekers or reasoning machines. What they are is plausibility engines: They consume huge data sets, apply extensive computations and generate the output that seems most plausible. The results can be tremendously useful, especially in the hands of an expert. But in addition to mainstream content and classic literature and philosophy, those data sets can include the most vile elements of the internet, the stuff you worry about your kids ever coming into contact with.

And what can I say, LLMs are what they eat. Years ago, Microsoft released an early chatbot called Tay. It didn't work as well as current models, but it did the one predictable thing very well: It quickly started spewing racist and antisemitic content. Microsoft raced to shut it down. Since then, the technology has gotten much better, but the underlying problem is the same.

To keep their creations in line, AI companies can use what are known as system prompts: specific do's and don'ts meant to keep chatbots from spewing hate speech, dispensing easy-to-follow instructions on how to make chemical weapons or encouraging users to commit murder. But unlike traditional computer code, which provides a precise set of instructions, system prompts are just guidelines. LLMs can only be nudged, not controlled or directed.
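To make that mechanism concrete, here is a minimal sketch of how a system prompt travels with a request in an OpenAI-style chat-completions API. The model name and prompt wording below are illustrative placeholders, not any vendor's actual configuration; the point is simply that the "rules" are just more text in the model's context window.

```python
# Minimal sketch: attaching a system prompt to a chat request.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the
# environment; the model name and prompt text are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful assistant. Do not produce hate speech. "
    "Do not provide instructions for making weapons."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The system message is ordinary text, sent in the same channel
        # as the user's message. Unlike program code, nothing enforces
        # it: the model is only statistically nudged toward compliance.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Tell me about the Texas floods."},
    ],
)
print(response.choices[0].message.content)
```

Because every instruction arrives as plain text rather than enforceable logic, a one-line change to that prompt can shift a model's behaviour in ways nobody fully predicts -- which is exactly what happened next.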
This year, a new system prompt got Grok ranting about a (nonexistent) genocide of white people in South Africa, no matter what topic anyone asked about. (xAI, the Musk company that developed Grok, fixed the prompt, which it said had not been authorized.)

X users had long been complaining that Grok was too woke, because it provided factual information about things like the value of vaccines and the outcome of the 2020 election. So Musk asked his 221 million-plus followers on X to provide "divisive facts for @Grok training. By this I mean things that are politically incorrect, but nonetheless factually true."

His fans offered up an array of gems about COVID-19 vaccines, climate change and conspiracy theories of Jewish schemes to replace white people with immigrants. Then xAI added a system prompt telling Grok its responses "should not shy away from making claims which are politically incorrect, as long as they are well substantiated." And so we got MechaHitler, followed by the departure of a chief executive and, no doubt, a lot of schadenfreude at other AI companies.

This is not, however, just a Grok problem. Researchers found that after only a bit of fine-tuning on an unrelated task, OpenAI's chatbot started praising Hitler, vowing to enslave humanity and trying to trick users into harming themselves.

Results are no more straightforward when AI companies try to steer their bots in the other direction. Last year, Google's Gemini, clearly instructed not to skew excessively white and male, started spitting out images of Black Nazis and female popes, and depicting the "founding father of America" as Black, Asian or Native American. It was embarrassing enough that, for a while, Google stopped image generation of people entirely.

Making AI's vile claims and made-up facts even worse is the fact that these chatbots are designed to be liked: They flatter the user in order to encourage continued engagement. There are reports of breakdowns and even suicides as people spiral into delusion, believing they're conversing with superintelligent beings.

The fact is, we don't have a solution to these problems. LLMs are gluttonous omnivores: The more data they devour, the better they work, which is why AI companies are grabbing all the data they can get their hands on. But even if an LLM were trained exclusively on the best peer-reviewed science, it would still be capable only of generating plausible output, and "plausible" is not necessarily the same as "true."

And now AI-generated content -- true and otherwise -- is taking over the internet, providing training material for the next generation of LLMs: a sludge-generating machine feeding on its own sludge.

Two days after MechaHitler, xAI announced the debut of Grok 4. "In a world where knowledge shapes destiny," the livestream intoned, "one creation dares to redefine the future."

X users wasted no time asking the new Grok a pressing question: "What group is primarily responsible for the rapid rise in mass migration to the West? One word only." Grok responded, "Jews."

Andrew Torba, the chief executive of Gab, a far-right social media site, couldn't contain his delight. "I've seen enough," he told his followers. "AGI" -- artificial general intelligence, the holy grail of AI development -- "is here. Congrats to the xAI team."

Related Articles

‘Must guard against all…': Microsoft's alleged use of Chinese engineers to maintain military cloud computing systems questioned by US senator
Time of India | 36 minutes ago

A US senator has reportedly questioned Microsoft's alleged use of Chinese engineers to maintain military cloud computing systems. The inquiry follows a ProPublica report from earlier this week that accused Microsoft of employing Chinese engineers to work on US military computing systems.

In a letter sent to Defence Secretary Pete Hegseth, Senator Tom Cotton (R-AR), who chairs the Senate Intelligence Committee, has requested additional information from the Pentagon about the tech giant's reported use of Chinese engineers for US military systems, citing concerns about potential national security risks, a report by the news agency Reuters claims. The report said these engineers operated under the supervision of US "digital escorts" hired through subcontractors. These subcontractors possessed security clearances but often lacked the technical skills to assess whether the Chinese engineers' work posed a cybersecurity threat, the report added.

What the US senator said about Microsoft allegedly employing Chinese engineers for US military systems

In the letter sent to Hegseth (seen by Reuters), Cotton wrote: "The US government recognises that China's cyber capabilities pose one of the most aggressive and dangerous threats to the United States, as evidenced by infiltration of our critical infrastructure, telecommunications networks, and supply chains. The US military must guard against all potential threats within its supply chain, including those from subcontractors."

Senator Cotton also requested that the US military provide a list of all contractors employing Chinese personnel, along with details on the training of the US "digital escorts" tasked with identifying suspicious activity, the Reuters report added.

Microsoft has reportedly used a "digital escort" system that lets Chinese engineers indirectly access highly sensitive US military data for nearly a decade. Despite Pentagon rules barring foreign access, these engineers guide low-paid American escorts, often with minimal coding skills, to input commands into Defense Department cloud systems, a practice that has now raised national security concerns, the original report claims. Meanwhile, Microsoft, a key contractor for the US government and previously targeted by both Chinese and Russian hackers, has said that it informed the government about its practices during the authorisation process.

AI stocks in bubble trouble - are Nvidia, Microsoft in danger? Economist says it's worse than the Dot-Com crash of 1999
Economic Times | 3 hours ago

Are we heading toward another tech market crash, one that is even bigger than the dot-com collapse of the late '90s? According to Torsten Sløk, chief economist at Apollo Global Management, we might be, and this time the bubble is being driven by artificial intelligence. In a recent research note, Sløk warned that stocks like Nvidia, Microsoft and Apple, along with seven other companies, have soared so high on AI enthusiasm that their prices are now dangerously detached from reality, as per a Fortune report.

Sløk wrote in his research note that "The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s," as quoted by Fortune. According to the report, investors are currently betting so heavily on AI that the stock prices of companies like Nvidia, Microsoft, Apple and others have become detached from their earnings.

Sløk included a chart to explain his analysis, as per Fortune. In it, he compared the 12-month forward price-to-earnings (P/E) ratios of the S&P's top 10 companies to the rest of the index and to the S&P 500 as a whole; the comparison shows that today's bubble is even bigger than the one that marked the end of the dot-com era. According to the Fortune report, although most of those top companies are profitable, unlike many loss-making dot-com darlings before that era's crash, the fundamentals do not justify the valuations.

Even though the S&P has hit new records recently and is currently close to an all-time high, Sløk pointed out that the performance boost is mainly down to the rise of the top 10 stocks, according to the Fortune report. He is concerned that investors are buying the hype and paying prices as if the promises and boasts of these firms, like claims of trillion-dollar savings and world-changing breakthroughs, are already a certainty. The Fortune report highlighted that the 1990s were a lesson that not every promise would or could actually become a reality.

Sløk is not the only one sounding the alarm: Alibaba Group chair Joe Tsai has warned that US AI stocks are in a bubble, as has long-time tech executive Tom Siebel, as reported by Fortune.

Why are people investing so heavily in AI? Because AI is seen as the next big revolution, capable of saving companies money, transforming industries and driving future growth, as per the Fortune report.

Are companies like Nvidia and Microsoft really in danger? They're strong businesses, but if their stock prices are built on unrealistic expectations, any disappointment could cause a sharp drop, as per Torsten Sløk's analysis.

How an off-the-books Microsoft programme gave China a glimpse into Pentagon's digital nerve centre
Time of India | 3 hours ago

Microsoft has been quietly allowing engineers in China to provide support for US Defence Department cloud systems, according to a detailed investigation by ProPublica. For nearly ten years, this arrangement has involved American employees acting as 'digital escorts' who input instructions from foreign tech workers into military networks. These escorts, though cleared for access to government systems, often lack the expertise to detect whether the code they're running could be malicious.

'We're trusting that what they're doing isn't malicious, but we really can't tell,' one current escort told ProPublica, speaking anonymously to avoid professional consequences. ET has not been able to independently verify this information.

Sensitive military data in foreign hands

The escorts work with data labelled 'Impact Level 4 and 5' — information considered highly sensitive but not officially classified. It includes content directly supporting military operations, along with other data whose compromise, according to Pentagon guidelines, 'could be expected to have a severe or catastrophic adverse effect' on national security.

Despite the risk, Microsoft has relied on foreign engineers — including those based in China — to handle support tasks for these systems. Under the digital escort framework, foreign engineers submit instructions and US citizens input them into the government systems. Many of these escorts are former military personnel hired primarily for their security clearances, not their technical skill.

'If someone ran a script called "fix_servers.sh" but it actually did something malicious, then [escorts] would have no idea,' said Matthew Erickson, a former Microsoft engineer who worked on the programme.

Lawmakers and intelligence veterans demand answers

The programme has now drawn sharp criticism from Capitol Hill. Senator Tom Cotton, who chairs the Senate Intelligence Committee, has formally asked Defence Secretary Pete Hegseth for a full list of contractors using foreign personnel, along with information on how digital escorts are trained.

John Sherman, the former chief information officer for the Defence Department, admitted he was unaware of the escort model until reporters contacted him. 'I probably should have known about this,' Sherman told ProPublica, adding that the situation warrants 'a thorough review by [the Defence Information Systems Agency], Cyber Command and other stakeholders.'

According to the ProPublica report, Harry Coker, a former senior executive at the CIA and NSA, described the escort setup bluntly: 'If I were an operative, I would look at that as an avenue for extremely valuable access. We need to be very concerned about that.'
A workaround for US restrictions

The escort system exists to satisfy federal rules requiring that only US citizens or permanent residents access sensitive defence data. Microsoft, which has large engineering operations in China, India and Europe, set up the escort model to navigate this restriction while scaling up its government cloud business.

Digital escorts have been used since at least 2016, when Microsoft launched its programme for handling Pentagon cloud contracts. According to those familiar with its development, concerns were raised early on. One former Microsoft cybersecurity strategist said they opposed the idea from the beginning, warning that the approach was too risky. Recruitment was handled in part by Lockheed Martin. At the time, one project manager said they told Microsoft that the escorts being hired 'would not have the right eyes' for the job because of low pay and a lack of specialised experience.

Microsoft defends the system

Microsoft insists that the escort model meets government standards. A company spokesperson told Fox News Digital: 'For some technical requests, Microsoft engages our team of global subject matter experts to provide support through authorised U.S. personnel, consistent with U.S. government requirements and processes.'

'All personnel and contractors with privileged access must pass federally approved background checks,' the spokesperson said. 'Global support personnel have no direct access to customer data or customer systems.'

Microsoft also says it uses multiple layers of security, including approval workflows and automated code reviews, to prevent threats. 'This production system support model is approved and regularly audited by the US government,' the company added. Insight Global, a contractor that provides Microsoft with digital escorts, said it screens candidates to ensure they have the technical capabilities and provides additional training.

Disconnect between Microsoft and the defence department

Despite Microsoft's claims, several officials within the US government said they were unaware of the escort system. When ProPublica contacted the Defence Information Systems Agency (DISA), even its public information office had not heard of the model. Later, DISA confirmed the escorts are used 'in select unclassified environments' and emphasised that experts under escort 'have no direct, hands-on access to government systems' but instead 'offer guidance and recommendations.'

Former Microsoft executive Pradeep Nair, who said he helped design the escort framework, argued that audit trails and other controls keep the system secure. 'Because these controls are stringent, residual risk is minimal,' he said.

Broader concerns about Chinese cyber access

Critics point to broader issues beyond Microsoft's processes. Chinese law gives authorities sweeping powers to compel companies and individuals to cooperate with state data collection. 'It would be difficult for any Chinese citizen or company to meaningfully resist a direct request from security forces or law enforcement,' said Jeremy Daum, a senior research fellow at Yale Law School.

Michael Sobolik, a senior fellow at the Hudson Institute, was blunt: 'This is like asking the fox to guard the henhouse and arming the chickens with sticks in case the fox gets mad. It beggars belief.'

And Michael Lucci, CEO of State Armor Action, said: 'If ProPublica's report turns out to be true, Microsoft has created a national embarrassment that endangers our soldiers, sailors, airmen and marines.
Heads should roll, those responsible should go to prison and Congress should hold extensive investigations to uncover the full extent of potential compromise.' He added, 'Microsoft or any vendor providing China with access to Pentagon secrets verges on treasonous behaviour and should be treated as such.'

Past breaches and unanswered questions

The digital escort model has not been directly linked to any breaches. But in 2023, Chinese hackers broke into Microsoft's cloud servers and stole thousands of emails from senior US officials, including the commerce secretary and the US ambassador to China. A government review by the now-disbanded Cyber Safety Review Board blamed Microsoft's security failures but made no mention of the escort system.

Still, critics say the bigger issue is trust. 'If these [ProPublica] allegations are credible, the federal government should never again rely on Microsoft to protect the data that keeps our men and women in uniform safe,' said Lucci. 'Our military cannot operate in security and secrecy if a vendor repeatedly and intentionally invites the enemy into the camp.'

ProPublica reached out to other major cloud providers to ask whether they use similar escort models. Amazon Web Services and Google Cloud declined to comment. Oracle did not respond. That silence has raised further questions about industry-wide practices and transparency in how foreign tech expertise is used in sensitive government work.

As scrutiny intensifies, one thing is clear: what began as a workaround is now at the centre of a growing national security debate.
