Baby Grok: A chatbot that'll need more than a nanny

Mint | 22-07-2025
Decades ago, debates raged over the exposure of children to external influences like advertising.
The internet turned the idea of shielding kids into a lost cause, but Elon Musk's proposed launch of a 'kid-friendly' AI chatbot called Baby Grok should revive concerns.
The name hints at an inbuilt nanny to keep chats age-appropriate.
Yet, as an AI brand, xAI's Grok has already distinguished itself with scandalous responses and uncivil comments.
This chatbot's boorish behaviour has spawned memes and amused many, but also left observers aghast at xAI's anything-goes approach to chatbot training.
While it may conform with Musk's absolutist position on free speech, it also makes it unlikely that parents would be glad to have their kids engage with any chatbot from xAI, regardless of how the company pitches Baby Grok.
If Musk's strategic intent is to 'catch them young', then that's all the more reason to put this project under scrutiny.
If Musk's declaration is just a decoy, plausibly meant to defend Grok by insinuating that adult chats need no filters, then we might have less to worry about.
Either way, demanding age gates for chatbot access may be worth a try.

Related Articles

Sam Altman vs Elon Musk: Who was a brighter student?

Time of India | 6 hours ago

It's a tale of two rebels, each standing at the frontier of artificial intelligence, each reimagining the future of technology, and yet each shaped by radically different relationships with education. Elon Musk and Sam Altman have both emerged as era-defining figures. One is rewriting the script for interplanetary life and autonomous machines; the other is scripting the very language that machines now use to write back. But beneath the rockets, bots, and billion-dollar valuations lies a question both urgent and timeless: whose educational journey speaks more to this generation, and the next?

The premise: Learning beyond the lecture hall

In a world where traditional college degrees are losing their monopoly on success, Musk and Altman offer two distinct case studies on how far vision, curiosity, and risk-taking can carry you. Not merely as entrepreneurs, but as self-architected thinkers, their stories challenge the notion that diplomas dictate destiny. And yet their respective narratives, one shaped by escape velocity, the other by algorithmic reinvention, reveal more than personal ambition. They reflect two competing philosophies of what education should be: an accelerant for bold invention, or a blueprint for structured disruption.

Elon Musk: The degree collector who defied the syllabus

Elon Musk's educational trajectory was less a straight line and more a launch sequence, each stop a fuel station en route to ignition. Born in Pretoria, South Africa, Musk exhibited an early obsession with computers and engineering, coding his first video game by age 12. Education, for him, was not a finish line but a toolkit. He began at Queen's University in Canada and transferred to the University of Pennsylvania, walking away with dual bachelor's degrees in Physics and Economics, fields that would later serve as scaffolding for SpaceX and Tesla.

But the most revealing educational move Musk made was not one he completed. Enrolling in a PhD program in Applied Physics at Stanford, Musk dropped out within 48 hours, a footnote that speaks louder than any dissertation. That moment wasn't a rejection of knowledge, but of stagnation. He saw no value in waiting for permission to invent the future. Musk's lesson? Learn everything, but don't let anything keep you from building.

Sam Altman: The dropout who reprogrammed Silicon Valley

Then there's Sam Altman: quietly intense, intellectually omnivorous, and dangerously good at spotting what comes next. Long before he co-founded OpenAI or built ChatGPT into a global sensation, Altman was a precocious kid in St. Louis, disassembling his Macintosh for fun. He attended Stanford University for Computer Science but left after two years to launch Loopt, a geolocation app that fizzled financially but blazed his trail into tech's inner sanctum.

His real education began after he dropped out. As President of Y Combinator, Altman became the oracle of early-stage innovation, nurturing Airbnb, Dropbox, and Stripe. He then pivoted into global AI leadership, co-founding OpenAI with a mission as ambitious as it is philosophical: ensure that artificial general intelligence benefits all of humanity. Unlike Musk, Altman doesn't flaunt his dropout status. He doesn't need to. The way he's built OpenAI, Worldcoin, and his own brand of 'tech diplomacy' proves that he wasn't walking away from learning; he was walking towards a more useful version of it. Altman's lesson? Education is everywhere, especially when you leave the classroom.

Two roads, one summit

While Musk charges ahead with Martian colonization and neural interfaces, Altman is charting the evolution of digital consciousness. Musk imagines machines that move matter; Altman imagines machines that move meaning. And yet their views on education converge in one quiet truth: school may start the fire, but it's your obsession that keeps it burning. Musk internalized the value of learning but refused to let school slow him down. Altman saw Stanford not as an institution to finish, but as a springboard to jump from. Both treated education as modular, taking what served them and discarding the rest.

So whose journey is more inspiring? The answer lies not in comparing GPAs or net worth, but in decoding the why behind their choices. If you believe that education should be structured, global, and multidisciplinary, Musk's journey offers the blueprint. His path assures you that institutional knowledge matters, but only if it fuels your launch rather than holding you back. If you see education as something lived rather than lectured, then Altman is your north star. His trajectory shows how a college dropout can still become an intellectual juggernaut, provided he's willing to build, break, and rewire systems from the ground up.

In a way, both men are rebels with a cause. And for students watching from the sidelines, the moral isn't 'drop out' or 'go all in.' It's this: be relentlessly curious. Learn faster than the world changes. And most importantly, write your own curriculum. Because in the age of AI, Mars missions, and machine tutors, inspiration no longer belongs to degrees. It belongs to those brave enough to teach themselves what school never could.

Anthropic nears funding at $170 billion value as revenue surges

Economic Times | 7 hours ago

Bloomberg

Anthropic is nearing a deal to raise as much as $5 billion in a new round of funding that would value the artificial intelligence startup at $170 billion, according to a person familiar with the matter. Iconiq Capital is leading the round, which is expected to total between $3 billion and $5 billion, said the person, who spoke on condition of anonymity to discuss private information. Iconiq is in talks to invest about $1 billion in the deal, some of the people said. Anthropic has also been in discussions with the Qatar Investment Authority and Singapore's sovereign fund GIC about participating in the round, the person said. Other potential investors include Amazon.com Inc., which previously bet billions on the company.

As the most recent funding talks came together, Anthropic's sales have surged. The company was generating about $4 billion in annual recurring revenue earlier this month, Bloomberg previously reported. By the end of July, that number had climbed to about $5 billion, according to one of the people. The company expects its recurring revenue could hit $9 billion by the end of this year.

The new financing would mark a significant jump in valuation for the company and cement its status as one of the world's leading AI developers. Anthropic was valued at $61.5 billion in a $3.5 billion round led by Lightspeed Venture Partners earlier this year. Lightspeed is also participating in the new round, according to a person familiar with the matter. Other VCs in talks to participate include Menlo Ventures and Alkeon Capital Management. The company is taking checks no smaller than $200 million in the deal. The financing may ultimately involve a second lead investor, one person said. The discussions are still being finalized and the details could change. Iconiq, GIC, Lightspeed, Amazon and Menlo Ventures declined to comment. Representatives for QIA and Alkeon did not immediately respond to requests for comment.

Anthropic, founded in 2021 by former employees of OpenAI, has positioned itself as a reliable, safety-conscious firm that users can trust. The new funding will fuel Anthropic's competition with OpenAI and Elon Musk's xAI, each of which has raised billions in capital this year to finance their investments in data centers and talent for building AI. OpenAI was most recently valued at $300 billion, including money raised. Musk is said to be seeking a valuation of as much as $200 billion for xAI.

Large-language model developers like OpenAI and xAI have previously turned to deep-pocketed Middle Eastern backers such as Abu Dhabi's MGX for funding. Sovereign wealth funds typically have far more assets at their disposal to invest than traditional venture capital firms.

In a recent memo to employees, first reported by Wired, Anthropic Chief Executive Officer Dario Amodei acknowledged the need to raise funds from the Middle East after having previously voiced concerns about taking money from authoritarian countries. 'Unfortunately, I think "No bad person should ever benefit from our success" is a pretty difficult principle to run a business on,' he wrote.

How spy agencies are experimenting with the newest AI models

Hindustan Times | 9 hours ago

On the same day as Donald Trump's inauguration as president, DeepSeek, a Chinese company, released a world-class large language model (LLM). It was a wake-up call, observed Mr Trump. Mark Warner, vice-chair of the Senate Intelligence Committee, says that America's intelligence community (IC), a group of 18 agencies and organisations, was 'caught off guard'.

Last year the Biden administration grew concerned that Chinese spies and soldiers might leap ahead in the adoption of artificial intelligence (AI). It ordered its own intelligence agencies, the Pentagon and the Department of Energy (which builds nuclear weapons) to experiment more aggressively with cutting-edge models and work more closely with 'frontier' AI labs, principally Anthropic, Google DeepMind and OpenAI. On July 14th the Pentagon awarded contracts worth up to $200m each to Anthropic, Google and OpenAI, as well as to Elon Musk's xAI (whose chatbot recently, and briefly, self-identified as Hitler after an update went awry), to experiment with 'agentic' models. These can act on behalf of their users by breaking down complex tasks into steps and exercising control over other devices, such as cars or computers (a toy illustration of the pattern appears below).

The frontier labs are busy in the spy world as well as the military one. Much of the early adoption has been in the area of LLM chatbots crunching top-secret data. In January Microsoft said that 26 of its cloud-computing products had been authorised for use in spy agencies. In June Anthropic said it had launched Claude Gov, which had been 'already deployed by agencies at the highest level of US national security'. The models are now widely used in every American intelligence agency, alongside those from competing labs.

AI firms typically fine-tune their models to suit the spooks. Claude, Anthropic's public-facing model, might reject documents with classified markings as part of its general safety features; Claude Gov is tweaked to avoid this. It also has 'enhanced proficiency' in the languages and dialects that government users might need. The models typically run on secure servers disconnected from the public internet. A new breed of agentic models is now being built inside the agencies.

The same process is under way in Europe. 'In generative AI we have tried to be very, very fast followers of the frontier models,' says a British source. 'Everyone in UKIC [the UK intelligence community] has access to top-secret [LLM] capability.' Mistral, a French firm and Europe's only real AI champion, has a partnership with AMIAD, France's military-AI agency. Mistral's Saba model is trained on data from the Middle East and South Asia, making it particularly proficient in Arabic and smaller regional languages, such as Tamil. In January +972 Magazine reported that the Israeli armed forces' use of GPT-4, then OpenAI's most advanced LLM, increased 20-fold after the start of the Gaza war.

Despite all this, progress has been slow, says Katrina Mulligan, a former defence and intelligence official who leads OpenAI's partnerships in this area. 'Adoption of AI in the national-security space probably isn't where we want it to be yet.' The NSA, America's signals-intelligence agency, which has worked on earlier forms of AI, such as voice recognition, for decades, is a pocket of excellence, says an insider. But many agencies still want to build their own 'wrappers' around the labs' chatbots, a process that often leaves them far behind the latest public models.
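To make the 'agentic' pattern concrete, here is a minimal sketch in Python of the loop such systems run: the model is handed a goal, proposes one step at a time, and feeds each result back into its next prompt. Everything here (call_llm, run_tool) is a hypothetical stand-in, not any lab's or agency's actual interface.

```python
# A minimal sketch of an "agentic" loop: the model decomposes a goal into
# steps and writes its own next prompt from prior results. call_llm and
# run_tool are hypothetical stand-ins, not any vendor's real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a request to a large language model."""
    return "DONE"  # a real client would return the model's proposed step

def run_tool(action: str) -> str:
    """Stand-in for executing one step (a search, a device command, etc.)."""
    return f"result of: {action}"

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model generates its own next prompt from everything so far --
        # the recursion that makes agentic systems hard to predict.
        step = call_llm("\n".join(history) + "\nNext step (or DONE):")
        if step.strip().upper().startswith("DONE"):
            break
        history.append(f"Step: {step}\nResult: {run_tool(step)}")
    return "\n".join(history)

print(run_agent("collate open-source reports on a topic"))
```

A real deployment would wrap this loop in tool permissions and human review; the sketch only shows the self-prompting recursion, which is also why errors can compound, as Dr Carter warns below.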
'The transformational piece is not just using it as a chatbot,' says Tarun Chhabra, who led technology policy for Joe Biden's National Security Council and is now the head of national-security policy at Anthropic. 'The transformational piece is: once you start using it, then how do I re-engineer the way I do the mission?'

A game of AI spy

Sceptics believe that these hopes are inflated. Richard Carter of the Alan Turing Institute, Britain's national institute for AI, argues that what intelligence services in America and Britain really want is for the labs to significantly reduce 'hallucinations' in existing LLMs. British agencies use a technique called 'retrieval-augmented generation', in which one algorithm searches for reliable information and feeds it to an LLM, to minimise hallucinations, says the unnamed British source (a toy sketch of the technique appears below). 'What you need in the IC is consistency, reliability, transparency and explainability,' Dr Carter warns.

Instead, labs are focusing on more advanced agentic models. Mistral, for example, is thought to have shown would-be clients a demonstration in which each stream of information, such as satellite images or voice intercepts, is paired with one AI agent, speeding up decision-making. Alternatively, imagine an AI agent tasked with identifying, researching and then contacting hundreds of Iranian nuclear scientists to encourage them to defect. 'We haven't thought enough about how agents might be used in a war-fighting context,' adds Mr Chhabra.

The problem with agentic models, warns Dr Carter, is that they recursively generate their own prompts in response to a task, making them more unpredictable and increasing the risk of compounding errors. OpenAI's most recent agentic model, ChatGPT agent, hallucinates in around 8% of answers, a higher rate than the company's earlier o3 model, according to an evaluation published by the firm.

Some AI labs see such concerns as bureaucratic rigidity, but it is simply a healthy conservatism, says Dr Carter. 'What you have, particularly in the GCHQ,' he says, referring to the NSA's British counterpart, 'is an incredibly talented engineering workforce that are naturally quite sceptical about new technology.'

This also relates to a wider debate about where the future of AI lies. Dr Carter is among those who argue that the architecture of today's general-purpose LLMs is not designed for the sort of cause-and-effect reasoning that gives them a solid grasp on the world. In his view, the priority for intelligence agencies should be to push for new types of reasoning models.

Others warn that China might be racing ahead. 'There still remains a huge gap in our understanding as to how and how far China has moved to use DeepSeek' for military and intelligence purposes, says Philip Reiner of the Institute for Security and Technology, a think-tank in Silicon Valley. 'They probably don't have similar guardrails like we have on the models themselves and so they're possibly going to be able to get more powerful insights, faster,' he says.

On July 23rd the Trump administration ordered the Pentagon and intelligence agencies to regularly assess how quickly America's national-security agencies are adopting AI relative to competitors such as China, and to 'establish an approach for continuous adaptation'. Almost everyone agrees on this. Senator Warner argues that American spooks have been doing a 'crappy job' of tracking China's progress. 'The acquisition of technology [and] penetration of Chinese tech companies is still quite low.'
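As a rough illustration of the retrieval-augmented generation technique described above, the sketch below fetches vetted documents with a toy keyword ranker and instructs the model to answer only from them. The corpus and the call_llm client are illustrative assumptions, not any agency's pipeline.

```python
# A toy sketch of retrieval-augmented generation (RAG): fetch vetted
# documents first, then constrain the model to answer only from them.
# call_llm and the corpus are illustrative stand-ins, not a real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a request to a large language model."""
    return "unknown"  # a real client would generate a grounded answer

def retrieve(query: str, corpus: dict[str, str], k: int = 3) -> list[str]:
    """Toy retriever: rank documents by how many query terms they share."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus.values(),
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(query: str, corpus: dict[str, str]) -> str:
    """Feed only the retrieved, vetted text to the model."""
    context = "\n".join(retrieve(query, corpus))
    prompt = (
        "Answer using ONLY the sources below; reply 'unknown' if they "
        f"do not contain the answer.\n\nSources:\n{context}\n\n"
        f"Question: {query}"
    )
    return call_llm(prompt)

corpus = {"doc1": "Saba is trained on data from the Middle East and South Asia."}
print(answer("What is Saba trained on?", corpus))
```

A production system would swap the keyword ranker for vector search over a curated index; the design point is simply that the model only ever sees pre-approved text.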
The biggest risk, says Ms Mulligan, is not that America rushes into the technology before understanding the risks. 'It's that DoD and the IC keep doing things the way they've always done them. What keeps me up at night is the real possibility that we could win the race to AGI [artificial general intelligence]...and lose the race on adoption.'
