The rise of AI-driven Search prompts a shift in SEO strategy
AI tools like ChatGPT and Gemini are reshaping how people search for information online. Barcelona-based agency La Teva Web is pioneering a new approach to SEO that helps companies gain visibility not just on Google, but also in generative AI responses.
Barcelona - May 19, 2025 - Generative AI is disrupting how users search and discover information online. As tools like ChatGPT, Google Gemini, Perplexity, and Microsoft Copilot become embedded in daily habits, the way people access content is shifting from traditional search engine listings to natural language answers. For businesses, this change marks a new frontier: being present in the response itself, not just ranking on a search page.
Leading this transformation is La Teva Web, a digital agency based in Barcelona and Google Partner Premier 2025, which has become a pioneer in SEO for AI. This emerging discipline — also known as LLMO (Large Language Model Optimization) — focuses on ensuring that a brand's content is not only optimized for Google, but also clearly understood and cited by large language models.
The agency helps clients analyze their current visibility in AI tools, adapt their content to meet LLM expectations, and implement technical best practices such as structured data and llms.txt. The goal: to increase the likelihood of being cited as a trusted source in AI-generated answers.
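Two of those best practices are concrete enough to illustrate. Structured data is machine-readable markup (typically schema.org JSON-LD) embedded in pages, and llms.txt is a proposed plain-markdown file served at a site's root that gives language models a curated map of the site's key content. Below is a minimal, hypothetical sketch of what such a file might look like; the URLs and descriptions are placeholders for illustration, not La Teva Web's actual file.

```
# La Teva Web

> Barcelona-based digital agency offering SEO, paid media, and web
> development, including optimization for generative AI search (LLMO).

## Services

- [SEO for AI](https://www.latevaweb.com/en/seo-for-ai-example): how the
  agency helps brands get cited in AI-generated answers (placeholder URL)

## Company

- [About the agency](https://www.latevaweb.com/en/about-example): team,
  certifications, and contact details (placeholder URL)
```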
'With more users turning to AI assistants to find answers, make decisions, or compare products, this isn't a future trend — it's today's challenge,' says Francesc Díaz of La Teva Web. 'Businesses that act now will gain a competitive edge in a space that's still underexploited.'
Media Contact
Company Name: La Teva Web SL
Contact Person: Francesc
Phone: 0034932503586
City: Barcelona
State: Catalonia
Country: Spain
Website: https://www.latevaweb.com/en/
Press Release Distributed by ABNewswire.com

Related Articles
Yahoo
Apple dismisses Elon Musk's claims that App Store favors OpenAI over other AI apps
Apple has rejected Elon Musk's accusations that its App Store is biased against AI apps that compete with OpenAI. 'We feature thousands of apps through charts, algorithmic recommendations and curated lists selected by experts using objective criteria,' the BBC quoted Apple as saying.

The response comes after Musk threatened to sue Apple via xAI, claiming the iPhone maker was 'behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store.' Apple and OpenAI entered a partnership last year to integrate the AI company's models and ChatGPT into Apple products, like Siri and iOS. But there is no evidence that the App Store favors OpenAI over other AI companies; indeed, AI apps like Perplexity and DeepSeek have topped the App Store charts over the last year.

In previous years, Apple would likely have ignored claims of chart manipulation. Today, however, the company faces regulatory pressure and new laws around the world intended to rein in its power in the app distribution market. Apple has also recently been taken to task by a U.S. district judge in its case with Epic Games for not implementing policy changes as the court instructed.
Yahoo
Is Baby Grok really for kids? X's AI past says otherwise
Elon Musk's xAI unveils plans for Baby Grok, a kid-friendly AI chatbot, amid questions over safety, trust, and Grok's controversial past.

Elon Musk's xAI is reportedly developing 'Baby Grok', a child-friendly version of its Grok AI chatbot. Early reports suggest it could serve as an educational assistant for children, answering questions and guiding learning in a safe, age-appropriate manner. But creating a truly trustworthy AI for kids isn't just about censoring bad words; it requires rebuilding the AI's training and safety systems from the ground up so it can protect young users from harmful, biased, or misleading content.

With AI now present in classrooms, homes, and even children's toys, the stakes are high. If Baby Grok succeeds, it could become a valuable tool for modern education. If it fails, it could become another high-profile example of AI safety gone wrong. So how exactly would Baby Grok work? What challenges will it face, and can Musk's team really make it safe enough for kids? Let's dig into the details.

Why is Elon Musk building Baby Grok now?

Musk's AI company, xAI, launched Grok in 2023 as a bold alternative to ChatGPT, integrating it directly into X (formerly Twitter) for premium subscribers. Grok quickly made headlines for its humor, quick wit, and intentionally 'edgy' tone, a style that won over some fans but also drew sharp criticism when the AI produced inappropriate or offensive responses to user prompts.

The pivot toward Baby Grok mirrors a broader industry shift toward AI in education. Tech giants like OpenAI, Google, and Meta are racing to build AI-driven tutoring systems, interactive study aids, and personalized learning companions. The global AI-in-education market is projected to grow from USD 3.79 billion in 2022 to USD 20.54 billion in 2027, a CAGR of 45.6%.

For Musk, a 'safe-mode' version of Grok could serve multiple purposes: appealing to parents who are wary of unfiltered AI, offering schools a controlled digital learning assistant, and easing concerns from regulators pushing for stricter AI safety laws. At the same time, it could shield xAI from the brand damage that unmoderated, free-form AI can cause, something Grok's initial rollout demonstrated all too well.

What does Baby Grok promise, and how would it work?

Baby Grok is designed to be a smart, child-friendly learning assistant, not just a filtered version of Grok. Its key goals include:

- Translating complex topics into age-appropriate explanations: presenting subjects like math, science, or history in ways that match a child's comprehension level.
- Avoiding profanity, explicit content, and violence: going beyond simple word filters to context-sensitive awareness that keeps conversations safe and respectful.
- Offering interactive learning games and storytelling: features like games and stories can make learning fun while supporting skill development, an approach shown to improve engagement and emotional growth in children.
- Encouraging curiosity without overwhelming: providing thoughtful, manageable responses that spark interest without cognitive overload.

Creating this experience takes more than flipping a 'safe mode' switch; it requires filtered training data, child-centered design, and continuous oversight to ensure Baby Grok is both safe and genuinely educational.

How would Baby Grok actually keep conversations kid-friendly?
Musk hasn't released technical specifications yet, but AI safety researchers stress that a truly safe children's AI requires a blend of technology, human oversight, and continuous updates:

- Custom training dataset: rather than being trained on open internet data, which often includes adult, violent, or misleading content, Baby Grok would need a highly curated, education-first dataset (drawing on sources such as National Geographic Kids, BBC Bitesize, or NASA's Climate Kids).
- Real-time content filtering: outputs should pass through advanced filters similar to Microsoft's Content Safety API or Google SafeSearch, which automatically block inappropriate language, explicit imagery, or unsafe suggestions (a minimal code sketch of this layered pipeline appears below, just before 'What's next for Baby Grok?').
- Human moderation: since no filter is flawless, platforms like Roblox and Discord rely on moderation teams to review flagged content. Baby Grok would need the same level of vigilance.
- Adaptive safeguards: online risks evolve fast. Ongoing updates guided by child psychologists, researchers, and organizations like eSafety are essential to keep the AI responsive to emerging threats.

Importantly, a 2025 benchmark study introduced MinorBench, a tool specifically designed to assess language models' compliance with child-safety safeguards, and it revealed significant variability, even among top AI systems like Grok, in how well they refuse unsafe or inappropriate queries.

Can an edgy adult AI be remade for children?

Musk's challenge is credibility. Grok's 'uncensored' style was originally marketed as a feature for adults, a stark contrast to the thoughtful tone parents expect from child-friendly tools. Platforms like YouTube Kids demonstrate how even strict filters can fail, and with generative AI the risk is amplified: every response is created in real time, not pulled from a pre-approved library.

Researchers at the Center for Humane Technology emphasize that without deep safety integration, child-facing AI can still reproduce bias, misinformation, or harmful advice. A framework introduced in 2025, 'LLMs and Childhood Safety', underscores this risk by proposing protective measures for language models used with children, pointing out that current systems often lack the standards needed to reliably prevent harmful outputs, even when they are designed for younger audiences.

Why is building AI for kids more complex than it looks?

Developing AI for children means meeting stricter legal and ethical standards than typical AI design. First, there are data privacy protections: tools like Baby Grok must comply with stringent child-specific laws such as COPPA in the U.S. and GDPR-K in Europe, which demand explicit parental consent and careful handling of minors' data. Equally important is bias reduction: the AI must avoid reinforcing harmful stereotypes and treat all children fairly, regardless of background. Then there's the need for educational accuracy, meaning the AI's responses must be grounded in trusted sources like Britannica Kids or Scholastic Learn at Home.

A 2025 study published in Nature Humanities and Social Sciences Communications reinforces that ethical design, transparency, and verified educational content are critical for AI systems targeting children. The challenge is not just building a smart tool but earning lasting trust from parents, educators, and regulators alike.
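Pulling those safeguards together, here is a minimal sketch of the layered pipeline described under 'How would Baby Grok actually keep conversations kid-friendly?': generate a draft answer, run it through a content filter, and escalate flagged output to a human review queue while returning a safe refusal. Every function and term list here is a hypothetical placeholder, not xAI's architecture or any real moderation API.

```python
# Hypothetical layered safety pipeline: generate, filter, escalate.
from dataclasses import dataclass

@dataclass
class SafetyVerdict:
    allowed: bool
    reason: str

def generate_answer(prompt: str) -> str:
    """Stand-in for the child-tuned language model."""
    return f"Here is a kid-friendly explanation of: {prompt}"

def content_filter(text: str) -> SafetyVerdict:
    """Toy classifier standing in for a real content-safety service,
    which would score categories such as violence or adult content."""
    blocked_terms = {"violence", "weapon"}  # illustrative denylist only
    for term in blocked_terms:
        if term in text.lower():
            return SafetyVerdict(False, f"blocked term: {term}")
    return SafetyVerdict(True, "clean")

def answer_child_query(prompt: str, review_queue: list) -> str:
    draft = generate_answer(prompt)
    verdict = content_filter(draft)
    if not verdict.allowed:
        # No filter is flawless, so flagged output also goes to human moderators.
        review_queue.append((prompt, draft, verdict.reason))
        return "That's a question for a grown-up. Want to try another one?"
    return draft

queue: list = []
print(answer_child_query("How do volcanoes work?", queue))
```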
What's next for Baby Grok?

If xAI follows its usual rollout strategy, Baby Grok may begin in a closed beta reserved for Premium+ X subscribers. The company could then expand through integrations with classroom platforms or learning devices, partnerships with educational publishers, and features such as parent dashboards that offer conversation logs and usage controls.

But success will largely depend on early safety testing. A single viral misstep, such as a child receiving harmful advice, could severely damage its reputation before it is widely adopted. A 2025 study on AI educational tools emphasizes the importance of establishing strong safety guardrails from the start, including prompt engineering, human-in-the-loop moderation, and continuous evaluation to keep content curriculum-aligned and age-appropriate.

Can Baby Grok handle the weight of childhood?

Baby Grok could redefine AI in education if it balances innovation with strong safety and transparency. But Musk's history of pushing fast, sometimes at the cost of polish, raises questions. In short:

- Baby Grok represents xAI's bold move into child-focused AI amid rising demand for educational tech.
- Building a safe AI for kids isn't easy; it takes more than filters, involving retraining, oversight, and ethical design.
- Musk's credibility is under scrutiny, especially given Grok's edgy origins and controversial responses.
- Past platform failures show the risks of repurposing adult AI for children without deep safeguards.
- Trust, from parents, educators, and regulators alike, will be the ultimate test.

The key issue isn't just 'Can Baby Grok be built?' but 'Will parents trust it?' If xAI meets the highest safety benchmarks and proves its reliability through independent audits, Baby Grok could set a new standard for kid-friendly AI. If not, it risks becoming another cautionary tale in the growing list of AI failures.

This story is made with AI assistance and human editing.


Fast Company
Agentic AI in three moves: Connect, co-operate, concierge
A spring storm wipes out the last Thursday flight from Dallas. One in-app request later, an invisible digital assistant secures a nearby hotel, drops a meal voucher into your wallet, reroutes your suitcase, updates tomorrow's Teams meeting, and (because it senses weariness in your voice) orders an Uber Black. The experience feels less like self-service and more like having a well-briefed assistant orchestrating every detail. The result is not merely a rescued itinerary, but a deeper affinity with the very airline that just disrupted your plans.

That feeling is the promise of agentic AI: software that breaks a goal into tasks, taps outside systems, remembers context across channels, and hands control back to a person whenever judgment matters.

The timing is right. IBM research shows 76% of executives are piloting autonomous agents and 86% expect them to reshape workflows by 2027. Customers are more impatient than ever: 17% quit after one bad interaction, 59% after a few, and 70% expect the next employee or interface to know their context without being told. Agentic AI can meet that bar, but only if we build in the right order. Think of it as a three-stage path that begins with connect, moves into co-operate, and ends with the promised land of concierge. Here's how it can work:

1. CONNECT: GIVE EVERY CHANNEL THE SAME MEMORY

First comes plumbing. Funnel every click, order, chat, and loyalty swipe into a lightweight feature store: a living memory capsule any touchpoint can open. This capsule powers an AI copilot beside frontline staff. The copilot drafts answers, suggests next moves, and flags land mines, such as a refund that would void a bundle discount, before a rep hits send.

Customer Benefit: Instant, consistent replies in one brand voice, regardless of whether the message arrives by SMS, email, or IVR.

Business Benefit: Average handle time drops an estimated 25%, and brittle integrations surface early, long before full autonomy.

Design work changes focus. Instead of polishing pixels, teams choreograph conversations: tone curves, escalation triggers, and data contracts replace static mock-ups. Brand language is encoded once, then reused everywhere.

2. CO-OPERATE: LET A SUPERVISED AGENT OWN HIGH-VOLUME TASKS

With the pipes proven, elevate one high-volume journey (where is my order, a password reset, an in-store return) to a supervised agent that still knows when to wave in a human. Because it reads from the same memory capsule, the agent can finish a sizing chat that began on TikTok and push a shipping update straight to Apple Wallet without losing context. If confidence drops, the full thread, with rationale, lands on a human agent's screen.

Customer Benefit: Friction disappears. When a shoe size is gone online, the system checks nearby stores and offers same-day pickup. At the airport, kiosks spot a risky connection and print a later boarding pass before panic sets in.

Business Benefit: Problems are solved on first contact and revenue pops fast; one fashion pilot lifted buy-online-pick-up-in-store conversions by 22% when agents brokered inventory in real time.

Designers double down on micro-interactions, surfacing one-tap options and keeping language tight. Strategy teams track completion and sentiment, not cost alone, to keep the experience human.
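Here is a minimal sketch of that co-operate pattern under stated assumptions: the connect stage's shared memory capsule is modeled as a plain Python dict, the intent classifier is a keyword stub, and the 0.85 confidence threshold is arbitrary. Every name is illustrative, not any real vendor's API.

```python
# Supervised agent with a shared memory capsule and confidence-gated handoff.
MEMORY_CAPSULE = {  # one record per customer, readable by every channel
    "cust-42": {"last_order": "ORD-1001", "channel": "tiktok", "loyalty_tier": "gold"},
}

def classify(message: str) -> tuple[str, float]:
    """Stub intent classifier returning (intent, confidence)."""
    if "where is my order" in message.lower():
        return "order_status", 0.93
    return "unknown", 0.40

def lookup_order_status(order_id: str) -> str:
    return f"Order {order_id} is out for delivery."  # stand-in for an OMS call

def handle(customer_id: str, message: str) -> str:
    context = MEMORY_CAPSULE.get(customer_id, {})
    intent, confidence = classify(message)
    if intent == "order_status" and confidence >= 0.85:
        return lookup_order_status(context.get("last_order", "unknown"))
    # Confidence dropped: hand the full thread and rationale to a human.
    return (f"Escalating to a human agent with context {context} "
            f"(intent={intent}, confidence={confidence:.2f}).")

print(handle("cust-42", "Where is my order?"))
print(handle("cust-42", "I want to change my billing plan."))
```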
3. CONCIERGE: ORCHESTRATE SPECIALISTS EVERYWHERE

The final move adds an orchestrator that spawns specialist sub-agents for billing, loyalty, inventory, payments, and more, all sharing state. Open-source stacks such as LangGraph give engineers fine-grained control without vendor lock-in (a minimal LangGraph sketch appears at the end of this article). In ecommerce, the effect is dramatic: a shopping companion can search, compare, and check out inside the brand app, while new payment rails let AI complete a purchase securely on any site.

Customer Benefit: The brand seems to anticipate needs. Wish lists refresh when sizes restock, pantry staples reorder themselves within a spending cap, and a smart mirror recalls everything tried at home. An edit log and a one-tap undo keep the human in charge.

Business Benefit: Service savings persist, but growth becomes the headline. Cart abandonment falls as agents juggle price, stock, and rewards in milliseconds. Every interaction pushes live insight back into product, pricing, and media models.

Design, tech, and strategy converge around explainability and seamless human hand-offs; a live expert can join the chat without forcing the shopper to start over.

GUARDRAILS THAT MAKE IT SAFE AND CREDIBLE

Autonomy delights only when it is accountable. Validate with pre-launch testing of generated copy against brand-voice guidelines. Keep a visible skills registry so compliance teams know which agent touches which data. Run security and privacy drills at least quarterly. These checkpoints matter as much to experience quality as typography or color.

THE PAYOFF: LESS FRICTION, MORE GROWTH

Execute connect, co-operate, and concierge in sequence, and customers notice what is missing: repeated forms, channel seams, surprise fees, awkward transfers. A purchase becomes a sentence such as, 'Find me a vegan gift under fifty dollars, deliver Tuesday, use points first,' and service stays invisible until empathy is needed, at which point a person appears fully briefed.

For the organization, efficiency shows up first, growth second. Support costs drop, conversion climbs, and the feature store becomes a feedback loop that sharpens product road maps and media buys. Strategists rank journeys by value and complexity, technologists instrument every agent step, and designers craft feelings of continuity, anticipation, and control.

Brands that master connect, co-operate, concierge will own both the moments that suit automation and the critical times when a human must lead. Those that wait may find their next customer already served instantly by someone else's digital assistant.
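To ground the concierge stage referenced above, here is a minimal sketch of an orchestrator routing to specialist sub-agents that share one state object, written against LangGraph's StateGraph API as commonly documented (the library evolves, so details may differ). The routing is a keyword stub and the node bodies are placeholders; in a real system each node would call an LLM or a backend service. This illustrates the pattern, not any production implementation.

```python
# Hypothetical "concierge" orchestrator: one router node dispatches to
# specialist sub-agents (billing, loyalty, inventory) that share state.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class ConversationState(TypedDict):
    request: str   # the customer's goal in natural language
    intent: str    # routed intent chosen by the orchestrator
    response: str  # the specialist's answer

def orchestrator(state: ConversationState) -> dict:
    # Keyword stub; a real router would classify intent with an LLM.
    text = state["request"].lower()
    if "refund" in text or "charge" in text:
        return {"intent": "billing"}
    if "points" in text or "rewards" in text:
        return {"intent": "loyalty"}
    return {"intent": "inventory"}

def billing_agent(state: ConversationState) -> dict:
    return {"response": f"[billing] resolved: {state['request']}"}

def loyalty_agent(state: ConversationState) -> dict:
    return {"response": f"[loyalty] resolved: {state['request']}"}

def inventory_agent(state: ConversationState) -> dict:
    return {"response": f"[inventory] resolved: {state['request']}"}

graph = StateGraph(ConversationState)
graph.add_node("orchestrator", orchestrator)
graph.add_node("billing", billing_agent)
graph.add_node("loyalty", loyalty_agent)
graph.add_node("inventory", inventory_agent)
graph.set_entry_point("orchestrator")
# Route to the specialist the orchestrator picked; every specialist ends the run.
graph.add_conditional_edges(
    "orchestrator",
    lambda state: state["intent"],
    {"billing": "billing", "loyalty": "loyalty", "inventory": "inventory"},
)
for name in ("billing", "loyalty", "inventory"):
    graph.add_edge(name, END)

app = graph.compile()
result = app.invoke({"request": "Use my points first for this gift", "intent": "", "response": ""})
print(result["response"])  # -> [loyalty] resolved: Use my points first for this gift
```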