Dex is an AI-powered camera device that helps children learn new languages
The newly launched gadget is called Dex and resembles a high-tech magnifying glass, with a camera lens on one side and a touchscreen on the other. When kids use the device to photograph objects, the AI uses image recognition to identify each object and translate its name into the selected language. It also features interactive story lessons and games.
While kid-focused language learning apps like Duolingo Kids exist, Dex argues that it takes a more engaging approach that emphasizes hands-on experiences, allowing children to immerse themselves in the language.
'We're trying to teach authentic language in the real world in a way that's interactive,' Cao told TechCrunch. 'The kids are not only listening or doing what they are told to do, but rather, they are actually thinking, creating, interacting, running around, and just being curious about things, and acquire the necessary language associated with those concepts and objects.'
Dex is designed for kids ages 3 to 8 and currently supports Chinese, French, German, Hindi, Italian, Japanese, Korean, and Spanish. It also offers support for 34 dialects, including Egyptian Arabic, Taiwanese Mandarin, and Mexican Spanish.
In addition to object recognition, Dex features a library of interactive stories that encourage children to actively participate in the narrative. As the story unfolds, kids are prompted to respond, such as greeting characters in the language they are learning.
The device comes with a dedicated app for parents to see a detailed overview of their child's progress, including the vocabulary words they've learned, the stories they've engaged with, and the number of consecutive days they've used Dex.
Additionally, the company is developing a feature that allows kids to ask an AI chatbot questions and engage in free-form conversations. This feature is already available to some testers, but the company admits it isn't ready for a wider rollout. Parents might also be cautious about introducing AI chatbots to their children.
During our testing of Dex, we had concerns about the possibility of a child learning inappropriate words. Cao assured us that 'rigid safety prompts' are included whenever the large language model is used across vision, reasoning, and text-to-speech.
He said, 'We have an always-on safety agent that evaluates conversations in real-time and filters conversations with a safe stop word list. The agent will suppress conversation if any of the stop words are mentioned, including but not limited to those related to sexuality, religion, politics, etc. Parents will soon be able to further add to personalized stop word lists.'
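The suppression mechanism Cao describes can be illustrated with a minimal sketch: check each candidate response against a stop-word list before it is spoken, and support parent-added words. This is an illustration only; the word list, function names, and matching logic here are hypothetical, not Dex's actual implementation.

```python
import re

# Hypothetical default stop-word list; example entries only, not Dex's real list.
DEFAULT_STOP_WORDS = {"gun", "cigarette", "vape"}

def build_filter(extra_words=()):
    """Return a checker combining default and parent-added stop words."""
    words = DEFAULT_STOP_WORDS | {w.lower() for w in extra_words}
    # Word-boundary match so e.g. "gunnera" (a plant) isn't flagged by "gun".
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, sorted(words))) + r")\b",
        re.IGNORECASE,
    )
    # True means the text is safe to speak; False means suppress it.
    return lambda text: pattern.search(text) is None

is_safe = build_filter(extra_words=["firework"])
print(is_safe("The sky is blue"))    # True: no stop words present
print(is_safe("That is a toy gun"))  # False: contains "gun", so suppress
```

A real system would also have to handle misspellings, multi-word phrases, and translations of each stop word, which a plain word-boundary regex does not cover.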
Plus, the company said that the AI is trained using vocabulary standards similar to those found in Britannica Kids and other children's encyclopedias.
In our testing, the AI successfully ignored topics related to nudity. However, it did recognize and accurately translate the term 'gun,' something parents should consider when purchasing the device.
In response to our findings, Cao told us, 'Regulation-wise, I'm not worried, but I do think this presents a concern, especially among [some] parents.' He added that these concerns have prompted the company to plan a settings option that filters out specific words, such as guns, cigarettes, vape pens, fireworks, marijuana, and beer bottles.
Dex also has a zero data retention policy. While this means there's no risk of sensitive or personal images being stored, one downside could be that parents are left in the dark about the type of content their kids may be capturing.
Dex is also actively working towards obtaining COPPA certification, which would make it compliant with the Children's Online Privacy Protection Act.
The company secured funding from ClayVC, EmbeddingVC, Parable, and UpscaleX. Notable angel investors include Pinterest founder Ben Silbermann, Curated co-founder Eduardo Vivas, former OpenAI head of safety Lillian Weng, and Richard Wong (ex-Coursera).
The device is priced at $250, which feels steep for a product designed for children. However, Dex positions itself as a more affordable alternative to hiring a tutor, who can charge up to $80 per hour, or attending a language immersion school, which can cost several hundred to several thousand dollars.
Dex says that hundreds of families have already purchased the device.