
Before Going to Tokyo, I Tried Learning Japanese With ChatGPT
Feb 19, 2025, 7:00 AM
WIRED tests the benefits and limitations of using generative AI, specifically ChatGPT's Advanced Voice Mode, as a language tutor for travelers.
Photo-illustration: WIRED Staff; Getty Images
On the final day of my visit to Japan, I'm alone and floating in some skyscraper's rooftop hot springs, praying no one joins me. For the last few months, I've been using ChatGPT's Advanced Voice Mode as an AI language tutor, part of a test to judge generative AI's potential as both a learning tool and a travel companion. The excessive talking to both strangers and a chatbot on my phone was illuminating as well as exhausting. I'm ready to shut my yapper for a minute and enjoy the silence.
When OpenAI launched ChatGPT in late 2022, it set off a firestorm of generative AI competition and public interest. More than two years later, many people are still unsure whether the technology can be useful in their daily lives outside of work.
A video OpenAI released in May 2024 stuck in my memory: two researchers chatting back and forth, one in English and the other in Spanish, with ChatGPT acting as a low-latency interpreter. I wondered how practical Advanced Voice Mode could be for learning to speak bits of a new language, and whether it's a worthwhile tool for travelers.
To better understand how AI voice tools might transform the future of language learning, I spent a month practicing Japanese with the ChatGPT smartphone app before traveling to Tokyo for the first time. Outside of watching some anime, I had zero working knowledge of the language. During conversation sessions with the Advanced Voice Mode that usually lasted around 30 minutes, I often approached it as my synthetic, over-the-phone language tutor, practicing basic travel phrases for navigating transportation, restaurants, and retail shops.
On a previous trip, I'd used Duolingo, a smartphone app with language-learning quizzes and games, to brush up on my Spanish. I was curious how ChatGPT would compare. I often test new AI tools to understand their benefits and limitations, and I was eager to see if this approach to language learning could be the killer feature that makes these tools more appealing to more people.
Me and My AI Language Tutor
Jackie Shannon, an OpenAI product lead for multimodal AI and ChatGPT, says she uses the chatbot to practice Spanish vocabulary as she drives to the office. She suggests beginners like me start by using it to learn phrases; more knowledgeable learners can jump straight into free-flowing dialog with the AI tool. 'I think they should dive straight into conversation,' she says. 'Like, "Help me have a conversation about the news on X." Or, "Help me practice ordering dinner."'
So I worked on useful travel phrases with ChatGPT and acted out role-playing scenarios, like pretending to order food and make small talk at an izakaya. Nothing really stuck during the first two weeks, and I began to get nervous, but around week three I started to gain a loose grip on a few key Japanese phrases for travelers, and I felt noticeably less anxious about the impending interactions in another language.
ChatGPT is not necessarily designed with language acquisition in mind. 'This is a tool that has a number of different use cases, and it hasn't been optimized for language learning or translation yet,' says Shannon. The generalized nature of the chatbot's default settings can make early sessions frustratingly bland, but within a few conversations ChatGPT's memory feature picked up that I was planning a trip to Japan and wanted speaking practice.
ChatGPT's 'memory' entries are updated passively by the software during conversations, and they shape how the AI talks to you; you can review, adjust, or delete them in the account settings. A more active way to tune the tool for language learning is to open the 'custom instructions' settings and lay out your goals for the learning experience.
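For readers who would rather script that setup than click through the app, here is a minimal sketch of how similar tutoring goals could be expressed as a system prompt through OpenAI's Python SDK. The model name, prompt wording, and sample question are illustrative assumptions, not the ChatGPT app's actual custom-instructions mechanism.

```python
# A minimal sketch, assuming OpenAI's Python SDK: the "custom instructions"
# idea expressed as a system prompt. Model name, prompt text, and the sample
# question are illustrative, not what the ChatGPT app does internally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_GOALS = (
    "You are a patient Japanese tutor for a beginner preparing for a trip to Tokyo. "
    "Teach short, practical travel phrases, always include romaji, "
    "correct my mistakes directly, and quiz me before moving on."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": TUTOR_GOALS},
        {"role": "user", "content": "How do I politely ask for the check at a restaurant?"},
    ],
)
print(response.choices[0].message.content)
```

The principle is the same as in the app: stating your goals up front makes the model's answers far less generic.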
What frustrated me most was the incessant, unspecific guideline violation alerts during voice interactions, which ruined the flow of the conversation. ChatGPT would trigger a warning when I asked it to repeat a phrase multiple times, for example. (Extreme repetition is sometimes a method used by people hoping to break a generative AI tool's guardrails.) Shannon says OpenAI rolled out improvements related to what triggers a violation for Advanced Voice Mode and is looking to find a balance that prioritizes safety.
Also, be warned that Advanced Voice Mode can be a bit of a yes-man. If you don't request it to role-play as a tough-ass tutor, you may find the personality to be saccharine and annoying—I did. A handful of times ChatGPT congratulated me for doing a fabulous job after I definitely butchered a Japanese pronunciation. When I asked it to provide more detailed feedback to really teach me the language, the tool still wasn't perfect, but it was able to respond in a manner that fit my learning style better.
Compared with my past time using Duolingo, OpenAI's chatbot was more elastic, with a wider range of learning possibilities, whereas Duolingo's games were more habit-forming and structured. Are ChatGPT's language abilities an existential threat to Duolingo? Not according to Klinton Bicknell, Duolingo's head of AI. 'If you're motivated right now, you can go to ChatGPT and get it to teach you something, including a language,' he says. 'Duolingo's success is providing a fun experience that's engaging and rewarding.'
The company partnered with OpenAI in the past and is currently using its AI models to power a feature where users can have conversations with an animated character to practice speaking skills.
Putting ChatGPT to the Test in Tokyo
ChatGPT really became useful when I wanted to practice a phrase or two before saying it while out and about in Tokyo. Over and over, I whispered into my smartphone on the sidewalk, requesting reminders of how to ask for food recommendations or confess that I don't understand Japanese very well.
Using Advanced Voice Mode to translate back and forth live may be great for longer conversations in more intimate settings, but at a buzzy restaurant, a crowded shrine, or another common tourist spot in Japan, it's just easier to do asynchronous translations with the tool.
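That asynchronous workflow is easy to reproduce outside the app, too. Below is a minimal sketch of a one-off translation request using OpenAI's Python SDK; the model name, prompt wording, and example phrase are illustrative assumptions rather than a description of how the ChatGPT app works.

```python
# A minimal sketch of the asynchronous-translation workflow, assuming OpenAI's
# Python SDK rather than the ChatGPT app. Model name and prompt wording are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def translate_for_tourist(english_phrase: str) -> str:
    """Return a polite Japanese rendering of an English phrase, with romaji."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Translate the user's English into polite Japanese suitable "
                    "for a tourist, and include romaji in parentheses."
                ),
            },
            {"role": "user", "content": english_phrase},
        ],
    )
    return response.choices[0].message.content

print(translate_for_tourist("Could you recommend something popular from the menu?"))
```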
At a barbecue spot with an all-you-can-drink special and a mini-keg of lemon sour right under the table, the food came out but not the requested drinking mugs. I had a tough time requesting them. The waitress was patient with us as I spoke a few lines into ChatGPT and showed her the translation on my smartphone. She then explained I hadn't yet signed a waiver promising not to drink and drive and brought out a form to sign. A few minutes later, she returned with the mug. In this instance, OpenAI's chatbot was quite helpful, but I likely would have been just fine using the Google Translate app.
More times than I would like to admit, though, the phrases I thought I had down pat by practicing with ChatGPT ended up sloshing around in my head and embarrassing me. For example, while trying to get back to the hotel around 10 pm via the train, I got disoriented looking for the correct station exit. I was able to ask for help from one of the station staff members, but instead of saying 'thank you' (arigato gozaimasu) at the end, my tired mind blurted out the phrase for 'this one, please' (kore wo onegaishimasu) as I confidently strode away.
After a month of ChatGPT practice, did I really know Japanese? Of course not. But a few of the polite greetings and touristy phrases stuck well enough, most of the time at least, to navigate my way around Tokyo and feel like I could really enjoy the thrill of adventure in a new country.
Generative AI tools will keep getting better at helping language learners practice speaking as well as reading. Tomotaro Akizawa, an associate professor and program coordinator at Stanford's Inter-University Center for Japanese Language Studies in Yokohama, gives me an example. 'Students who have just completed the beginner level can now try to read challenging literary works from the Shōwa era by using AI for translations, explanations, and word lists,' he says.
If students eventually end up relying only on generative AI tools and go through their entire language-learning journey without a human instructor, the complexities of spoken language and communication may get flattened over time.
'The opportunity to personally experience the human elements embedded in the target language—such as emotions, thoughts, hesitations, or struggles—would be lost,' says Akizawa. 'Words spoken in conversation are not always as structured as those from a large language model.' AI may be more patient with you than a human tutor, but language learners risk losing those rough edges and the experience-based insights that come with them.
Have you tried to learn to do anything with AI? Would you feel confident using AI to help with translation in public? Let us know your experiences by emailing hello@wired.com or commenting below.