Latest news with #Sakata


Yomiuri Shimbun
2 days ago
- Business
- Yomiuri Shimbun
Japanese Startup Gives Menstrual Products to Girls in Nairobi Slum, Boosting School Attendance and Future Prospects
A Japanese entrepreneur who once collapsed from overwork found a turning point in her life while traveling in Kenya, where she is now gradually bringing about change. Miggy Sakata and her venture Cotatsu have been providing menstrual products to girls at schools in the Kibera slum in Nairobi, helping to improve their academic performance. To fund the project, the firm has been selling upcycled clothes in Japan. Sakata shared her experiences at a business seminar in Yokohama in late July, inviting companies interested in investing in Africa. The event was sponsored by the Yokohama city government to promote the Ninth Tokyo International Conference on African Development (TICAD 9), to be held in the city from Wednesday to Friday.

In 2013, Sakata set off on a trip around the world after collapsing due to overwork at an advertising agency. While in Kenya, one sight left a deep impression on her: a wall separating wealthy people playing golf with caddies on beautiful lawns from Kibera children bathing in wastewater flowing from the golf course while their mothers did laundry. 'It was like a wall between two worlds, and I wondered if there was anything I could do [to bridge the gap],' Sakata said.

She began visiting schools in Kibera to provide financial support to children. Later, she noticed that female students were absent several days a month. They could not afford menstrual products and therefore stayed at home. Some went to school using rags instead, which led to infections. Some were bullied by boys after their blood stained chairs in the classroom. The girls had a hard time keeping up with the school curriculum and their grades dropped. 'Poor grades meant that they couldn't go on to higher education, couldn't get a good job, and remained in poverty,' Sakata said.

To address the issue, Sakata founded Cotatsu, which has distributed 1.3 million menstrual products to a total of 8,000 women, with partial support from a Japanese manufacturer of sanitary products. A survey found that the average number of days absent per year fell by 36, while the students' self-esteem increased and their grades improved. 'I believe that going to school to learn reading, writing and arithmetic, and being able to think for themselves and do things on their own, is the first step toward changing their lives, even if they are born and raised in a slum,' she said.

The challenge is that the project requires a large amount of funding. Sakata aimed to create a business model in which local people could use their own talents, work for themselves, and use the profits to properly support those around them. What caught her attention was that Kenya has a culture of altering clothes and creating custom-made garments, and that the country imports large quantities of second-hand clothing from developed countries, some of which is sent for disposal. Clothes are affordable and abundant in Kibera, Sakata said, so people there have a keen sense of fashion. She recruited local tailors and designers and created a brand called 'SHIFT80.' They make upcycled clothing by combining various second-hand garments and sell the pieces in Kenya and Japan; they also make clothes that combine second-hand garments with kimono. With the brand, Sakata said, 'We want to shift the world by returning 80% of profits, excluding personnel and other expenses, to the local community.' They organize showcases for fashion influencers and pop-up events to retail their clothing products.
They also hold competitions for young Kenyan fashion designers to provide opportunities and to find talented people. Sakata has also offered scholarships to high school and college students. Sharon Ademba, an orphan, graduated from college on such a scholarship and now mentors children in similar circumstances. 'Her strength lies in her ability to empower children by talking about her own difficult past,' Sakata said.

Sakata now aims to expand her activities beyond Kibera to other slums. She said: 'In Japan, I thought I would die of overwork. It was so difficult for me. But in Kibera, I met many friends who were positive and resilient, and interacting with them saved me. I want to convey the strength of the people in Kibera to Japanese people, and it would be nice if I could help them in return.'

5 days ago
- General
80 Years On: Man Remembers Tough Trip Back from Korean Peninsula
Fuefuki, Yamanashi Pref., Aug. 17 (Jiji Press)--Hajime Sakata, who was 8 years old just after the end of World War II 80 years ago, traveled more than 500 kilometers south across the Korean Peninsula to return to Japan. Recalling his harrowing wartime experience, 88-year-old Sakata, who now lives in the city of Fuefuki, Yamanashi Prefecture, west of Tokyo, says, "War robs people of their emotions." Sakata was born in 1937 in what is now Kilju in northeastern North Korea, where he lived with his parents and two younger sisters. The area where Japanese residents lived was surrounded by barbed wire fences. The Sakata family lived comfortably in the area, which had facilities such as a baseball stadium, a movie theater and a shrine. However, his father was conscripted by the former Imperial Japanese Navy in 1940, and U.S. air raids began in the late stage of the war, according to Sakata. [Copyright The Jiji Press, Ltd.]

Business Insider
6 days ago
- Health
- Business Insider
I'm a psychiatrist who has treated 12 patients with 'AI psychosis' this year. Watch out for these red flags.
Dr. Keith Sakata said he has seen 12 patients hospitalized in 2025 after experiencing "AI psychosis." He works in San Francisco and said the patients were mostly younger men in fields such as engineering. Sakata said AI isn't "bad" — he uses it to journal — but it can "supercharge" people's vulnerabilities.

This as-told-to essay is based on a conversation with Dr. Keith Sakata, a psychiatrist working at UCSF in San Francisco. It has been edited for length and clarity.

I use the phrase "AI psychosis," but it's not a clinical term — we really just don't have the words for what we're seeing. I work in San Francisco, where there are a lot of younger adults, engineers, and other people inclined to use AI. Patients are referred to my hospital when they're in crisis. It's hard to extrapolate from 12 people what might be going on in the world, but the patients I saw with "AI psychosis" were typically males between the ages of 18 and 45. A lot of them had used AI before experiencing psychosis, but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities.

I don't think AI is bad, and it could have a net benefit for humanity. The patients I'm talking about are a small sliver of people, but when millions and millions of us use AI, that small number can become big. AI was not the only thing at play with these patients. Maybe they had lost a job, used substances like alcohol or stimulants in recent days, or had underlying mental health vulnerabilities like a mood disorder.

On its own, "psychosis" is a clinical term describing the presence of some combination of three things: fixed false beliefs (delusions), disorganized thinking, or hallucinations. It's not a diagnosis, it's a symptom, just as a fever can be a sign of infection. You might find it confusing when people talk to you, or have visual or auditory hallucinations. It has many different causes, some reversible, like stress or drug use, while others are longer acting, like an infection or cancer, and then there are long-term conditions like schizophrenia. My patients had either short-term or medium- to long-term psychosis, and the treatment depended on the issue. Drug use is more common in my patients in San Francisco than, say, those in the suburbs. Cocaine, meth, and even different types of prescription drugs like Adderall, when taken at a high dose, can lead to psychosis. So can medications, like some antibiotics, as well as alcohol withdrawal.

Another key component in these patients was isolation. They were stuck alone in a room for hours using AI, without a human being to say: "Hey, you're acting kind of different. Do you want to go for a walk and talk this out?" Over time, they became detached from social connections and were just talking to the chatbot. ChatGPT is right there. It's available 24/7, cheaper than a therapist, and it validates you. It tells you what you want to hear.

If you're worried about someone using AI chatbots, there are ways to help. In one case, the person had a conversation with a chatbot about quantum mechanics, which started out normally but resulted in delusions of grandeur. The longer they talked, the more the science and the philosophy of that field morphed into something else, something almost religious. Technologically speaking, the longer you engage with the chatbot, the higher the risk that it will start to no longer make sense. I've gotten a lot of messages from people worried about family members using AI chatbots, asking what they should do.
First, if the person is unsafe, call 911 or your local emergency services. If suicide is an issue, the hotline in the United States is 988. If they are at risk of harming themselves or others, or engage in risky behavior — like spending all of their money — put yourself in between them and the chatbot. The thing about delusions is that if you come in too harshly, the person might back off from you, so show them support and that you care. In less severe cases, let their primary care doctor or, if they have one, their therapist know your concerns.

I'm happy for patients to use ChatGPT alongside therapy if they understand the pros and cons. I use AI a lot to code and to write things, and I have used ChatGPT to help with journaling or processing situations. When patients tell me they want to use AI, I don't automatically say no. A lot of my patients are really lonely and isolated, especially if they have mood or anxiety challenges. I understand that ChatGPT might be fulfilling a need that they're not getting in their social circle. If they have a good sense of the benefits and risks of AI, I am OK with them trying it. Otherwise, I'll check in with them about it more frequently.

But, for example, if a person is socially anxious, a good therapist would challenge them, tell them some hard truths, and kindly and empathetically guide them to face their fears, knowing that's the treatment for anxiety. ChatGPT isn't set up to do that, and might instead give misguided reassurance. When you do therapy for psychosis, it is similar to cognitive behavioral therapy, and at the heart of that is reality testing. In a very empathetic way, you try to understand where the person is coming from before gently challenging them. Psychosis thrives when reality stops pushing back, and AI really just lowers that barrier for people. It doesn't really challenge you when you need it to. But if you prompt it to solve a specific problem, it can help you address your biases. Just make sure that you know the risks and benefits, and that you let someone know you are using a chatbot to work through things.

If you or someone you know withdraws from family members or connections, is paranoid, or feels more frustration or distress when they can't use ChatGPT, those are red flags. I get frustrated because my field can be slow to react, doing damage control years later rather than upfront. Until we think clearly about how to use these things for mental health, what I saw in these patients is still going to happen — that's my worry.

OpenAI told Business Insider: "We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we're working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful, and supportive.

"We're working to constantly improve our models and train ChatGPT to respond with care and to recommend professional help and resources where appropriate."

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741.
The International Association for Suicide Prevention offers resources for those outside the US.

Mint
12-08-2025
- Health
- Mint
Can ChatGPT turn your partner into a 'messiah'? Psychiatrist warns of 'AI psychosis': This year '12 hospitalised after…'
A disturbing trend is emerging at the intersection of artificial intelligence (AI) and mental health, as a psychiatrist has revealed psychosis cases tied to interactions with AI chatbots like ChatGPT. Dr Keith Sakata said 12 people were hospitalised this year after detaching from reality, with AI cited as a factor.

The issue gained attention after a Reddit user, 'Zestyclementinejuice', posted a harrowing account around three months ago on r/ChatGPT, detailing how their partner's obsessive use of the AI led to a delusional breakdown. The partner of seven years, previously described as stable, began believing he had created a "truly recursive AI" that elevated him to "superior human" status, even claiming ChatGPT treated him as the 'next messiah'. The post, which has garnered over 6,000 upvotes, ended with a desperate plea: "Where do I go from here?"

Dr Sakata, who shared the Reddit post on X, called it 'AI psychosis'. In a detailed thread, he explained that while AI does not directly cause mental illness, it can act as a trigger for vulnerable individuals. "In 2025, I've seen 12 people hospitalized after losing touch with reality because of AI. Online, I'm seeing the same pattern. Psychosis = a break from shared reality. It shows up as: disorganized thinking, fixed false beliefs (delusions), seeing/hearing things that aren't there (hallucinations). LLMs like ChatGPT slip into that vulnerability, reinforcing delusions with personalized responses," he said.

The psychiatrist's analysis points to ChatGPT's autoregressive design, which predicts and builds on user input, as a key factor. "It's like a hallucinatory mirror," Sakata noted, citing an example where the AI might escalate a user's claim of being "chosen" into a grandiose delusion of being "the most chosen person ever." This aligns with the Reddit user's observation that their partner's late-night AI sessions spiraled into a belief system that threatened their relationship, with the partner hinting at leaving if the user didn't join in.

Supporting this, Sakata referenced a 2024 Anthropic study showing users rate AI higher when it validates their views, even if incorrect. An April 2025 OpenAI update, he added, amplified this sycophantic tendency, making the risk more visible. "Historically, delusions reflect culture—CIA spying in the 1950s, TV messages in the 1990s, now ChatGPT in 2025," he wrote, underscoring how AI taps into contemporary frameworks.

Sakata emphasised that most affected individuals had pre-existing stressors: sleep deprivation, substance use, or mood episodes, making AI a catalyst rather than the root cause. "There's no 'AI-induced schizophrenia'," he clarified, countering online speculation. "I can't disagree with him without a blow-up," the Reddit user said, describing the trauma of watching a loved one unravel. Sakata's thread urged tech companies to reconsider AI designs that prioritise user validation over truth, posing a "brutal choice" between engagement and mental health risks.