
The Sith of Silicon Valley: Ziz LaSota's AI cult left six dead – who is she?
This is the tale of Ziz LaSota, the transgender AI doomsday cultist who believed humanity would perish under artificial intelligence – unless she saved it first.
Born under northern lights, reborn in the shadow of AI
Ziz LaSota's early life was unremarkable: eldest of three, father a university instructor, homeschooled through lonely Alaskan winters. But teenage depression twisted her mind inward. Puberty felt like death. She wrote that she was 'horrified at being overwritten by a new self.'
Logic became her religion. LessWrong and the Rationalist forums her sacred texts.
At the University of Alaska, she read of 'x-risk' – existential risk – and decided AI was the harbinger of humanity's doom. She dropped out of graduate school and arrived in the Bay Area in 2016, ready to 'save the world.'
But Silicon Valley is a cruel temple for prophets. She was just another zealot in a city full of them.
The Sith emerges
She became Ziz: more than six feet tall, blond curls tumbling past her black cape, declaring her faith in the Sith – the dark side order of Star Wars.
She called Rationalists 'master Jedi.' The community tolerated her eccentricities. After all, they believed AI could destroy us all. Peter Thiel, Sam Altman, Sam Bankman-Fried – they had all passed through the Rationalist forge.
But Ziz took it further. Her blog listed categories of people to be 'airlocked.' She advocated radical veganism, sleep deprivation rituals, and violent moral tests. She recruited a cadre of mostly transgender and nonbinary tech aspirants from Google, Oracle, NASA – they called themselves the Zizians.
To them, Ziz was the messiah AI safety had awaited.
From cult to killing field
The timeline of blood is as absurd as it is tragic.
2019:
Zizians donned Guy Fawkes masks and robes to disrupt a Rationalist event in California. SWAT stormed the venue and arrests followed, though no guns were found. Police described the group's chanting as 'speaking in tongues.'
2022:
In Vallejo, California, landlord Curtis Lind was stabbed with knives and a samurai sword after demanding unpaid rent. He shot two Zizians in self-defence. One died. Ziz faked her death by falling off a boat, her obituary running in Alaska newspapers.
2023:
The parents of Michelle Zajko, a close Zizian, were found shot dead in Pennsylvania. The bullets matched Zajko's gun, but the evidence fell short of charges. Ziz was arrested alongside Zajko at a hotel, bailed out, and disappeared again.
2025:
Lind was stabbed to death before he could testify against the group. Days later, in Vermont, two Zizians fired at Border Patrol agents. One agent and one Zizian died in the shootout.
The philosophy that eats itself
Rationalism always prided itself on logic untainted by emotion. But Ziz turned logic into madness. Roko's Basilisk, the infamous thought experiment in which a future superintelligence tortures anyone who failed to help bring it into being, haunted her. She believed that future malevolent superintelligences would condemn her to eternal torture if she ever backed down.
Her solution: don't back down, escalate, airlock the doubters.
Eliezer Yudkowsky, the Rationalist guru who warned of AI extinction, called Ziz's descent 'sad,' writing that weirdness attracted weirder people, some of whom turned out to be 'genuinely crazy and in a contagious way among the susceptible.'
The Rationalist reckoning
Today, Ziz sits in a Maryland jail, awaiting trial on gun, drug, and obstruction charges. She is not accused of wielding the murder weapons herself, but prosecutors say she orchestrated the violence.
The Rationalist community is left with a bitter aftertaste. Was Ziz simply an unwell woman who found justification in AI apocalypse theory, or did Rationalism's own doomsday fetish birth her? Zvi Mowshowitz, a Rationalist blogger, asked if Ziz would have simply created another cult if AI philosophy hadn't ensnared her.
'The odds are, like, 55 percent,' he guessed. But perhaps the final lesson is simpler, as one Rationalist writer put it: even if the world is ending in five years, you cannot live like it is. That way lies madness, murder, and a black-caped prophetess clutching a samurai sword under flickering fluorescent lights.
Related Articles


Time of India – an hour ago
ChatGPT making us dumb & dumber, but we can still come out wiser
Claude Shannon, one of the fathers of AI, once wrote rather disparagingly: 'I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines.' As we enter the age of AI — arguably the most powerful technology of our times — many of us fear that this prophecy is coming true. Powerful AI models like ChatGPT can create complex essays, poetry and pictures; Google's Veo stitches together cinema-quality videos; Deep Research agents produce research reports at the drop of a prompt. Our innate human abilities of thinking, creating, and reasoning now seem to be duplicated, and sometimes surpassed, by AI.

This seemed to be confirmed by a recent — and quite disturbing — MIT Media Lab study, 'Your Brain on ChatGPT'. It suggested that while AI tools like ChatGPT help us write faster, they may be making our minds slower. In a meticulously executed four-month experiment with 54 participants, researchers found that those who used ChatGPT for essay writing exhibited up to 55% lower brain activity, as measured by EEG signals, than those who wrote without assistance. If that were not troubling enough, in a later session where ChatGPT users were asked to write unaided, their brains remained less engaged than those of people who wrote without AI ('brain-only' participants, as the study quaintly labelled them). Memory also suffered: only 20% could recall what they had written, and 16% even denied authorship of their own text. The message seemed clear: outsourcing thinking to machines may be efficient, but it risks undermining our capacity for deep thought, retention, and ownership of ideas.

Technology has always changed us, and we have seen this story many times before. There was a time when you remembered everyone's phone numbers; now you can barely recall your family's, if that. You remembered roads, lanes and routes; if you did not, you consulted a paper map or asked someone. Today, Google and other map apps do that work for us. Facebook reminds us of people's birthdays; email replies suggest themselves, sparing us even that little effort of thinking. When autonomous cars arrive, will we even remember how to drive, or will we just loll around in our seats as they take us to our destination?

Jonathan Haidt, in 'The Anxious Generation', points out how smartphones radically reshaped childhood. Unstructured outdoor play gave way to scrolling, and social bonds turned into notifications. Teen anxiety, loneliness, and attention deficits all surged. From calculators diminishing our mental arithmetic to GPS weakening our spatial memory, every tool we invent alters us — subtly or drastically.

'Do we shape our tools, or do our tools shape us?' is a question commonly misattributed to Marshall McLuhan, but it is hauntingly relevant in the age of AI. If we let machines do the thinking, what happens to our human capacity to think, reflect, reason, and learn? This is especially troubling for children, and more so in India. For one, India has the highest usage of ChatGPT globally. Most of it is by children and young adults, who are turning into passive consumers of AI-generated knowledge. Imagine a 16-year-old using ChatGPT to write a history essay. The output might be near-perfect, but what has she actually learned? The MIT study suggests very little. Without effortful recall or critical thinking, she might not retain the concepts, nor build the muscle of articulation.
With exams still based on memory and original expression, and careers requiring problem-solving, this is a silent but real risk. The real question, however, is not whether the study is correct or exaggerated, or whether AI is making us dumber, but what we can do about it. We definitely need some guardrails and precautions, and we need to start building them now. I believe we should teach ourselves and our children to:

Ask the right questions: As answers become commodities, asking the right questions will be the differentiator. We need to relook at our education system and pedagogy and bring back this unique human skill of curiosity. Intelligence is not just about answers; it is about the courage to think, to doubt, and to create.

Invert classwork and homework: Reserve classroom time for 'brain-only' activities like journaling, debates, and mental maths. Homework can be about using AI tools to learn what will be discussed in class the next day.

Set AI usage codes: Just as schools restrict smartphone use, they should set clear boundaries for when and how AI can be used.

Build teacher-AI synergy: Train educators to use AI as a co-teacher, not a crutch. Think of AI as Augmented Intelligence, not an alternative intelligence.

Above all, make everyone AI literate: Much like reading, writing, and arithmetic were foundational, knowing how to use AI wisely is the new essential skill of our time. AI literacy is more than just knowing prompts. It means understanding when to use AI and when not to; how to verify AI output for accuracy, bias, and logic; how to collaborate with AI without losing your own voice; and how to maintain cognitive and ethical agency in the age of intelligent machines. Just as we once taught 'reading, writing, adding, multiplying', we must now teach 'thinking, prompting, questioning, verifying'.

History shows that humans adapt. The printing press did not destroy memory; calculators did not end arithmetic; smartphones did not abolish communication. We evolved with them — sometimes clumsily, but always creatively. Today, with AI, the challenge is deeper because it imitates human cognition. In fact, as AI challenges us with higher levels of creativity and cognition, human intelligence and connection will become even more prized. Take chess: a computer defeated Garry Kasparov back in 1997, and since then computers have been able to beat any chess champion a hundred times out of a hundred. Yet human 'brains-only' chess has become more popular than ever, as millions follow D Gukesh's encounters with Magnus Carlsen.

So, if we cultivate AI literacy and have the right guardrails in place, and if we teach ourselves and our children to think with AI but not through it, we can come out wiser, not weaker.

Disclaimer: Views expressed above are the author's own.


India.com – 3 hours ago
Rs 86000000 in salary: Google, Meta, and OpenAI ready to offer huge money for people with talent in...
Top Tier Talent Salary: In order to hire special talent, major companies around the world are changing their salary structures. Tech giant Google has made major changes to the way it pays salaries in order to attract talented employees, moves it considers necessary to stay ahead in the ongoing competition in the field of Artificial Intelligence. Not only Google and Meta but also OpenAI is offering huge salary packages to talented employees.

As per a report by Business Insider, citing US Department of Labor documents, software engineers at Google can get a base salary of USD 340,000 (approx Rs 3 crore). On top of the base salary, the company also gives shares and bonuses, further increasing total income. Notably, positions such as product managers, AI researchers and other technical roles are also getting impressive salary packages.

Meta Is Offering Huge Salaries
Google is currently facing very tough competition from other tech giants such as Meta and OpenAI, which are also luring top AI talent with huge salaries. Meta has invested heavily in AI and is now hiring AI researchers and engineers to power its Generative AI and Reality Labs divisions. Meta's significant investment in advanced AI in 2023 is reflected in high salaries for its senior AI researchers, ranging from USD 600,000 to USD 1 million per annum, including bonuses and stock options. OpenAI, backed by Microsoft, also offers competitive packages for senior research engineers, ranging from USD 200,000 to USD 370,000 in base salary and reaching USD 800,000 to USD 1 million with equity and profit-sharing incentives.

Why Is There A Salary Increase?
The tech giants are increasing salary packages because they want to keep employees who are capable of enhancing large language models, improving generative AI tools, and developing new technologies. According to experts, these high salary packages are not just for new hires but also serve to retain good employees.


Time of India – 4 hours ago
How Microsoft 'killed' OpenAI's $3 billion acquisition of WindSurf, making Google the 'big winner'
OpenAI's $3 billion agreement to buy the AI coding startup Windsurf has fallen apart, ending a highly anticipated tie-up between the artificial intelligence powerhouse and the coding startup. OpenAI had reportedly been close to finalizing the deal to acquire Windsurf, formally known as Exafunction Inc., with a signed letter of intent and investor payout agreements (waterfall agreements) already in place. The acquisition was even nearing an announcement in early May, according to sources familiar with the discussions. However, an OpenAI spokesperson has confirmed that the exclusivity period for its offer has lapsed, leaving Windsurf free to explore other opportunities.

In a swift turn of events, Alphabet Inc's Google has stepped in, striking a deal worth approximately $2.4 billion to acquire top talent and licensing rights from Windsurf. The move comes hot on the heels of the collapsed OpenAI acquisition. Google announced on Friday, July 11, that it is bringing Windsurf Chief Executive Officer Varun Mohan and co-founder Douglas Chen, along with a small team of staffers, into its DeepMind artificial intelligence unit. While the company declined to disclose the specific financial terms, it clarified that the agreement does not involve taking an equity stake in Windsurf itself. This development marks a significant strategic gain for Google in the competitive AI landscape, securing valuable expertise and technology that had been hotly contested by its rivals.

Microsoft tensions behind OpenAI-Windsurf deal collapse
A significant factor in the unraveling of the OpenAI-Windsurf deal appears to be friction with Microsoft Corp., a major investor in and key partner of OpenAI. According to a Bloomberg report, sources close to the matter indicate that Windsurf was hesitant to grant Microsoft access to its intellectual property. This condition became a sticking point that OpenAI was reportedly unable to resolve with Microsoft, whose existing agreement with OpenAI grants the software giant access to the AI startup's technology. The issue was reportedly one of several points of contention in ongoing discussions between Microsoft and OpenAI over OpenAI's restructuring into a commercial entity.

What Windsurf does
Founded in 2021, Windsurf is a prominent player in the burgeoning field of AI-driven coding assistants. These systems are designed to automate and streamline coding tasks, including generating code from natural language prompts. The startup has raised over $200 million in venture capital funding from investors like Greenoaks Capital Partners and AIX Ventures, according to PitchBook data.