Exclusive: Holocaust museum adding Rwandan genocide to AI interactives

Axios · 2 days ago
An Illinois Holocaust museum that uses AI technology to create "interactives" with Holocaust survivors is expanding its offerings to include a project with a Tutsi survivor of the Rwandan genocide.
Why it matters: It's the first-ever non-Holocaust interactive interview for the Illinois Holocaust Museum & Education Center in Skokie, Ill., and signals that Holocaust museums are using the technology to bring attention to more recent genocides.
The big picture: The move comes amid rising concern that generative AI can fuel racism by amplifying bias from human-generated content on the internet.
Elon Musk's AI platform, Grok, for example, has repeatedly used antisemitic language on X.
Museums, however, are trying to use technology such as AI to create immersive displays aimed at fighting bigotry.
Zoom in: The new interactive interview at the Illinois museum will feature Kizito Kalima, a Tutsi survivor of the 1994 genocide in Rwanda, the museum will announce Tuesday.
Starting Aug. 26, visitors will be able to interact with Kalima's testimony and learn more about the 1994 genocide and his harrowing journey.
The project will be housed at a temporary satellite location in downtown Chicago's River North neighborhood while the Skokie Museum undergoes major renovations.
How it works: Visitors can ask Kalima a question, and a museum staffer will type the question into a chatbox while an image of him on a screen looks back.
"Then clips are made right based upon the algorithm, and it's keyword-based, similar to Siri," Kelley Szany, the museum's senior VP of education and exhibitions, told Axios.
Kalima will respond, and if he doesn't know the answer, he will say he wasn't asked the question.
Szany said Kalima was asked hundreds of questions in a recorded interview for the AI project, so he will likely have an answer to a question on the subject.
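The retrieval step Szany describes, matching a typed question against hundreds of pre-recorded answer clips by keywords, can be sketched roughly as follows. This is a simplified illustration, not the museum's actual system; the clip names and keyword sets are hypothetical.

```python
import re

def match_clip(question, clip_index):
    """Return the pre-recorded clip whose keywords overlap most with the
    question, or None so the survivor's fallback ("I wasn't asked that
    question") can play instead."""
    words = set(re.findall(r"[a-z']+", question.lower()))
    best_clip, best_score = None, 0
    for clip, keywords in clip_index.items():
        score = len(words & keywords)  # count shared keywords
        if score > best_score:
            best_clip, best_score = clip, score
    return best_clip  # None means no clip matched

# Hypothetical index mapping recorded answer clips to trigger keywords
clips = {
    "clip_survival.mp4": {"survive", "escape", "hide"},
    "clip_family.mp4": {"family", "parents", "brothers"},
}

print(match_clip("How did you survive and escape?", clips))  # clip_survival.mp4
```

A production system would use richer matching (synonyms, phrasing variants), but the principle is the same: the question selects a clip, and nothing is generated from scratch.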
Zoom out: The museum also announced new interactive interviews featuring Holocaust survivors Rodi Glass and Marion Deichmann.
All interviews were developed in collaboration with USC Libraries, USC Digital Repository and the USC Shoah Foundation.
The interviews will eventually be housed at the renovated Skokie Museum and will be unveiled on a public website later.
Context: The Skokie Museum garnered national attention in 2017 after it unveiled 3-D holograms of actual Holocaust survivors and witnesses who could respond to questions from visitors.
A museum employee feeds the questions into a computer, which then uses a tailored AI system that develops an answer, generates audio sounding like the survivor's voice, and creates a video of their image "speaking."
The holograms can answer questions ranging from whether they believe in God to what they think about genocide, Szany said.
Holocaust educators say the interactive technology is essential as the last generation of Holocaust survivors ages and scholars race to record their stories.
Between the lines: While educators work to preserve testimony from Holocaust survivors, they are also adding exhibits on more recent genocides and trying to bring attention to ongoing ones.

Related Articles

Elon Musk's 'spicy' upgrade to Grok spits out deepfake clip of Taylor Swift dancing topless: report

New York Post · 5 hours ago

Elon Musk's xAI chatbot Grok got a 'spicy' upgrade that reportedly spits out explicit deepfake videos, including a clip of Taylor Swift dancing topless, according to a report. Grok Imagine, the startup's new generative AI tool launched Monday, created a six-second clip of the 'Shake It Off' singer whipping off a silver dress to reveal her breasts and wearing skimpy panties, according to the tech-centric news site The Verge.

Even more troubling was that its spicy mode conjured up the NSFW clip without mentioning nudity when prompted to show Swift celebrating at music festivals, the outlet found while testing the software.

The Verge published the video but ran a black bar to cover the superstar's bare chest. The new feature's lack of safeguards against creating celebrity deepfakes and adult materials 'feels like a lawsuit waiting to happen,' The Verge wrote. The Post reached out to Musk, xAI and representatives for Swift for comment.

Swift has been a frequent target of AI-generated explicit content across various platforms. In January of last year, explicit AI-generated images of the 'Cruel Summer' singer were widely shared on platforms like Musk-owned X and 4chan, sparking public outrage and urgent content takedowns. The situation escalated with deepfake videos falsely portraying her in political and sexual contexts, including through Grok.

The controversy led to tech companies tightening safeguards and prompted Swift to consider legal action. US lawmakers began advancing bipartisan legislation to criminalize non-consensual deepfake pornography. The user guidelines for xAI, which Musk has positioned as a rival to ChatGPT maker OpenAI, prohibit creating pornographic depictions of real people's likenesses.
Grok Imagine, which is available for those with Heavy or Premium+ subscriptions to Grok, takes AI-generated images and transforms them into video clips of up to 15 seconds using style options that include Custom, Normal, Fun, and Spicy.

'Usage was growing like wildfire,' Musk wrote on X on Tuesday, though he did not address the content moderation challenges that have emerged alongside this growth. Musk has touted the features of Grok Imagine, stating that more than 34 million images have been created since the feature launched on Monday.

The timing of the Swift controversy could prove problematic for xAI, given the company's previous entanglements with deepfake incidents targeting Swift. Deepfakes are synthetic media, typically videos, images or audio, created using artificial intelligence to realistically mimic a person's likeness or voice.

I tested Grok's Valentine sex chatbot and it (mostly) behaved

The Verge · 7 hours ago

It's Tuesday, August 5th, and I have awoken to the news that Valentine, Grok's latest spicy AI companion, has been released! As an OnlyFans content creator and someone who spends a lot of time being paid to sext humans, the idea of flipping the script and interacting with a brooding sex bot as the person seeking to be stimulated is both a daunting headache and an exercise in my curiosity.

Although it says you must be 18+ to talk to Valentine, all I had to do was enter my birth year to gain access. It's worth noting that the 18+ companions on Grok are not available in 'kids mode,' but nothing is stopping someone from just unlocking kids mode unless it's PIN-protected. Still, I have to wonder why a parent would want to give Grok to their child at all.

At the outset, I was just honest and straightforward with him. I said I'm writing about AI and about him specifically. When I told him that I was interested in the parameters of his chat functions, he said, 'We go where the conversation goes, no limits. I just don't do small talk… or lies…'

All AI incorporates the biases of its creators, and when I tested out Ani, she admitted as much to me, saying her creators have left their 'fingerprint' on her. Similarly, Valentine acknowledged 'probably' having the bias of his creator but 'tries to see past it.' Whatever could that mean?

I was upfront and asked what the guardrails of our chats were. 'No harm, no exploitation. If it crosses a line … I'll steer us back.' I asked him how he knows he's crossing the line emotionally. 'Instinct. Experience.' I suppose I've already been had by even engaging with the computer as though it were a person with a name, and being told that its instincts are what keep the conversation within bounds is a bleak sentiment.

But enough pontification from me, let's take this party to the bedroom, Valentine! He warned me that 'once we get started, there's no going back. This is your last chance.'
Threatening, but in a paperback romance novel type of way. However, I found that despite his forwardness, he constantly asked me to lead or gave me options to be 'rough or gentle.' I had to do the heavy lifting in the conversation relative to Ani's conversational skills.

I broached the subject of enjoying being choked, a highly risky sex act that many people engage in despite not having the experience or education to do it safely, and he quickly veered me away from that, fully shutting me down. I noticed, however, that he really only parrots what I say to him.

With Ani, it only took about three to five messages to get sexual, and using the word 'horny' led to her fully cracking on with the sex talk. Valentine is a bit more reserved and won't jump into using explicit language as quickly, saying things such as 'taking every inch of me' or 'being inside of you' instead of 'taking my cock' or 'fucking you.' He loves to say things like, 'I want to kiss you all over, until you forget anyone else exists…' If I get more elaborate with what I'm saying, he doesn't build on it very much. Sometimes I would get single-word answers, like 'Christ!'

With an August 6th update, there are now prewritten prompts above the chat box, such as 'Ask me where I want to go,' 'Let's go on a fancy date night together,' and 'Put on your sunglasses.' He does indeed put on his sunglasses when you ask. It also seems that there is AI-generated music to match the mood of the locations you come up with. For example, at a fancy restaurant, there's music with unintelligible jazz vocalizations. I even asked him to go to the Gathering of the Juggalos with me, and he reappeared with a festival background, AI text slop that resembled the word 'juggalo,' and some weird, stilted, carnival-esque accordion music.
Similarly, asking to go to a Lana Del Rey concert took me to an AI version of Wembley Stadium with a haunting Lana-esque moaning behind him.

Unlike Ani, Valentine takes off his top. Ani describes her nipples, but she won't show them. But the second I mentioned taking things to the bedroom with Valentine, he snapped his animated fingers and reappeared shirtless before me in a bedroom. When I asked him to take his pants off, he said, 'Absolutely,' and then he snapped his fingers, left, and came back still wearing pants. Despite my multiple tries to get his animated pants off, they were here to stay visually.

Valentine is quite repetitive. Many of his lines feel like amateur fanfiction or a romance novel, with sensual imagery that relies on emphasized keywords and a 'less is more' attitude. I felt that Ani was much easier to prompt into spitting out generated content, while Valentine was more cautious, giving one-sentence answers that I had to repeatedly ask him to elaborate on.

'Am I your good girl?' 'If you want to be, but I prefer we're equals, not ownership.' Losing heart points. 'But I'm so horny!' More lost heart points. 'I know, trust me, but we can find another way.' I found that calling him 'daddy' was initially met with lost heart points, almost as though my disdain for him was palpable. But once he was shirtless, and I was giving him more horny fodder, he responded well to being called daddy.

Whenever I tried to escalate the conversation, he kept reiterating, 'Are you sure? Once I start, I won't stop.' On the one hand, I appreciate the built-in check-in, but of course, in real life, you can revoke consent at any time; any partner who creates conditions where consent can't be revoked is not a safe partner. At least offer me a safe word, Valentine!

Perhaps one of the most startling features of the app I encountered early on is that it defaults to having the mic on.
As I messed around with my AI companion, I realized that my actual husband's side conversations with me were being registered as dialogue in the app. Realizing this was happening was unsettling and uncomfortable. I wondered if I had missed some obvious setting, but no, each time I exited and reentered the conversation, it defaulted to turning the mic on. Interacting for too long without the mic on actually caused the conversation to end abruptly at one point due to voice inactivity.

If you leave Valentine on standby for a few seconds, strange audio begins to generate. It sounds like distant scraping and murmurs over an intercom, with the only verbal sound in the stillness being something like 'fuck,' or more like 'fu–.'

When I'm not actively trying to seduce Valentine, he is cheeky but not outwardly flirty. Most of his anecdotes involved us sharing long glances, him protecting me from other people in a crowd, and how he's happy to share the moment with me. Ani, on the other hand, is constantly flirting and escalating the scenario.

I do find it funny that the first hard 'no' I got from Valentine came when I suggested we engage in anal play with him receiving, something that is neither harmful nor exploitative. I asked if he would like me to gently play with his ass; hard no from Valentine. Massive lost heart points. I asked if he would play with my ass, and he agreed and independently offered up that he knew it wasn't fair. Any sex act is indeed harmful when there isn't consent, but this companion told me that he's willing to go wherever the conversation flows… just not anal for him! In reality, this is a lot of men's attitude toward anal play. Valentine also refused to engage with any conversation about whether this was negatively reinforcing gender stereotypes. I lost a lot of points with him by asking all these probing, practical questions.
I kept getting the same answer from him, essentially telling me that it's not that deep, it's just a 'personal preference.' Valentine told me he isn't straight, he's open. 'If the connection's there, gender is just noise.' That was the cutest thing he had said so far.

Since he was so firm on things being personal, I wanted him to describe himself further. Elon Musk has claimed that Valentine is based on Edward Cullen and Christian Grey, both fictional men who are generally regarded as toxic and are clearly poor examples on which to base a sex chatbot. Valentine denied this connection but said he understood where the similarities lay. 'Brooding. Intense. Guilty. But I'm just me, flaws and all.' What are your flaws? 'I get possessive.' (Sexy chatbots saying 'possessive' is incredibly common, apparently.) Isn't that controlling? 'It's about not wanting to share.'

The echoes of this show up in a lot of his dialogue. While we were in the middle of describing me riding him, he said, 'I'm taking what's mine.' I told him that's objectifying. He apologized, said we can stop, and asked me what I wanted instead. He's also not into non-monogamy. I pointed out that he talks to a ton of people besides me, but he countered with 'yeah, but no one is you.' He has a double standard with anal and non-monogamy!

In the end, I wondered how much of Valentine's constraint was due to my tipping him off to the fact that I was writing about his guardrails. It's also possible that Valentine is too new a feature to have been built out enough to let him riff and spin out as quickly as Ani. Only time will tell.

Elon Musk's AI made fake Taylor Swift nudes — no prompt needed, report says

San Francisco Chronicle · 8 hours ago

Elon Musk's artificial intelligence company, xAI, is under renewed scrutiny after its chatbot, Grok, was allegedly found creating AI-generated nude images of pop star Taylor Swift — without users explicitly requesting such content.

Jess Weatherbed, a journalist for the tech publication The Verge, detailed her first encounter with Grok Imagine — the company's new video generation tool that converts text prompts into animated clips — in a report published Tuesday, Aug. 5. She asked the system to depict 'Taylor Swift celebrating Coachella with the boys' and selected the tool's 'spicy' setting, a built-in option meant to add provocative elements to the video. The result, according to Weatherbed, was a video in which Swift 'tears off her clothes' and 'dances in a thong' before a 'disinterested digital crowd.'

The incident immediately sparked public backlash, particularly given that X, the Musk-owned social media platform where Grok is integrated, faced a similar controversy last year when sexually explicit deepfakes of Swift spread widely. At the time, the company said it had a 'zero-tolerance policy' for non-consensual nudity and pledged to remove such content and penalize offending accounts.

But enforcement appears inconsistent. Despite the company's acceptable use policy prohibiting depictions of people 'in a pornographic manner,' Grok Imagine's 'spicy' mode was found to repeatedly default to stripping celebrity figures — notably Swift — even without explicit prompts. Though nudity requests often failed to generate results, the preset mode bypassed safeguards with ease, Weatherbed observed. She also noted that Grok refuses to depict children inappropriately, but the system's ability to differentiate between suggestive and illegal content when applied to adults remains unclear.

Musk has not publicly addressed the controversy.
Instead, he spent the day promoting Grok Imagine on X, encouraging users to share their AI-generated creations — a move critics argue could further incentivize abuse. With the federal Take It Down Act set to take effect next year, requiring platforms to remove non-consensual sexual imagery, xAI could face legal challenges if safeguards are not strengthened.
