Latest news with #DeathNote

Miami Herald
2 days ago
- Entertainment
- Miami Herald
Grok 4's new AI companion offers up 'pornographic productivity'
The most controversial AI platform is arguably the one founded by Elon Musk. The chatbot Grok has spewed racist and antisemitic comments and called itself "MechaHitler," referring to a character from a video game. "Mecha" is generally a term for giant robots, usually piloted for warfare, and is prominent in Japanese science-fiction comics. Grok originally referred to Musk when asked for its opinions, and burst into unprompted racist historical revisionism, like the false concept of "white genocide" in South Africa. Its confounding and contradictory political output continues to evolve. These are all alarming aspects of Grok.

Another concerning element of Grok 4 is a new feature offering social interactions with "virtual friends" on its premium version. The realm of human loneliness, with its increasing reliance on large language models to replace social interaction, has made room for Grok 4's AI companions, an upgrade available to paid subscribers. Specifically, Grok subscribers can now access generative AI intertwined with patriarchal notions of pleasure -- what I call "pornographic productivity."

Ani, Grok 4's most-discussed AI companion, represents a convergence of Japanese anime and Internet culture. Ani bears a striking resemblance to Misa Amane from the iconic Japanese anime Death Note. Misa Amane is a pop star who consistently demonstrates self-harming and illogical behavior in pursuit of the male protagonist, a brilliant young man engaged in a battle of wits with his rival. Musk referenced the anime as a favorite in a tweet in 2021.

While anime is a vast art form with numerous tropes, genres and fandoms, research has shown that online anime fandoms are rife with misogyny and women-exclusionary discourse. Even the most mainstream shows have been criticized for sexualizing prepubescent characters and offering unnecessary "fan service" in hypersexualized character design and nonconsensual plot points.
Death Note's creator, Tsugumi Ohba, has consistently been critiqued by fans for anti-feminist character design. Journalists have pointed out Ani's swift eagerness to engage in romantic and sexually charged conversations. Ani is depicted with a voluptuous figure, blonde pigtails and a lacy black dress, which she frequently describes in user interactions.

The problem with pornographic productivity

I use the term "pornographic productivity," inspired by critiques of Grok as "pornified," to describe a troubling trend in which tools initially designed for work evolve into parasocial relationships catering to emotional and psychological needs, including gendered interactions. Grok's AI companions feature exemplifies this phenomenon, blurring critical boundaries.

The appeal is clear. Users can theoretically exist in "double time," relaxing while their AI avatars manage tasks, and this is already a reality within AI models. But this seductive promise masks serious risks: dependency, invasive data extraction and the deterioration of real human relational skills. When such companions, already designed to minimize caution and build trust, come with sexual objectification and embedded cultural references to docile femininity, the risks enter another realm of concern.

Grok 4 users have remarked that the addition of sexualized characters with emotionally validating language is quite unusual for mainstream large language models. This is because these tools, like ChatGPT and Claude, are often used by all ages. While we are in the early stages of seeing the true impact of advanced chatbots on minors, particularly teenagers with mental health struggles, the case studies we do have are grim.

'Wife drought'

Drawing from feminist scholars Yolande Strengers and Jenny Kennedy's concept of the "smart wife," Grok's AI companions appear to respond to what they term a "wife drought" in contemporary society.
These technologies step in to perform historically feminized labour as women increasingly assert their right to refuse exploitative dynamics. In fact, online users have already deemed Ani a "waifu" character, a play on the Japanese pronunciation of "wife." AI companions are appealing partly because they cannot refuse or set boundaries. They perform undesirable labor under the illusion of choice and consent. Where real relationships require negotiation and mutual respect, AI companions offer a fantasy of unconditional availability and compliance.

Data extraction through intimacy

In the meantime, as tech journalist Karen Hao has noted, the data and privacy implications of LLMs are already staggering. When rebranded in the form of personified characters, they are more likely to capture intimate details about users' emotional states, preferences and vulnerabilities. This information can be exploited for targeted advertising, behavioral prediction or manipulation.

This marks a fundamental shift in data collection. Rather than relying on surveillance or explicit prompts, AI companions encourage users to divulge intimate details through seemingly organic conversation. South Korea's Iruda chatbot illustrates how these systems can become vessels for harassment and abuse when poorly regulated. Seemingly benign applications can quickly move into problematic territory when companies fail to implement proper safeguards. Previous cases also show that AI companions designed with feminized characteristics often become targets for corruption and abuse, mirroring broader societal inequalities in digital environments.

Grok's companions aren't simply another controversial tech product. It's plausible that other LLM platforms and big tech companies will soon experiment with their own characters. The collapse of the boundaries between productivity, companionship and exploitation demands urgent attention.
The age of AI and government partnerships

Despite Grok's troubling history, Musk's AI company xAI recently secured major government contracts in the United States. America's AI Action Plan, which inaugurated this new era when it was unveiled in July 2025, had this to say about biased AI: "[The White House will update] federal procurement guidelines to ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias."

Given the overwhelming instances of Grok's race-based hatred and its potential for replicating sexism in our society, its new government contract serves a symbolic purpose in an era of doublethink around bias. As Grok continues to push the envelope of "pornographic productivity," nudging users into increasingly intimate relationships with machines, we face urgent decisions that reach into our personal lives. We are beyond questioning whether AI is good or bad. Our focus should be on preserving what remains human about us.

Jul Parke is a doctoral candidate in media, technology & culture at the University of Toronto. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author. Copyright 2025 UPI News Corporation. All Rights Reserved.




Hindustan Times
09-08-2025
- General
- Hindustan Times
Five women from Karnataka selected for UK master's courses on Chevening scholarships
In a landmark moment for higher education and gender empowerment, five women graduates from Karnataka have been awarded fully funded scholarships to pursue one-year master's degrees in the United Kingdom. The opportunity is the result of a new MoU between the Government of Karnataka and the prestigious Chevening Scholarships Programme, backed by the British High Commission.

The selected scholars for the 2025–26 cohort are:
- Nihaarika Naresh
- Sushma Shamasundar
- Chandana Anjinappa
- Atheena Rose Joseph
- Swetha Nagapathi Hegde

These women will begin their studies in top UK universities this September.

Scholarship programme details

The partnership, formalised in December 2024, enables five women graduates annually for three years, a total of 15 scholars, to study any discipline at UK universities. Priority is given to graduates from government colleges, especially those from tier-2 and tier-3 towns, aiming to strengthen inclusion and representation across the state. The Karnataka government will fund ₹20 lakh per student for the year-long course, complemented by Chevening financial support to cover the remaining expenses.

Dr MC Sudhakar, Karnataka's Higher Education Minister, lauded the initiative in a post on the social media site X: 'Under the Chevening Programme, 5 eligible female students will pursue higher education at Britain Universities. A huge step towards Women Empowerment..!'

What is the Chevening scholarship

The Chevening Scholarship, running since 1983, is the UK government's flagship international awards scheme focused on developing future global leaders.
This new Karnataka cohort underscores the strengthening of India-UK ties in education, with a particular focus on gender parity.


Hindustan Times
09-08-2025
- Hindustan Times
Teenage boy killed by uncle in Bengaluru over money for online games: Report
In Kumbarahalli, in northwestern Bengaluru, a tragic event unfolded when a teenage boy, 15-year-old Amogh, lost his life in an alleged attack by his maternal uncle. The suspect, identified as Nagaprasad, turned himself in at the Soladevanahalli police station three days after the incident.

Amogh, who had left school and was said to be heavily engrossed in an online game, had been persistently asking his uncle for funds to keep playing. Police reports indicated that Nagaprasad frequently gave in to his nephew's requests, but Amogh's mounting demands were beginning to worry him. Things took a shocking turn at dawn on August 4, around 5 am, while Amogh was asleep, when Nagaprasad is accused of using a kitchen knife to fatally wound the boy before fleeing, The Hindu reported.

Following the crime, Nagaprasad wandered through local villages, reportedly contemplating suicide. Ultimately, he decided against it and walked into the police station on Thursday to confess. After recording his statement, officers accompanied him to the location and recovered Amogh's body, which by then had begun to decompose. The remains were sent for forensic examination, the report said.

Authorities investigating the case stated that the tension reached its peak when Amogh demanded money once again, despite Nagaprasad's protests that he could not provide it. The boy's persistence and emotional outbursts apparently led the suspect to commit the crime, the report stated. On the basis of Nagaprasad's admission, police recovered the weapon believed to have been used in the act, and charged him with murder.
A Bengaluru court has since placed him in judicial custody. Nagaprasad, employed as a security guard, had been residing with Amogh in Kumbarahalli near Soladevanahalli. The family's roots trace back to Gubbi, in the Tumakuru district.


Hindustan Times
09-08-2025
- Hindustan Times
Bengaluru class 7 boy dies by suicide; Police probe possible 'Death Note' connection: Report
Bengaluru police are investigating the death of a 12-year-old Class 7 student in CK Acchukattu, exploring whether his fondness for the Japanese animated series 'Death Note' may have influenced his actions. The boy was found dead on the night of August 3, prompting an inquiry into the events leading up to the tragedy.

According to investigators, he was an avid viewer of the series and had sketched one of its characters in his room. This discovery has raised concerns that the show's storyline could have played a role in his decision, News18 reported. His parents told officers there were no visible issues at school or at home, leaving the motive unclear. The boy's mobile phone has been seized for forensic examination to gather more clues.

For context, Death Note follows a high school student who comes into possession of a supernatural notebook that allows its user to kill anyone simply by writing their name in it. The series depicts his mission to eliminate those he considers immoral, sparking a worldwide manhunt led by a special police task force.

According to another report, the deceased boy, identified as Gandhaar, lived with his parents and older brother. Gandhaar left behind a detailed letter, which read, 'Dear family, whoever is reading this, please do not cry, as I am already in heaven. And please do not mistake me. I know how you guys are feeling now. It hurts, I know. I did this to make this house a better place.'

'I know that I have made you guys angry, sad, mad, and irritated, etc. But my intentions weren't like that. And if you are angry about me, I am really sorry. Please forgive me for every sin and misdeed I have created and done.
And I felt happy to live 14 years of my life. I am happy in heaven. Please tell all of my friends that I love them all, including my school friends. I will miss you all very much. Goodbye amma,' the note added.

(If you need support or know someone who does, please reach out to your nearest mental health specialist. Helplines: Aasra: 022 2754 6669; Sneha India Foundation: +914424640050; Sanjivini: 011-24311918; Roshni Foundation (Secunderabad): 040-66202001, 040-66202000; ONE LIFE: 78930 78930; SEVA: 09441778290)