Phoebe Gates Taps ChatGPT To Power Viral Growth Of Fashion Startup Phia

Phoebe Gates, daughter of Microsoft co-founder Bill Gates, is carving out her own space in fashion technology. Her startup, Phia, is an AI-driven shopping assistant designed to help users find the best deals on clothing and accessories by scanning millions of listings across popular resale and retail platforms.
What makes Phia stand out is not just its shopping tech: the team also uses ChatGPT to fuel its marketing. Gates and her co-founder Sophia Kianni said they used AI to study viral TikTok videos and understand what makes content popular, insights that helped them craft engaging videos and rapidly boost Phia's visibility, as reported by Business Insider.
Phia aggregates listings from websites like eBay, Poshmark and The RealReal, offering users instant feedback on whether a price is a good deal with a simple "Should I Buy This?" button. This feature helps shoppers navigate the growing secondhand market with confidence, according to The Verge.
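Phia has not published how its deal check actually works, but a feature like "Should I Buy This?" conceptually boils down to comparing an asking price against comparable listings. Below is a minimal illustrative sketch in Python; the function name, thresholds and logic are hypothetical assumptions for illustration, not Phia's real method.

```python
from statistics import median

def should_i_buy(asking_price: float, comparable_prices: list[float]) -> str:
    """Toy deal check: compare an asking price to the median of
    comparable listings. Purely illustrative -- Phia's actual model
    is not public, and these thresholds are invented."""
    if not comparable_prices:
        return "Not enough data"
    typical = median(comparable_prices)
    if asking_price <= 0.8 * typical:  # 20%+ below the going rate
        return "Good deal"
    if asking_price <= 1.1 * typical:  # roughly the going rate
        return "Fair price"
    return "Overpriced"

# Example: a jacket listed at $45 against recent comparable listings
print(should_i_buy(45.0, [60.0, 55.0, 70.0, 52.0]))  # -> Good deal
```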
Despite her famous last name, Phoebe insists she built Phia without financial help from her parents. On their podcast, "The Burnouts," she and Kianni discussed their focus on sustainability and smart shopping and their goal of building an app for eco-conscious consumers, People reported.
By combining innovative AI tools with savvy social media strategies, Phoebe Gates and her team have positioned Phia as an exciting new player in online retail, especially among young shoppers who care about sustainability and value.

Related Articles

US Judge Backs Using Copyrighted Books To Train AI
Int'l Business Times · 12 hours ago

A US federal judge has sided with Anthropic over the training of its artificial intelligence models on copyrighted books without authors' permission, a decision that could set a major legal precedent for AI development. District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models on books it had bought or pirated was allowed under the "fair use" doctrine of the US Copyright Act. "Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," he added in the 32-page ruling, comparing AI training to how humans learn by reading books.

Tremendous amounts of data are needed to train the large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We are pleased that the court recognized that using 'works to train LLMs was transformative,'" an Anthropic spokesperson said in response to an AFP query. The judge's decision is "consistent with copyright's purpose in enabling creativity and fostering scientific progress," the spokesperson added.

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT. However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use. Along with downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital format, according to court documents. Anthropic's aim was to amass a library of "all the books in the world" for training AI models as it deemed fit, the judge said. While training AI models on the pirated content was not itself a violation, downloading pirated copies to build a general-purpose library constituted copyright infringement, regardless of eventual training use.

The case will now proceed to trial on damages related to the pirated library copies, with potential financial penalties. Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options. Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

Tech-fueled Misinformation Distorts Iran-Israel Fighting
Int'l Business Times · 4 days ago

AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods -- such tech-enabled misinformation is distorting the Israel-Iran conflict, fueling a war of narratives across social media. The information warfare unfolding alongside ground combat -- sparked by Israel's strikes on Iran's nuclear facilities and military leadership -- underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.

The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.

After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport. The videos were widely shared across Facebook, Instagram and X. Using a reverse image search, AFP's fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.

There has been a "surge in generative AI misinformation, specifically related to the Iran-Israel conflict," Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP. "These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication."

GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict. The company linked the visually compelling videos -- depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer -- to Google's Veo 3 AI generator, known for hyper-realistic visuals. The Veo watermark is visible at the bottom of an online video posted by the news outlet Tehran Times, which claims to show "the moment an Iranian missile" struck Tel Aviv.

"It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion," said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley. Farid offered one tip for spotting such deepfakes: Veo 3 videos are normally eight seconds long, or a combination of clips of similar duration. "This eight-second limit obviously doesn't prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share," he said.

The falsehoods are not confined to social media. Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims -- ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots. Sources spreading these false narratives include Iranian military-linked Telegram channels and state media affiliated with the Islamic Republic of Iran Broadcasting (IRIB), which is sanctioned by the US Treasury Department, NewsGuard said. "We're seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience," McKenzie Sadeghi, a researcher with NewsGuard, told AFP. Sadeghi described Iranian citizens as "trapped in a sealed information environment," where state media outlets dominate in a chaotic attempt to "control the narrative."

Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women's protests and urging people to take to the streets. Adding to the information chaos were online clips lifted from war-themed video games. AFP's fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran; the footage bore striking similarities to the military simulation game Arma 3. Israel's military has rejected Iranian media reports claiming its fighter jets were downed over Iran as "fake news."

Chatbots such as xAI's Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said. "This highlights a broader crisis in today's online information landscape: the erosion of trust in digital content," BitMindAI's Miyachi said. "There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse."
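Farid's eight-second tell is simple enough to automate as a first-pass screen. Below is a minimal sketch in Python using ffprobe from the FFmpeg suite; the flagging function and its tolerance are our own assumptions, and as Farid notes, a matching duration only justifies a closer look, never a verdict.

```python
import subprocess

def clip_duration_seconds(path: str) -> float:
    """Read a video's duration in seconds with ffprobe (FFmpeg suite)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

def near_eight_second_segments(path: str, segment: float = 8.0,
                               tol: float = 0.25) -> bool:
    """Heuristic flag (our assumption, not a proven detector): True if
    the clip's length is close to a multiple of ~8 seconds, the segment
    length Farid associates with Veo 3 output. A hit means 'pause and
    fact-check', not 'this video is fake'."""
    duration = clip_duration_seconds(path)
    remainder = duration % segment
    return duration >= segment - tol and min(remainder, segment - remainder) <= tol

# Usage with a hypothetical local file:
# print(near_eight_second_segments("suspect_clip.mp4"))
```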
