
I used an AI stylist for five days and hated it
For five days, generative AI became my personal stylist. It told me what to wear and how to wear it. It chose my hairstyle and lip colour. It even suggested to the manicurist that yellow was my colour.
Generative AI—which can do everything from writing code and conducting makeshift therapy sessions to creating designs for fashion designers—could add close to $300 billion to the operating profits of the apparel, fashion and luxury sectors over the next five years, according to a 2023 report by consulting firm McKinsey. It can do so, the report adds, by helping streamline design and supply-chain processes, but more importantly, by letting shoppers try on clothes via their devices and by adding chatbots as shopping assistants to help people sift through their wardrobes and make better style decisions.
So, I decided to dive into the world of AI with three apps to see if algorithms could change the way I dress.
Also read: Gen Z can't stop buying fast fashion and won't repeat outfits—here's why
One was ChatGPT, the popular US-born general-purpose tool, and the other two were newly launched homegrown platforms, Glance AI and Slayrobe. These three AI advisers, or stylists, saved me time and lessened the burden of making a decision each time I opened my wardrobe. But they seemed to have one agenda: turn me into a blind follower of trends.
Sample this conversation with ChatGPT.
Me: Hey ChatGPT, what should I wear to office today? I work in a newsroom and I want to wear something corporate-ish but a little casual, and it should all look effortless.
ChatGPT: That's a great idea. I'd suggest going for something like this:
A simple, well-fitted blazer in a neutral colour (like navy, black or grey), but leave it unbuttoned to keep it relaxed. Pair it with a plain shirt. Chinos in a neutral shade, or dark jeans if you want to keep it even more casual. Casual loafers or stylish sneakers. Keep accessories minimal.
Me: This is too boring. And it's 40 degrees Celsius here, so blazer is out. I'm a fashion writer and I want something different and stylish. Suggest more options?
ChatGPT: Wow! A fashion writer, I'm jealous of you, girl! Of course, here's something for that "unbothered but unmistakably stylish" look. Lightweight cotton shirt with 3/4th sleeves or rolled long sleeves. Tuck it in slightly at the front (French tuck) for that casual-editor vibe. Straight or wide-leg trousers in breathable fabrics like linen. Chunky flats or sleek platform slides. Bold earrings or a structured minimalist necklace. Go for a low sleek bun. Add gold hoops—bam, you're Anna Wintour's cooler cousin.
It's not bad advice, but recommending rolled-up sleeves or an ankle-length skirt for a Parisian touch isn't exactly going to up my style game.
Then, I turned to Slayrobe. After logging into the app, you have to enter your height, identify your body shape (round, triangle, rectangle, inverted triangle, hourglass), the features you want to flaunt/downplay, and your "style agenda" (Effortless Chic, Dreamy Diva, Bold Boss, Timeless Queen). Then comes a barrage of looks, from clothes to jewellery, all based on algorithms trained on over 70,000 styling decision points.
You can also buy the suggested options—the app leads the user to the respective product's brand website. There's colour analysis in case you wish to know which shades are meant for your skin tone. And a wardrobe organiser allows you to upload a photo of your garments and create looks, like Cher Horowitz's outfit-picking computer in Clueless (1995).
"Women are being overfed with trends, and despite this, the universal feeling remains: We have nothing to wear," says Slayrobe's founder-CEO Pooja Lalwani, who earlier worked on fashion-tech initiatives at Google and Tommy Hilfiger. "I wanted to create one platform where you can get help organising your wardrobe, style yourself better, know your colours and then post about it." The women-only app is designed to let you post your photos on the feed and as Stories, and follow other people—much like Instagram.
After choosing Dreamy Diva as my "vibe", I asked the stylist bot to help me find some timeless office-wear ideas. The responses: "Embrace soft, flowing fabrics", "accessorise with delicate details", don't "overdo with embellishments"... again, all useful tips but nothing new.
But for a school student like Aashna Gulati, such information is useful. "I am still figuring out what my style is," says Mumbai-based Gulati, 16, who constantly experiments with AI apps to find fashion inspiration. "I shop after seeing (on AI stylist apps) what will look good on me. I want to dress like Kendall (Jenner) and Kareena (Kapoor Khan) but I don't have their stylists, so this is the best option for me."
Also read: The salwar returns to the runway with a new look
That's the promise of an AI stylist. It might not give you exactly what you're looking for but it can direct you towards what you might like.
"AI gets the basics right. It can tell you how a sage green blouse, a pair of canary blue trousers and a burgundy bag will look together, but it can't tell whether that combo will suit you," says celebrity stylist Isha Bhansali, who's styled the likes of actor Ayushmann Khurrana. "It's also about how much you care about personal style. Many people are just happy with following trends and shopping accordingly, and AI helps with exactly that."
Glance AI works on this theme: it creates fully styled, shoppable looks based on your body type, personal taste, lifestyle and mood, all on the phone.
After entering my details, Glance AI offered a variety of looks laid out like a Pinterest moodboard. Under Ready For Safari, for instance, I could see myself—the app asks you to upload a selfie—wearing a moss green casual shirt from two brands, one for ₹664 and the other for ₹1,799, depending on your budget. The options were followed by text explaining how the collection "merged utility and sophistication, featuring earthy tones, linen and structured silhouettes for an elevated, adventure-ready and stylish look". A green shirt for a safari? Definitely not out of the box.
"We're built for Gen Z and millennial consumers," says Mansi Jain, senior vice-president and general manager, Glance AI. "These are users who want AI to curate for them—not just show options. It marks a shift from marketplace models to inspiration-first commerce, where the AI stylist becomes the first touchpoint of fashion discovery."
Can an AI stylist replace a human stylist? It depends on who you ask. Bhansali compares AI to a calculator, saying "it works on calculations, trends and not emotions, and fashion is so much about emotions."
Jain's take: "Unlike human stylists, AI stylists scale to millions, adapt in real-time, and learn from every interaction. Our models don't just react to trends—they understand your style graph, predict what will resonate, and visualise you in it with near-photorealistic accuracy."
The thing is, AI can offer answers when you're searching for your sense of style, but it cannot give you the personal touch that makes you, you. Whether it was the taupe blouse ChatGPT suggested I wear for an interview, the smokey party make-up I did after going through different looks on Glance AI, or the red salwar-kameez I zeroed in on for a family puja, they were all "uniform"-like answers. And so very boring and predictable.
As Bhansali says, "If I told ChatGPT I am on my period, it won't suggest anything white because that information has been fed into it. But what if my mood is to wear white? Style can be dictated by algos, not personal style."
When I asked ChatGPT if it believes AI stylists can supersede human counterparts, it said: "AI is the calculator. The human stylist is the architect. Both can do math. Only one can build the Taj Mahal."
Also read: Why Labubu dolls, Disney cartoons have entered the adult wardrobe

ChatGPT's ability to remember and reference past conversations allows it to personalize responses, making interactions more seamless and context-aware. For example, you can ask, 'Based on our past conversations, what do you know about me?" and it will tailor answers using stored data. While this 'long-term memory" enhances human-AI interaction, it poses risks: ChatGPT might retain sensitive details: personal, financial, or otherwise, raising privacy concerns if not managed properly. How to access: Available in ChatGPT's settings (ensure 'Reference chat history' feature is enabled). hatGPT's memory feature can help you: Example: 'Based on what you know about me from past conversations, help me list potentially sensitive and personal things you know about me." 'Please forget [insert specific detail, e.g., my phone number]." What makes this feature special? Pro tip: Use AI tools smartly, but always prioritize privacy. Mint's 'AI tool of the week' is excerpted from Leslie D'Monte's weekly TechTalk newsletter. Subscribe to Mint's newsletters to get them directly in your email inbox. Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators. Jaspreet Bindra is co-founder and CEO of AI&Beyond. Anuj Magazine is also a co-founder.