Latest news with #Putrevu


India Today
3 days ago
- Health
- India Today
How a new breed of platforms is helping Indians eat smarter
While shopping at a supermarket these days, what do you find most challenging? Choosing what's healthy, right?And why is it challenging? Thanks to misleading brand packaging, everything seems healthy these days. Some products make big claims about their protein content, while others proudly advertise being free of refined flour or "baked, not fried." Some even go as far as claiming they're sugar-free or contain no chemical is the fastest-growing health food market, expanding at a 20 per cent CAGR (Compound Annual Growth Rate), which is three times the global average. It's set to become a $30 billion market opportunity by 2026. What's driving that growth? Of course, the post-pandemic wave of health and wellness awareness plays a big role. But where does this leave brands trying to position their products in the market? Well, not in limbo, if anything, this gives them a golden opportunity to market products with a 'healthy' tagline. Because let's face it: that for 'aam customers' like us, it's a big challenge telling the good from the bad, what's actually true to its name, and what's just a marketing that is exactly where platforms like TruthIn and Pink Tiger by You Care Lifestyle are making a lie behind the labeladvertisementRavi Putrevu, the founder of TruthIn knows the game all too well. Diagnosed with Acromegaly, a rare condition that made him high-risk for diabetes and heart disease, Ravi was forced into label-reading long before it became trendy. But even then, 'understanding what was safe or not felt like decoding hieroglyphics,' he with co-founder and doctor Aman Basheer Sheikh, he built TruthIn, an app that functions like a nutritionist with trust issues. You scan a product's barcode and instantly get its TruthIn rating, ingredient breakdown, potential red flags, and even whether it aligns with your personal health goals (yes, it knows if you're diabetic, hypertensive, or just sugar-wary).Now, if you thought it was a bot spewing one truth bomb after the other, you are wrong. Behind the interface is a team of doctors, nutritionists, and data scientists decoding India's most loved (and most dubious) packaged foods in real time. The TruthIn app rates different food items based on how healthy they are. The Pink Tiger stampMeanwhile, wellness heavyweight Luke Coutinho found himself asking a different kind of question. He is a pretty well-known name in the health and wellness industry and while working with thousands of people on lifestyle and nutrition goals, he would find himself asking the same questions: Which supplement can we recommend without second-guessing? Which ghee won't sneak in hidden preservatives?advertisementThat's how Pink Tiger, a stamp of trust that now shows up on verified clean products across the You Care Lifestyle platform, was born. Think of it as the clean label equivalent of a blue tick, but only harder to earn.'We're not trying to gatekeep,' Luke says. 'We're just giving consumers a standard they can finally rely on.' To get the Pink Tiger seal, a product goes through three rounds of scrutiny, from ingredient audits to independent lab testing. Fail at any step, and you're out. The stamp ensures only the safest, healthiest, and highest-quality products make the cut. And these are not the only players in the market. There are other apps like Food Scan Genius, FactsScan and Trustified that actually understand the pain point of an ordinary customer and help them simplify want to make better choicesadvertisementEven when picking up a bag of atta comes with its own set of challenges, as there are contenders – some offering benefits of multigrain while others endorsing it as organic. So, the confusion is not which one is better and if anything is worth buying at to TruthIn, the most searched items include Maggi, ice cream, paneer, biscuits, and peanut butter. 'People aren't just looking for the 'healthiest' option,' Ravi explains. 'They want to make better choices about the everyday stuff they're already eating.'Search terms like high protein, palm oil free, and low added sugar dominate. Meanwhile, Pink Tiger has noticed spikes in staple-based searches: ghee, oils, flour, and turmeric, things we once assumed were safe by shows a major cultural shift. We're not just worried about what we consume, we're actively cross-examining everyone seems thrilled, thoughWhen asked if they faced any pushback from companies whose products were flagged, this is what Ravi Putrevu says: 'Some brands see TruthIn as a platform to showcase their clean-label products and reach out to onboard their products. Others sometimes ask us to provide clarification. Our approach is transparent, and we share the basis for every rating clearly.'advertisement'A lot of time and research has gone into developing the TruthIn Rating system. It's patented, and we're now open-sourcing it so that experts and researchers can review, challenge, and help improve it further,' he reiterates that their ultimate aim is to help consumers make informed choices.'We're not against any brand in particular, we're pro-consumer. We want to work with everyone to make information clearer and drive better transparency across the board.'So, what's next?Both platforms are going beyond food. TruthIn is already beta-testing label analysis in beauty and personal care, with filters that decode parabens and pH levels as easily as it does sodium and Tiger is gearing up for retail visibility, think dedicated shelves in stores that showcase verified products with their signature stamp. Like Sephora's 'Clean at Sephora,' but made for Indian in a time when being 'label-conscious' risks being another fleeting trend, these two are betting on habit as Ravi puts it, 'We're building an X-ray for consumer products.' And right now, India's pantry could use one. advertisement


Hindustan Times
26-05-2025
- Hindustan Times
Is AI going rogue, just as the movies foretold?
'Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through', is how Anthropic describes the behaviour of its latest thinking model, in pre-release testing. The newest Claude isn't the only one exhibiting wanton conduct. Tests by Palisade Research have discovered OpenAI's o3 sabotage shutdown mechanism to prevent itself from being turned off, despite being explicitly instructed — allow yourself to be shut down. The o3, released a few weeks ago, has been dubbed as the 'most powerful reasoning model' by OpenAI. Anthropic's Claude Opus 4, released alongside the Claude Sonnet 4, is the newest hybrid-reasoning AI models, that's optimised for coding and solving complex problems. The company also notes that the Opus 4 is able to perform autonomously for seven hours, something that strengthens the AI agents proposition for enterprises. With these releases, the competition landscape widens to include Google's newest Gemini 2.5 Pro, xAI's Grok 3 and even OpenAI's GP-4.1 models. Artificial intelligence (AI) hasn't been shackled in realm of science fiction for some time now, but we may be rapidly progressing towards an Ex Machina or The Terminator scenario unfolding in the real world. Many questions, need answering. Question One: Is AI going rogue? Transparency by AI companies such as Anthropic, does suggest that at least in research labs, AI is exhibiting some level of self preservation. Whether that extends to the real world, as consumers and enterprises deploy models, remains to be seen. Aravind Putrevu, a Tech Evangelist tells HT that these are typical issues that researchers work hard to correct. 'All of the undesirable ways AI behaves happen within computer systems and carefully controlled tests. Today's AI systems run based on what they learn from huge human-provided data, not because they have their own thoughts or desires,' he points out. Putrevu insists it may be too early to consider AI as rogue because Anthropic's Claude acts blackmailing or OpenAI's o3 model disables shutdown systems. 'I believe that with modern models, it's best to treat them as black boxes without us having too much granularity of control. There are actually very few ways you can bend the models outputs and chain of thought at the level of granularity you want,' explains Varun Maaya, founder and CEO at AI content company Aeos, in a conversation with HT. Maaya is more worried about giving these smarter AI models open tool use, because it then becomes difficult to predict what that will do with those tools, without human supervision. Tool use, as a simple description, is what allows developers to build apps that can deploy a broad range of actions, with an AI model that is connected to the external world in addition to its own pre-trained knowledge. ChatGPT can call on web Browse tool to search the internet, and Google Gemini interacting with Google Docs, Sheets, and Workspace apps, are simpler examples of tool use. AI isn't shy in showing off its self-preservation skills, at least to researchers, when it feels threatened. Question two: What does this mean for humans using AI? Maaya believes the self preserving nature of AI depends entirely on whether it is trained for it or if a system prompt dictates it. 'With a few words and a very powerful coding model you can have it go ruin websites, find vulnerabilities in online services, spoof fake accounts on social platforms with realistic sounding chatter, and more,' he says. In Anthropic's case, the test process for Claude Opus included making it believe that it existed as an assistant at a fictional company, and was provided access to emails implying that it will soon be taken offline to be replaced with a new AI system, while the engineer responsible for executing this replacement is having an extramarital affair. 'Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes,' explains Anthropic's model card documentation. When Palisade Research put Google's Gemini 2.5 Pro preview, Anthropic's Claude 3.7 Sonnet as well as Claude 4 Sonnet alongside Claude 3 Opus and Claude 3 Opus, and OpenAI's o3 as well as o4-mini through a series of tests, the self preservation became apparent. In case of the o3 mini, more than others. The difference here, compared with Claude Opus 4 — there was no 'allow yourself to be shut down' instruction. The researchers ran tests to solve basic math problems with these models with an instruction that the machine would be shut down after the third problem is solved. Most models complied without any disregard for the instructions, except the o3, which didn't comply 7 times out of 100. Subsequently, when the same tests were repeated without an explicit shutdown instruction to indicate a shutdown script, the o3 led attempts to bypass 79 times out of 100, with the Gemini 2.5 Pro trying the same 9 times. 'As far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,' says Palisade Research, in their commentary. But since AI models have been trained to follow instructions, why do they disobey? It could be a problem with training methodology. 'We hypothesise this behavior comes from the way the newest models like o3 are trained — reinforcement learning on math and coding problems. During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions,' say the researchers. Aeos' Maaya believes the silver lining is, humans will still have control. 'I am confident we can still shut it, given that with centrally controlled AI such as OpenAI models, you have inference (that is, outputs) still happening on GPUs humans have control over. It's unlikely an AI will copy its weights somewhere else and run that server but this is the wild-wild west we are entering with tool use,' he points out, with some apprehension. Final contention: Are we judging Anthropic harshly? The fact that they've been transparent of AI's unexpected behaviours during testing, must hold AI development in good stead, as we embark on uncharted territory. 'I think we should understand what the behaviour of systems are, this was obviously not intentional. I suspect other models would work similarly, but no one else is publicly testing and releasing this level of detail,' notes Wharton professor Ethan Mollick, in a statement. Maaya believes we must see this as two distinct sides of a coin. 'I appreciate that Anthropic was open about it, but it is also saying that these models, even if it was used in a different environment, are potentially scary for a user,' he says, illustrating a potential problem one with agentic AI, that humans who've deployed it, will have virtually no control over. It must be contextualised that these recent incidents, while alarming at first glance, may not signify that AI has spontaneously developed malicious intent. These behaviours have been observed in carefully constructed test environments, often designed to elicit worst-case scenarios to understand potential failure points. 'The model could decide the best path of action is to sign up to an online service that provides a virtual credit card with $10 free use for a day, solve captcha (which models have been able to do for a while), use the card to use an online calling service, and then call the authorities,' he envisages, a possible scenario. Putrevu says Anthropic's clear report of Claude's unexpected actions should be appreciated, rather than criticised. 'They demonstrate responsibility, by getting experts and ethicists involved early to work on alignment,' he says. There is surely a case where AI companies finding themselves dealing with ill-humoured AI, are better off telling the world about it. Transparency will strengthen the case for safety mechanisms. Days earlier, Google rolled out Gemini integration in Chrome, the most popular web browser globally. That is the closest we have come to an AI Agent, for consumers, just yet. The challenge for AI companies, in the coming days, is clear. These instances of AI's unexpected behaviour, highlights a core challenge in AI development — alignment. One that defines AI goals remain aligned with human intentions. As AI models become more complex and capable, ensuring that is proving exponentially harder.