Latest news with #DigitalPersonalDataProtectionAct


Hindustan Times
3 days ago
- Politics
- Hindustan Times
An effective, credible Data Protection Board
Earlier this year, the ministry of electronics and information technology (MeitY) sought public feedback on the draft rules for the implementation of the Digital Personal Data Protection Act. At present, stakeholder feedback is being considered, and the final rules await imminent enforcement. It is therefore important to understand the effectiveness of the institutional framework of the Data Protection Board (DPB) established under the Act and operationalised through the rules. DPB, an enforcement and adjudicatory body, will receive complaints from individuals and references from the government, conduct inquiries, and impose penalties on organisations failing to comply with the Act's provisions.

The draft rules prescribe the operational procedures for the implementation of the Act, but they are limited by the design envisaged under the Act. The lean institutional design and narrow scope of powers will constrain what the rules can achieve. Yet, there are three key aspects, achievable through the rules, that can enhance the performance of DPB.

First, the effective functioning of DPB requires designing an institution with adequate independence and functional autonomy, sufficient expertise and capacity, and necessary accountability measures. Institutional independence is based on several factors, but a great deal can be achieved through how its members are selected and appointed. The proposed rules empower the Union government to establish a selection committee to recommend DPB members, including its chairperson. The committee selecting the chairperson will be led by the Cabinet Secretary and have the secretaries of legal affairs and MeitY as well as two government-appointed experts. Similarly, the selection committee for DPB members will be headed by the MeitY secretary and have the secretary of legal affairs and two government-nominated experts. The composition of such selection committees is often skewed towards serving Union government officials.
While the involvement of the executive is inevitable, a lack of diversity in the selection committee can result in partisan and biased appointments. A more diverse selection committee could include members from the legislature, judiciary, civil society, and other stakeholder groups. This would align better with principles of impartiality and ensure the much-needed diversity of stakeholder perspectives. There are enough examples of this, such as the selection committees for members and chairs of the Competition Commission of India and the Central Information Commission, among others. Institutional independence is vital for public trust and effective governance, and the selection and appointment processes are foundational to this.

Second, the need for transparency in appointments, as well as in the proceedings and functioning of DPB, cannot be overstated. The selection committees will evaluate candidates based on qualifications in data governance, dispute resolution, information technology, and the digital economy, as required by the Act. They will also assess the integrity and practical experience of candidates. To ensure transparency, the committee's recommendations and a brief justification of each candidate's eligibility should be made public. Predictability and certainty in dispute resolution will guide stakeholder decisions on approaching DPB. It will also allow stakeholders to study trends, gauge effectiveness, and critically analyse DPB's performance. One way to ensure this is to impart maximum transparency to the resolution process. This can be done by publishing DPB's orders and findings along with their reasoning and rationale. Periodic guidance on complex issues relating to data privacy, and on how DPB may respond to concerns relating to any emerging technology, would be invaluable but is potentially outside the scope of the board's current design. The rules could also provide for the disclosure of minutes of meetings to promote transparency in the decision-making process.
Third, DPB should not suffer delays in appointments and the resolution of complaints. Delayed appointments in statutory bodies affect enforcement of the law and impede policy decisions. They also lead to a backlog of appeals and complaints from the public, resulting in a trust deficit and erosion of confidence in the institutional grievance redressal mechanism. To prevent this, it would be useful for the rules to stipulate that appointments against any future vacancy be made before the date of the vacancy, to maintain the smooth operation of the board.

There is considerable potential for improving the institutional design of DPB through the draft rules to create a more independent, transparent, and reliable body. Institutions responsible for protecting core fundamental constitutional values, such as the right to privacy, must be independent to ensure effectiveness and credibility, and to strengthen the trust of civil society, the market, and industry.

Gangesh Varma and Yaqoob Alam work with the Technology and Policy Practice at Saraf and Partners, a law firm. The views expressed are personal.


Mint
6 days ago
- Entertainment
- Mint
Seen, sent, but never read – WhatsApp's new privacy story
In its boldest marketing move yet, WhatsApp has launched 'Not Even WhatsApp', a global campaign that puts privacy front and centre. The 60-second TV spot in India, directed by Achowe and shot across Delhi's Yamuna banks and Chandni Chowk, flips the camera to the app's POV, where your most mundane messages stay unseen, even by WhatsApp itself. With Aamir Khan lending his voice in India, the film plays like a love letter to everyday messaging: from voice notes to moms, to gossip sessions and late-night confessions. It's all end-to-end encrypted, the ad reminds us, and that's the selling point, wrapped in local sights, sounds, and sentiment.

The timing is strategic. As India revisits data protection laws and rivals like Signal continue their quiet rise, Meta is reasserting trust. A star like Khan adds credibility without triggering political baggage. The campaign also highlights the new 'Advanced Chat Privacy' toggle, though subtly, alongside WhatsApp's other privacy tools like Privacy Checkup. It's smooth, emotional, and miles ahead of the usual tech gobbledygook. If Meta follows it up with an intuitive product experience, this could help fix a trust gap it has long been trying to bridge.

WhatsApp's latest ad campaign doesn't just sell a feature, it sells a feeling. In its biggest global marketing push yet, the messaging giant has unveiled 'Not Even WhatsApp', a bold declaration that your chats are yours alone. No snooping. No leaks. Not even from the app itself. The 60-second film, directed by Achowe of Chalk & Cheese Films and voiced in India by Aamir Khan, takes viewers into the inner world of WhatsApp, seen from the other side of your phone screen. Familiar, everyday messages float by in stylised motion, but no one is reading.
It's a neat visual metaphor for end-to-end encryption, dramatised without ever feeling technical. Shot across Delhi, including along the Yamuna and through the chaos of Chandni Chowk, the film grounds a global campaign in hyper-local familiarity. Khan's presence adds quiet gravitas. There's no hard-sell, no tech babble, just the idea that your most mundane exchanges, from mom's voice notes to midnight confessions, deserve absolute privacy.

The campaign couldn't have come at a more strategic time. WhatsApp may be India's most used messaging platform, but its trust reserves have taken hits, from misinformation forwards to regulatory tussles and rising competition from privacy-first players like Signal and Telegram. With the Indian government re-examining the Digital Personal Data Protection Act, Meta is clearly looking to pre-empt the trust question with storytelling, not statements.

And the stakes are higher than ever. As messaging apps increasingly double up as transaction hubs, health info archives, and workplace tools, privacy has gone from a niche concern to a mainstream demand. WhatsApp is responding with product updates like 'Advanced Chat Privacy', a new setting to keep content from being taken outside the app, and tools like Privacy Checkup. The ad gently nods to these, but wisely avoids turning into a product demo.

Still, the campaign's real win is tone. It doesn't panic you into caring about privacy. It normalises it. That restraint stands out in an advertising landscape obsessed with drama and data dumps. Will it be enough to shift sentiment? That depends on how easily users find and trust the new privacy tools. But in terms of narrative clarity, 'Not Even WhatsApp' sticks the landing. It's intimate without being intrusive, cinematic without losing cultural context, and local without looking like a retrofit.
For a brand often caught between global ambition and local anxiety, this is WhatsApp speaking softly, but saying something loud.
Yahoo
21-05-2025
- Business
- Yahoo
Vereigen Media Leads the Charge in Privacy-First B2B Marketing, Ensuring Compliance and Quality in Every Lead
Powering Compliant B2B Demand Gen with First-Party Data and Verified Engagement

AUSTIN, Texas, May 21, 2025 (GLOBE NEWSWIRE) -- As B2B marketers navigate an evolving digital landscape marked by increasing privacy regulations and shifts in data strategy, Vereigen Media is setting a new industry benchmark. While third-party cookies remain part of the ecosystem for now, the growing emphasis on data privacy and compliance is pushing marketers to future-proof their strategies. Vereigen Media leads this shift with a foundation built on first-party data, verified content engagement, and a zero-outsourcing model. This privacy-first approach ensures reliable demand generation that delivers real results without compromising compliance. According to Gartner's 2024 report on data privacy trends, over 60% of marketing leaders now rank compliance as a top priority for 2025. Growing regulatory pressure from laws like the General Data Protection Regulation (GDPR) in Europe, the Digital Personal Data Protection Act in India, and the expanded California Privacy Rights Act (CPRA) in the U.S. has forced a reckoning in how companies collect and validate prospect data. In this environment, transparency, ethical data practices, and data quality are not just advantages, they are essential. Discover how verified, privacy-first leads drive real results! Connect with Vereigen Media today! 'Today's marketing leaders are no longer chasing volume, they're chasing clarity, trust, and control,' said Anuj Pakhare, CEO of Vereigen Media. 'Our clients come to us because we don't just deliver leads, we deliver certainty. Every engagement is verified, every contact is validated, and every data point is compliant with modern regulations.' Vereigen Media operates exclusively through first-party data collected across its owned and operated publisher ecosystem. This model gives clients full visibility into how leads are generated and ensures that every prospect provides explicit opt-in consent before accessing client content.
Unlike vendors who rely on third-party sources or aggregators, Vereigen's leads are sourced and validated entirely in-house. A cornerstone of this approach is Verified Content Engagement, a process in which every prospect must actively engage with content before qualifying as a lead. This goes beyond a basic form fill; the user must spend measurable time with the content. If they don't meet a minimum engagement threshold, the lead is not passed along. This mechanism ensures that only genuinely interested prospects make it into the pipeline. Beyond digital engagement, every lead is further scrutinized through human verification by Vereigen Media's 200+ member validation team. Each record is manually cross-checked against public sources to confirm accuracy, eligibility, and alignment with the client's targeting criteria. This double-layered approach, first-party engagement followed by human validation, ensures not only compliance but also lead quality. 'The brands we work with want to move fast, but they cannot afford to move blind,' said Kari Martindale, Executive Director of Client Experience at Vereigen Media. 'They rely on us because we act as an extension of their internal teams. We're not just generating leads, we're protecting their brand, reputation, and ROI.' That trust is supported by performance. Recent client programs have demonstrated:
- More than 90% of leads converting to MQLs
- Fewer than 1% of records requiring replacement
- A return on ad spend (ROAS) 1.5x above industry benchmarks
Forrester's 2025 B2B Data Report echoes this effectiveness, noting that 78% of organizations relying primarily on first-party data report stronger conversion rates and customer relationships compared to just 49% among third-party data users. With a global database of over 107 million validated first-party contacts, Vereigen Media supports enterprise clients across North America, EMEA, APAC, and Latin America.
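The release describes a two-layer qualification process: a prospect must give explicit opt-in consent and spend a minimum amount of time engaging with content before the record is even passed to human validators. As a rough illustration only, the gating logic might look like the sketch below; the `Prospect` type, field names, and the threshold value are invented for this example and are not Vereigen Media's actual implementation.

```python
# Hypothetical sketch of an engagement-threshold lead filter: a lead
# qualifies only if the prospect opted in AND spent a minimum amount of
# time with the content. All names and thresholds are illustrative.
from dataclasses import dataclass

MIN_ENGAGEMENT_SECONDS = 120  # assumed threshold; the release gives no figure


@dataclass
class Prospect:
    name: str
    opted_in: bool            # explicit opt-in consent before content access
    engagement_seconds: int   # measured time spent with the content


def qualifies_as_lead(p: Prospect) -> bool:
    """A prospect passes only with consent and sufficient engagement;
    human verification would follow as a second layer."""
    return p.opted_in and p.engagement_seconds >= MIN_ENGAGEMENT_SECONDS


prospects = [
    Prospect("A", opted_in=True, engagement_seconds=300),
    Prospect("B", opted_in=True, engagement_seconds=15),   # form fill only
    Prospect("C", opted_in=False, engagement_seconds=400),  # no consent
]
qualified = [p.name for p in prospects if qualifies_as_lead(p)]
print(qualified)  # ['A']
```

In practice the engagement measurement, consent capture, and the manual cross-checking described in the release would each be considerably more involved; the sketch only shows the filtering step that sits between them.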
Industries served include cybersecurity, SaaS, finance, and other sectors where data sensitivity and decision complexity demand accuracy and accountability. As the demand generation landscape evolves, Vereigen Media is focused not just on adapting to change but on leading it. Discover how verified, privacy-first leads drive real results! Connect with Vereigen Media today!

About Vereigen Media
Vereigen Media is a global leader in B2B demand generation, delivering outcome-driven programs through verified content engagement, programmatic advertising, and event registration. With an unwavering commitment to first-party data, zero outsourcing, and full transparency, Vereigen Media empowers marketing and sales teams to connect with real decision-makers, compliantly and confidently.

Contact: Janvi Gandhi, Brand Marketing Manager, Vereigen Media LLC
Email: marketing@
Phone: +1 512-240-2212 (US)
Official Website:
A photo accompanying this announcement is available at


Time of India
21-05-2025
- Entertainment
- Time of India
Kidfluencers: Balancing compliance and reach
Highlights
- In India, 37% of Generation Alpha children aspire to become social media influencers, reflecting a significant shift in childhood ambitions from traditional careers to digital stardom.
- The number of kid influencers (under 16) on Instagram in India surged to 83,212 by March 2025, with a notable 41% growth since April 2024, predominantly featuring young girls.
- India's Digital Personal Data Protection Act aims to protect children under 18 from using social media, but enforcement remains a challenge, leading to potential vulnerabilities for young content creators.

First, they won the pageant stage; then they shone on the silver screen; now, these pint-sized stars dazzle through our smartphones. Welcome to the era of the 'kidfluencer'. In today's social media age, the playground has become more of a virtual concept, with the excitement of swings and slides replaced by shares and sponsorships. Toddlers unbox toys for views, children share the stage with sponsored content and family vlogs transform bedtime routines into branded narratives.

Complex world
This is the evolving reality for today's kidfluencers. Some have barely hit adolescence and yet, they are already building brands and commanding audiences in millions, while navigating the complex world of content creation, parental management and online scrutiny. Dreams of becoming an astronaut have been replaced by ambitions of social media stardom. According to a 2024 survey by US-based tech website Hosting Advice, 37% of Gen Alpha kids (born 2010-2025) in India aspire to become social media influencers. While precise figures can fluctuate, industry reports indicate that their primary revenue stream comes from sponsored content. In the US, brands allocate budgets ranging from $10,000 to $20,000 per sponsored post or video from successful kidfluencers. In fact, even a nano-influencer can get $600 per post. Following criticism, Instagram and TikTok have made it mandatory for users to be at least 13 years old.
But this rule is easily circumvented as children are already building online empires, guided by their parents, via 'momfluencers' or 'momagers'. Concerns about the safety of these youngsters have led governments across the world to take action, from Europe's strict GDPR to the US tightening controls on TikTok and child data collection. France has passed a law that safeguards child influencers' earnings and limits their working hours, while Australia has banned all children under 16 from using social media. In India, however, regulations remain a step behind, leaving young content creators vulnerable. These dynamics, fraught with both opportunities and pitfalls, echo the unsettling narratives explored in shows such as Adolescence and Bad Influence, where the digital world casts long shadows on young lives. What are the consequences of these blurred lines between childhood and online commerce?

Cradle to content
India's digital landscape is teeming with young talent. According to influencer marketing platform Qoruz, the number of kid influencers (under 16) on Instagram in India reached 83,212 by March 2025, with a 41% growth from April 2024. These creators, predominantly girls (68.69%), command an average engagement rate of 3.17% and a reach of 1,20,000 per influencer. Micro-influencers (10K to 100K followers) dominate at 59.15%, reflecting a broad base of niche, engaged communities. Anantya Anand, better known as MyMissAnand, embodies the rise of kidfluencers in India. 'I enjoy being in front of the camera,' she says, recalling how she began as a four-year-old encouraged by her mother. Anantya was eight when she got her first brand deal; today she's 16 and boasts deals with Disney and Nestlé. 'It's a full-time job managing her account's content,' says her mother, Nisha Topwal. 'It's still a hit and trial strategy for us.' For many others in Anantya's space, the journey started with a love for performing, later guided by digital-savvy parents.
Alongside kidfluencers are the 'momfluencers', whose parenting content often features their children. According to Qoruz, India had over 3,79,265 parenting influencers on Instagram by March 2025. Women continue to lead this space, accounting for 63.59% of creators, reflecting their role as primary content drivers and household decision-makers. Avantika Bahuguna, a momfluencer and founder of Momsleague, says that her teenaged daughter occasionally features in her content, but only with her consent. As platforms tighten age restrictions, Bahuguna sees an opportunity: 'It's a great time to shift the spotlight back on us,' she says.

Where law meets likes
India's legal framework for social media age limits is evolving. The Digital Personal Data Protection Act (DPDPA) bars children under 18 from using social media. 'Parental consent will also be required, but currently all this has not been enforced,' says Mallika Noorani, senior partner at Parinam Law Associates. Until the DPDPA is notified, platforms such as Meta and YouTube rely on their global 'Terms of Use', setting 13 as the minimum age. Noorani notes, 'If a local law sets a higher age limit, that would take precedence.' This reliance on platform guidelines often leads to workarounds. Anantya Anand's account, for instance, is managed by her mother, Nisha. 'In India, I think it's after 16 or 18 that she can manage her own account, but currently it's managed by me,' Nisha explains. Rules that aim to protect kids' privacy and safety also end up being a challenge for creators and brands. For instance, YouTube's 'Made for Kids' label restricts personalised ads, affecting monetisation. Meta's stricter privacy settings limit the reach and engagement of young influencers. Ethical responsibility is also key for brands targeting children.
Manisha Kapoor, CEO and secretary general of the Advertising Standards Council of India (ASCI), stresses the importance of clear privacy laws, explicit consent for data collection and responsible, transparent marketing practices. 'This includes strict adherence to age restrictions on platforms and a shared responsibility between brands and guardians,' Kapoor states.

Brands on alert
Brands like Funskool India rely on kid and momfluencers for authentic storytelling. Its Instagram and Facebook campaigns focus on mothers to promote developmental play, aligning with child safety values. Philip Royappan, GM of sales and marketing, Funskool, believes that stricter age limits won't affect their strategy, as 'our collaborations are centred around parents'. The brand is also exploring content hubs and YouTube playlists for parents to stay compliant. 'Our campaigns focus on play value, not unboxing hype,' he adds, underscoring their commitment to responsible content. Meanwhile, since 2023, Unilever has globally tightened its rules on food and drink advertising, pledging to stop targeting children under 16 across all media. Its statement reads: 'A key part of this updated policy is the ban on using influencers, celebrities or social media stars who are under 16 or primarily appeal to that age group. The move aims to promote responsible marketing in the digital age and support parents.' Adding a broader perspective, Praanesh Bhuvaneswar, CEO of Qoruz, notes a global shift: 'Campaign briefs now often include explicit clauses about age verification and consent.'

The price of influence
But what about the psychological risks of social media on young minds? 'Being part of social media content from a young age can shape how they see themselves,' says Bahuguna. Stricter age limits could alleviate this pressure, giving kids the space to grow without a digital footprint.
Anantya's detachment from comments ('I've never really cared enough to go through them') suggests a coping mechanism, but not all kids may be as insulated as she is. In Adolescence, for instance, the lead character was disturbed by online comments, showing once again how deeply such interactions can affect young people's mental health. As social media age limits tighten, the influencer ecosystem is at a crossroads. Kid influencers like Anantya may need to pause or pivot. Brands are already adapting with parent-focused campaigns and alternative platforms, ensuring compliance without sacrificing reach. But while platforms like Meta and YouTube change their global guidelines, India's regulatory bodies will need to consider the implications and the delicate balance between opportunity and exploitation for these young digital stars. Bhuvaneswar says, 'The future of influencer marketing will not just be about engagement metrics, but also about ethical storytelling and regulatory alignment.' In India, where the kid influencer boom shows no signs of slowing, these changes signal a shift toward a safer, more responsible digital space, one where tiny stars can shine without burning out.


Mint
20-05-2025
- Mint
Rahul Matthan: Don't let data privacy safeguards work against us
The first country to seriously address the issue of protecting digital personal data was the United States of America. In a report titled Records, Computers and the Rights of Citizens issued in 1973, it set out a list of data protection principles called the Fair Information Practice Principles (FIPPs). FIPPs required organizations to provide notice before collecting personal data and seek consent before processing it. Only as much personal data as was necessary to achieve the specified purpose could be collected, and it could only be used for the purpose specified. Organizations had to keep personal data accurate, complete and up to date, and give individuals the ability to access and amend it as required. If all this sounds familiar, it is because it is. These principles have been incorporated into all modern data protection laws—from Europe's General Data Protection Regulation to India's Digital Personal Data Protection Act. It is where concepts like notice and consent, purpose specification, use limitation, data minimization and retention restriction come from, and it is remarkable how 50 years after they were first conceptualized, they continue to be used to protect personal privacy. Or do they?

In the 1970s, our ability to process data was limited, constrained by computational power and storage capacity. As a result, very few organizations could afford to process personal information at a scale that would affect our privacy. Since companies had to be selective about what data they collected and used, it made sense to require them to constrain the uses to which they put the data and for how long they retained it. Today, these constraints are no longer relevant. All organizations, regardless of their size or sphere of activity, use data in all aspects of their operations. Global data creation grew from about two zettabytes in 2010 to over 160 zettabytes projected in 2024.
As a result, concepts like notice and consent are becoming increasingly meaningless, as it is no longer feasible to provide notice of all the different types of data processed or the many uses to which it will be put. Advances in artificial intelligence (AI) have further complicated the issue. If we want to benefit from all that AI has to offer, we need to give these systems access to our personal data so that they can draw inferences from it. With the ability to analyse the cumulative record of all the data that our personal fitness trackers have recorded about us, for example, AI systems may be able to use that information to infer our likelihood of contracting a disease. Those who are currently unable to access credit because they lack the traditional indicators of creditworthiness may be able to provide other indicators of their ability to repay a loan if AI systems are allowed to analyse their personal information. If we use AI systems for these purposes today, we are likely to run afoul of one or more of the data protection principles. Take, for instance, purpose specification. Since most AI use cases may not even have been conceivable when the data in question would have been collected, it is unlikely that our consent would have been obtained for it to be used in that manner. Deploying AI for these use cases would most likely require seeking fresh consent from data principals. The other concern is around retention. Since data is only permitted to be retained for as long as necessary to serve the purpose for which it was collected, organizations that comply with the mandates of global data protection regulations have set up systems to delete personal data once their purpose has been served.
In the case of healthcare data, this is unfortunate because medical AI applications rely on access to health data records over as long a period of time as possible in order to establish trends for current parameters to be evaluated against baselines. If hospitals have to delete this data as soon as the immediate purpose is served, these opportunities will not be realized. Finally, there is the principle of data minimization, which requires us to only collect as much data as is strictly required to fulfil the specified purpose. Since AI systems perform better if they have more data on which they can be trained, the minimization obligation makes less data available for training and, as a result, limits the benefits that AI can bring. The approach taken by the US FIPPs to minimize the risk of privacy harm limited the amount of personal data in the hands of the organizations that processed it. At the time, this was a reasonable approach as there was no additional benefit to be gained by allowing corporations to store our data. This is no longer the case. The more data that AI systems have, the better the outcomes they produce. As a result, any approach that simply limits the data these systems can use trades the benefits that could accrue from data analysis for the mitigation of privacy-related risks. I have, in previous columns, written about how new technological approaches, such as data anonymization and privacy-enhancing technologies, as well as institutional measures like data trusts, can offer us a new approach. If we can deploy federated learning and confidential compute systems, we should be able to use personal data without violating personal privacy. Our current approach to data protection is now more than half a century old. It is no longer fit for purpose. We need to learn to use personal data for our benefit without causing privacy harms.
The author is a partner at Trilegal and the author of 'The Third Way: India's Revolutionary Approach to Data Governance'. His X handle is @matthan.