Why China has asked its officials to cut spending on alcohol, cigarettes and travel

Time of India | 19-05-2025
The directive specifically addressed spending on cigarettes, alcohol and receptions. (AI image)
China has instructed government officials nationwide to reduce unnecessary expenditure on travel, meals and office facilities, signalling President Xi Jinping's commitment to fiscal prudence amidst challenging economic conditions affecting government finances.
The regulations reinforce Xi's initiative for officials to reduce expenditure, particularly as declining land sale revenues constrain budgets whilst local administrations face substantial debt obligations.
In late 2023, central authorities instructed government officials to embrace austerity measures, strengthening Xi's drive against corruption and ostentatious displays of affluence.
According to the official Xinhua News Agency on Sunday, the directive from the government and Communist Party leadership specifically addressed spending on cigarettes, alcohol and receptions.
The announcement emphasises the importance of practising careful spending and frugality whilst condemning excessive expenditure. Xinhua quoted the directive as stating that "waste is shameful and thrift is glorious."
On Monday, consumer staples stocks saw a significant decline within the CSI 300 Index's sub-categories, dropping 1.7%. Notably, Kweichow Moutai Co. fell 2.4%, its largest decline in six weeks, according to a Bloomberg report.
Last year, Beijing launched its most comprehensive programme in recent years to tackle local-authority debt concerns. The initiative sought to minimise default risks and enable local governments to continue supporting economic development.
Meanwhile, China recorded 5.4% economic growth in the first quarter, surpassing projections. Officials maintain their optimism about reaching Beijing's growth objective of approximately 5% for the year, although economists caution that US tariffs could impact this progress.
Concerned about the negative effects of tariffs on economic performance, the government introduced stimulus initiatives earlier this month, encompassing reductions in interest rates and substantial liquidity support.
The monetary policy measures were implemented prior to the China-US trade agreement, which was finalised following crucial negotiations in Geneva and marked a notable easing of the prolonged period of rising friction.

Related Articles

Nearly 60% of Indian organisations lack AI governance policy: Report

The Hindu | an hour ago

Nearly 60% of Indian organisations either lack an artificial intelligence (AI) governance policy or are still in the process of developing one, per a new IBM report. The report, titled 'Cost of a Data Breach Report 2025', highlights a worrying gap between rapid AI adoption and lagging security controls, raising fresh concerns about the strategic readiness of Indian enterprises to handle AI-related cyber threats.

IBM's report also revealed that the average cost of a data breach in India has reached an all-time high of ₹220 million in 2025, marking a 13% increase over last year's figure of ₹195 million. The surge reflects rising cyber risks across industries, particularly as organisations integrate AI tools without corresponding investments in governance and access control. Globally, IBM noted that while AI adoption is booming, it is increasingly outpacing security protocols, making ungoverned AI systems attractive targets for threat actors.

In India, only 37% of surveyed organisations have implemented AI access controls, and just 42% reported having policies to manage or detect 'shadow AI', the unauthorised use of AI tools and applications. Notably, shadow AI emerged as one of the top three cost drivers of breaches, adding an average of ₹17.9 million to the total impact. Despite this, most organisations have yet to adopt dedicated safeguards to monitor or contain these hidden vulnerabilities.

Phishing continues to be the most common cause of data breaches in India, accounting for 18% of incidents, followed closely by third-party and supply chain compromises (17%) and vulnerability exploitation (13%). Among industries, the research sector bore the highest average breach cost at ₹289 million, followed by transportation (₹288 million) and industrial organisations (₹264 million). Despite evidence that AI-driven security tools can more than halve breach costs, 73% of Indian organisations surveyed reported limited or no use of AI-based security automation.

Meta contractors review private AI chats, sometimes seeing user names and photos: Report

India Today | an hour ago

Some conversations you've had with Meta's AI may not have been as private as you thought. According to a report by Business Insider, contract workers hired to train Meta's AI systems have reviewed thousands of real user chats, and in many cases those conversations included names, email addresses, phone numbers, selfies, and even explicit images.

Four contractors told the publication that they were regularly exposed to personal information while working on Meta AI projects. These individuals were reportedly hired through platforms called Outlier (owned by Scale AI) and Alignerr. The projects they worked on aimed to improve the quality and personalisation of Meta's AI responses, a process that involves reviewing real interactions between users and AI-powered chatbots. The contractors said they often came across highly personal conversations, ranging from therapy-like sessions and rants about life to flirty or romantic exchanges. One worker claimed that up to 70 per cent of the chats they reviewed included some form of personally identifiable information. Some users reportedly sent selfies or explicit images to the chatbot, believing the conversation to be private.

Material seen by Business Insider reportedly showed that in some cases, Meta itself provided background user data, like names, locations, or hobbies, to help the AI personalise responses. In other cases, users voluntarily gave up this information during conversations, despite Meta's privacy policy warning users not to share personal details with the chatbot. One particularly concerning example described in the report involved a sexually explicit conversation with enough personal information for the reporter to locate a matching Facebook profile.

Meta, which owns platforms like Facebook and Instagram, acknowledged that it does review user interactions with its AI. A spokesperson told Business Insider that it has "strict policies" governing who can access personal data and that contractors are instructed on how to handle any information they may come across. 'While we work with contractors to help improve training data quality, we intentionally limit what personal information they see,' the spokesperson reportedly said.

Contractors, however, said projects run by Meta exposed more unredacted personal data than those of other tech companies. One project called Omni, run by Alignerr, aimed to boost engagement on Meta's AI Studio. Another project called PQPE, operated via Outlier, encouraged AI responses to reflect user interests pulled from past conversations or social media profiles.

This isn't the first time Meta has come under scrutiny for its data practices. The company's history includes the 2018 Cambridge Analytica scandal and multiple reports over the years about contractors listening in on voice recordings without proper safeguards. While reviewing AI conversations with human help is common in the tech industry, Meta's track record has raised added concern.

Tenable expands exposure management platform to secure enterprise AI

Time of India | an hour ago

Tenable®, the exposure management company, today announced a significant expansion of its market-leading Tenable One platform with the launch of Tenable AI Exposure, a comprehensive solution to see, manage and control the risks introduced by generative AI. Unveiled at Black Hat USA 2025, these new capabilities provide an end-to-end approach that moves beyond discovery to include risk management and policy enforcement for enterprise AI platforms like ChatGPT Enterprise and Microsoft Copilot.

As businesses race to adopt generative AI to boost productivity, they are inadvertently creating a new, complex, and often invisible frontier of exposure. Security teams lack visibility into how employees use these powerful tools, which sensitive data might be exposed, and how they could be manipulated by attackers. Tenable now offers a single, unified solution that addresses the entire lifecycle of AI security challenges.

'Simply discovering shadow AI isn't enough. A true exposure management strategy requires an end-to-end solution that lets organisations discover their entire AI footprint, manage the associated risks, and govern its use according to their policies. That's exactly what we are delivering today,' said Steve Vintz, Co-CEO and CFO, Tenable. 'With Tenable AI Exposure, we're giving organisations the visibility and control they need to safely embrace the promise of generative AI without introducing unacceptable risk. This is a critical step in the evolution of exposure management.'

With an agentless deployment that provides coverage in minutes, Tenable's unified AI Exposure capabilities deliver:

• Comprehensive AI Discovery: Unifies insights from Tenable AI Aware, scanning from AI Security Posture Management (AI-SPM) and continuous monitoring to discover and identify all sanctioned and unsanctioned AI usage across the enterprise environment. This provides a deep understanding of user interactions, data flows and activities that might introduce risk.

• Exposure Management and Prioritisation: Leverages AI-SPM to help organisations identify, prioritise and manage the risk of AI exposure, including from sensitive data leakage (PII, PCI, PHI), misconfigurations and unsafe external integrations.

• Governance and Control of AI Usage: Enforces security guardrails and organisational policies to control how AI is used, preventing risky user behaviours and mitigating novel threats such as prompt injections, jailbreaks, and malicious output manipulation.

Tenable AI Exposure integrates into the Tenable One platform, providing a unified view of risk across the entire attack surface. With the combination of Tenable AI Aware, AI-SPM and the new governance features, customers will gain comprehensive capabilities to discover, manage and secure enterprise AI usage.
