
Latest news with #AI-assisted

Boosting AI literacy for professional communication

New Straits Times

2 days ago



THE British Council is strengthening its corporate training strategy across the Asia-Pacific region to address the growing impact of artificial intelligence (AI) on workplace communication. According to David Neufeld, Corporate English Solutions (CES) Sales Head for the region, professionals are increasingly relying on AI-generated writing without adequate review, which can result in issues with clarity, relevance and factual accuracy.

"We are not training people how to use AI. Rather, we are trying to help them with what AI outputs, to be better business communicators," said Neufeld.

He noted that many corporate clients, particularly in the banking, financial services and insurance sectors, now have internal AI tools. However, employees often forward AI-generated content without editing, even when it contains grammatical errors or irrelevant details. This overreliance, Neufeld warned, creates a risk of miscommunication in high-stakes situations.

The British Council, he explained, trains professionals to assess, refine and apply AI-generated content using structured frameworks designed for the workplace. These frameworks provide support in areas such as business writing, interpersonal communication, influencing, and trust-building techniques.

"We want participants to think critically about what AI produces. Is it accurate? Is it appropriate for the audience? Can it stand up to scrutiny?" Neufeld added.

He emphasised that professionals must also learn to navigate AI's limitations, including outdated data, hallucinations and factual inaccuracies, particularly when handling sensitive or time-critical communication.

At the British Council's Lunch and Learn 2025 session held on 10 July, participants were introduced to three targeted training modules aimed at building communication confidence in AI-assisted environments.
The first session taught participants how to use the Point, Reason, Example, Point (PREP) structure to organise AI-generated text into persuasive messages. The second focused on negotiation skills, using frameworks such as Best Alternative to a Negotiated Agreement (BATNA), Bottom Line, and Most Desirable Outcome, with AI used to simulate role plays. The final session applied the British Council's six Cs—clear, correct, concise, coherent, complete and courteous—to improve clarity and tone in AI-written content.

Neufeld said these frameworks help participants keep human judgement at the centre of communication. "AI is useful for drafting and simulating ideas, but humans must still decide what to say, how to say it, and whether it's appropriate," he said.

British Council CES operates on a business-to-organisation model and delivers training to clients in the corporate, government and education sectors. Malaysia and Singapore are currently two of its largest markets in Southeast Asia, although demand in Thailand, Vietnam and Indonesia is on the rise.

Neufeld, who has lived in Malaysia since 2010, began his tenure with the British Council as a corporate trainer and now leads CES across the Asia-Pacific region. He said demand for AI-related training has grown steadily over the past two years, as organisations race to integrate generative tools into their operations.

The British Council's observations align with broader trends among learning and development (L&D) teams in the region. AI is increasingly being used to create personalised assessments, enhance learner engagement, automate feedback, and deliver training at scale across multiple locations.

However, the British Council cautions that challenges remain. Neufeld said the absence of clear organisational policies, ethical concerns, and a loss of the human touch in communication are among the top risks raised by clients.
"Some worry AI might replace certain roles; others are concerned about bias, or using inaccurate data that goes unchallenged," he said.

To adapt, the British Council is placing greater emphasis on developing communication fundamentals and soft skills with its corporate clients. According to Neufeld, these include active listening, clarity in messaging, critical thinking, emotional intelligence, and the ability to adapt tone to suit the audience and situation.

The British Council has also identified creativity, time management and conflict resolution as vital skills for navigating increasingly complex and fast-changing workplaces. These areas are integrated into CES training programmes, alongside language competency and task-based communication models.

Looking ahead, the British Council anticipates broader workplace transformation over the next five to ten years, with AI serving as a central driver. Shifts in job roles, workforce composition, economic uncertainty, and rising expectations around employee well-being are all contributing to a new approach to learning.

Neufeld said the British Council's corporate clients are also becoming more conscious of the reputational risks posed by poor communication. "A bad message can hurt trust. Whether written by a person or a machine, it still reflects your brand," he said.

In response, British Council Malaysia has incorporated more digital tools into its delivery model while maintaining interactive and context-based learning. Clients are increasingly requesting hybrid solutions that combine face-to-face workshops with online modules and follow-up coaching.

The British Council has stated that its role is not to replace corporate L&D teams, but to support them in ensuring communication remains a core skill in the age of automation. "Even with AI doing the heavy lifting in some areas, we still need people who can lead with empathy, explain ideas clearly, and respond in real time," said Neufeld.
The British Council is the United Kingdom's international organisation for cultural relations and educational opportunities, providing services in English language education, examinations, arts and cultural exchange. Founded in 1934 and present in over 100 countries, the British Council builds lasting trust and cooperation through language, culture and global partnerships. Now in its 90th year, the organisation continues to evolve, helping individuals and institutions around the world connect, learn and collaborate with the UK to foster peace, prosperity and shared progress.

TikTok Germany moderators raise alarm over layoff plans

Time of India

4 days ago



Berlin, Jul 17, 2025 - Content moderators at the German branch of social media giant TikTok sounded the alarm Thursday about what they say is a plan to replace them with artificial intelligence, potentially putting platform users at risk.

Around 50 people gathered for a protest near the offices of TikTok Germany, among them some of the 150-strong "trust and safety" department in Berlin, who say management is threatening to fire them en masse. Behind a banner reading "we trained your machines, pay us what we deserve", the protestors said TikTok had already overseen one round of layoffs last year and demanded it reverse plans to fully close the department.

The content moderators are tasked with keeping content such as hate speech, misinformation and pornography off the platform, which claimed more than 20 million users in Germany as of late last year.

The row in Germany comes amid a global trend of social media companies reducing their use of human fact-checkers and turning to AI. In October, TikTok -- which has 1.5 billion users worldwide and is a division of Chinese tech giant ByteDance -- announced hundreds of job losses worldwide as part of a shift to AI-assisted content moderation. TikTok did not reply to an AFP request for comment.

The moderators at TikTok Germany are being supported by the union, which says that the company has refused to negotiate and that strike action is being prepared. One of the moderators, 32-year-old Benjamin Karkowski, said that staff had been "shocked" when they learned of TikTok's current plans via a message from management.

Another of the moderators, 36-year-old Sara Tegge, says that the artificial intelligence used by the company "cannot tell whether content discriminates against certain groups and it can't judge the danger of certain content". She cited an example in which the AI flagged innocuous content about Berlin's annual LGBT+ pride as breaking TikTok's guidelines on political content. If the company moves ahead with its plans, she "certainly fears" users may be exposed to greater risk.

Also lending support at Thursday's demonstration was Werner Graf, leader of the Green party's lawmakers in Berlin's state assembly. "These people have been fighting so that the internet isn't permanently overwhelmed" with "fake news and hate speech", he said. "We in the political arena must make clear that checking content... can't simply be left up to AI; we must legislate to make sure it's done by humans," he went on.

YouTuber loses 11 kg in 46 days with ChatGPT's prompts; shares daily routine, no trainers or fad diets involved

Time of India

4 days ago



In an inspiring transformation story that's capturing global attention, a US-based YouTuber, Cody Crone, has lost 11 kilograms in just 46 days—all without hiring a personal trainer or following any fad diet. What sets his journey apart? He relied entirely on ChatGPT, OpenAI's artificial intelligence chatbot, to design a custom fitness and diet plan tailored to his lifestyle.

AI as a personal coach

Cody Crone, a 56-year-old father of two from the Pacific Northwest, took to YouTube to document his remarkable progress. Frustrated by his previous physical condition and seeking a sustainable solution, he turned to AI for help. Using ChatGPT, he created a comprehensive routine centered around clean eating, exercise, and disciplined habits.

Weighing 95 kg at the start, Mr. Crone dropped to 83 kg by day 46—without taking shortcuts or relying on weight-loss medications like Ozempic. Instead, the AI-assisted approach provided a well-rounded strategy that focused on long-term health, not just rapid weight reduction.

Diet and nutrition strategy

His AI-generated meal plan focused on whole, nutrient-dense foods.
He eliminated processed items, refined sugar, seed oils, and dairy, instead opting for high-quality ingredients like:

  • Grass-fed meats
  • Jasmine rice
  • Steel-cut oats
  • Olive oil
  • Organic greens

To support recovery and muscle growth, Crone supplemented his diet with creatine, beta-alanine, whey protein, collagen, and magnesium—all suggested by the AI based on his training intensity and goals.

Structured daily routine

His day began at 4:30 am, followed by a 60–90-minute workout in his home garage gym, which he equipped with kettlebells, resistance bands, and a weight vest. He worked out six days a week, demonstrating commitment and consistency. Other key aspects of his routine included:

  • Drinking up to 4 liters of water daily (with a cut-off in the early evening)
  • Strict sleep hygiene: blackout curtains, natural bedding, and zero screen time before bed
  • A spoonful of local raw honey before sleep to improve rest
  • Daily sunlight exposure in the morning
  • Tracking weight daily so ChatGPT could adjust his plan in real time

Real results

Unlike many weight loss journeys that rely on pharmaceutical aids, Crone made it clear that he avoided any medications. His focus was entirely on natural, sustainable methods powered by data-driven AI advice. The results were not just physical—he also reported reduced inflammation, better joint health, increased muscle strength, enhanced mental clarity, and improved self-confidence.

Doppel Expands Executive Leadership Team, Welcoming Bobby Ford as Chief Strategy and Experience Officer, Amongst Other Key Executive Hires

Business Wire

01-07-2025



SAN FRANCISCO--(BUSINESS WIRE)-- Doppel, the AI-powered social engineering defense platform, has hired experienced Chief Information Security Officer (CISO) Bobby Ford to serve as Chief Strategy and Experience Officer. In addition, further expanding its executive bench, Doppel welcomes Alex Hu as Vice President (VP) of Finance and Alyssa Smrekar as Senior Vice President (SVP) of Marketing.

"My Co-Founder, Rahul Madduluri, and I are thrilled to welcome Alex, Alyssa and Bobby to our executive leadership team at Doppel," said Kevin Tian, CEO of Doppel. "Their expertise in finance, marketing and cybersecurity is invaluable as we focus on delivering more value to our customers, push the frontier of technology and execute on our ambitious plans. We've built the best executive team in cybersecurity, and they will help drive our culture of customer obsession and relentless innovation."

Ford brings nearly three decades of experience protecting some of the world's most complex and operationally intensive enterprises. His career began in the military as a founding member of the Pentagon Computer Incident Response Team, and he has since served as the first CISO at Abbott Labs, CISO for Unilever and, most recently, SVP and Chief Security Officer at Hewlett Packard Enterprise. Known for his collaborative style and empathetic leadership, Ford fosters an inclusive culture that empowers security organizations to excel. At Doppel, Ford will ensure an exceptional customer experience, be an industry advocate and deliver thought leadership as the company delivers on its mission to protect brands, executives and consumers from adaptive, AI-assisted social engineering threats.

"Joining Doppel is an incredibly compelling opportunity as we are creating an entirely new category with an unmatched social engineering defense platform," said Ford. "After decades as a CISO, I've seen firsthand how social engineering remains the most persistent and sophisticated threat facing organizations. Doppel's vision is to address this challenge at its core, by redefining how we defend against human-targeted attacks across email, social media, mobile and other attack surfaces."

Hu will oversee financial planning, investor relations and more, bringing senior leadership experience from organizations such as BigID and Dataiku, where he helped raise $1B+ in venture capital and scale both companies into centaurs. In addition, overseeing all facets of marketing, Smrekar brings a strong track record of building and scaling comprehensive marketing teams. Prior to joining Doppel, Smrekar served as the VP of Brand Marketing and Interim CMO at dbt Labs, and led Brand and Corporate Marketing teams at Intercom and Okta.

This news comes on the heels of Doppel's $35 million Series B funding round, announced alongside substantial company growth. In addition, Doppel recently opened its second office, located in New York, and surpassed 100 full-time employees. Over the last several months, the company expanded across all departments, hiring a full executive team of leaders who bring extensive experience to further drive company growth. Additional hires include SVP of Sales Mike Ferrari and VP of Customer Success and Security Operations Billy Jennings.

To learn more about Doppel and how the company is redefining social engineering defense, visit the company's website; Doppel is also actively hiring.

Our enterprise-ready security platform is built to neutralize social engineering threats targeting your executives, employees, and third parties before they damage your business. Doppel Vision doesn't just play whack-a-mole with individual attacks, it links threats together, showing you threat actors' malicious infrastructure, protecting your brand and your customers against everything from phishing and fraud to deepfakes and brand impersonation.

AI drives 80 percent of phishing with USD $112 million lost in India

Techday NZ

01-07-2025



Artificial intelligence has become the predominant tool in cybercrime, according to recent research and data from law enforcement and the cybersecurity sector.

AI's growing influence

A June 2025 report revealed that AI is now utilised in 80 percent of all phishing campaigns analysed this year. This marks a shift from traditional, manually created scams to attacks fuelled by machine-generated deception. Concurrently, Indian police recorded that criminals stole the equivalent of USD $112 million in a single state between January and May 2025, attributing the sharp rise in financial losses to AI-assisted fraudulent operations.

These findings are reflected in the daily experiences of security professionals, who observe an increasing use of automation in social engineering, malware development, and reconnaissance. The pace at which cyber attackers are operating is a significant challenge for current defensive strategies.

Methods of attack

Large language models are now being deployed to analyse public-facing employee data and construct highly personalised phishing messages. These emails replicate a victim's communication style, job role and business context. Additionally, deepfake technology has enabled attackers to create convincing audio and video content. Notably, an incident in Hong Kong this year saw a finance officer send HK$200 million after participating in a deepfake video call bearing the likeness of their chief executive.

Generative AI is also powering the development of malware capable of altering its own code and behaviour within hours. This constant mutation enables it to bypass traditional defences like endpoint detection and sandboxing solutions. Another tactic, platform impersonation, was highlighted by Check Point, which identified fake online ads for a popular AI image generator. These ads redirected users to malicious software disguised as legitimate installers, merging advanced loader techniques with sophisticated social engineering.
The overall result is a landscape where AI lowers the barriers to entry for cyber criminals while amplifying the reach and accuracy of their attacks.

Regulatory landscape

Regulators are under pressure to keep pace with the changing threat environment. The European Union's AI Act, described as the first horizontal regulation of its kind, became effective last year. However, significant obligations affecting general-purpose AI systems will begin from August 2025. Industry groups in Brussels have requested a delay on compliance deadlines due to uncertainty over some of the rules, but firms developing or deploying AI will soon be subject to financial penalties for not adhering to the regulations.

Guidance issued under the Act directly links the risks posed by advanced AI models to cybersecurity, including the creation of adaptive malware and the automation of phishing. This has created an expectation that security and responsible AI management are now interrelated priorities for organisations. Company boards are expected to treat the risks associated with generative models with the same seriousness as data protection or financial governance risks.

Defensive measures

A number of strategies have been recommended in response to the evolving threat environment. Top of the list is the deployment of behaviour-based detection systems that use machine learning in conjunction with threat intelligence, as traditional signature-based tools struggle against ever-changing AI-generated malware. Regular vulnerability assessments and penetration testing, ideally by CREST-accredited experts, are also regarded as essential to expose weaknesses overlooked by both automated and manual processes.

Verification protocols for audio and video content are another priority. Using additional communication channels or biometric checks can help prevent fraudulent transactions initiated by synthetic media.
Adopting zero-trust architectures, which strictly limit user privileges and segment networks, is advised to contain potential breaches. Teams managing AI-related projects should map inputs and outputs, track possible abuse cases, and retain detailed logs in order to meet audit obligations under the forthcoming EU regulations.

Staff training programmes are also shifting focus. Employees are being taught to recognise subtle cues and nuanced context, rather than relying on spotting poor grammar or spelling mistakes as indicators of phishing attempts. Training simulations must evolve alongside the sophistication of modern cyber attacks.

The human factor

Despite advancements in technology, experts reiterate that people remain a core part of the defence against AI-driven cybercrime. Attackers are leveraging speed and scale, but defenders can rely on creativity, expertise, and interdisciplinary collaboration.

"Technology alone will not solve AI-enabled cybercrime. Attackers rely on speed and scale, but defenders can leverage creativity, domain expertise and cross-disciplinary thinking. Pair seasoned red-teamers with automated fuzzers; combine SOC analysts' intuition with real-time ML insights; empower finance and HR staff to challenge 'urgent' requests no matter how realistic the voice on the call," said Himali Dhande, Cybersecurity Operations Lead at Borderless CS.

The path ahead

There is a consensus among experts that the landscape has been permanently altered by the widespread adoption of AI. It is increasingly seen as necessary for organisations to shift from responding to known threats to anticipating future methods of attack. Proactive security, embedded into every project and process, is viewed as essential not only for compliance but also for continued protection.

Borderless CS stated it "continues to track AI-driven attack vectors and integrate them into our penetration-testing methodology, ensuring our clients stay ahead of a rapidly accelerating adversary. Let's shift from reacting to yesterday's exploits to pre-empting tomorrow's."
