AI won't replace you — but a leader who uses it might

Bangkok Post, 29-06-2025
Artificial intelligence is changing how we work and how we lead. Whether you're running a startup, managing a team, or growing your career in Thailand, the question is no longer 'Should I use AI?' It's 'How can I lead better with it?'
Many leaders still think AI is just for IT or tech experts. That's no longer true. AI is now a tool for everyone — from HR to sales, from strategy to operations.
And using AI well doesn't mean you need to code or build apps. It means using AI to ask smarter questions, make faster decisions, and lead your team through change with more confidence.
In short: Leading with AI is more about mindset than technology.
How Good Leaders Are Using AI
I work with leaders across Asia, and I've seen how some are using AI in powerful ways:
In sales, they use AI to understand what customers are feeling, then help their teams respond better.
In strategy, they use AI to test different ideas before making big decisions.
In HR, they use AI to spot signs of stress or burnout before people leave.
These leaders aren't using AI to replace people — they're using it to help people do better work.
But Be Careful
AI is a powerful tool. But if you're not a good leader, AI won't help much. In fact, it could make things worse.
If you don't listen to your team, AI can make you feel even more distant. If you avoid tough choices, AI won't make them for you. And if you use data to push your own opinions, people will stop trusting you.
That's why great AI leaders are also self-aware. They know when to trust the data and when to trust their people.
How to Start Leading with AI
You don't need to wait. Here are three simple ways to begin:
Ask better questions. Try using ChatGPT or other AI tools to explore ideas. Ask: 'What am I missing?' or 'What are other ways to see this?'
Practise with AI tools. Some tools let you simulate tough conversations or team decisions. Try them — it's a safe space to learn.
Be open with your team. Talk about AI. Share what you're learning. Help your team feel included, not afraid.
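The "ask better questions" habit above can be sketched as a tiny helper that wraps a decision in the reframing questions the article suggests. This is an illustrative sketch only: the function name and the template list are my own, not part of any tool's API, and the output is meant to be pasted into ChatGPT or a similar assistant.

```python
def reframing_prompts(decision: str) -> list[str]:
    """Build 'ask better questions' prompts around a decision.

    The first two templates mirror the article's examples; the third
    is an assumed extra in the same spirit.
    """
    templates = [
        "What am I missing about: {d}?",
        "What are other ways to see this: {d}?",
        "What would a sceptic say about: {d}?",
    ]
    return [t.format(d=decision) for t in templates]

# Example: generate prompts for a real decision you are weighing.
for prompt in reframing_prompts("moving our sales team to a hybrid schedule"):
    print(prompt)
```

The point of keeping the questions in one place is consistency: the same reframes get applied to every decision, rather than only when a leader remembers to ask.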
It's Still About People
In Thailand, we value nam jai — kindness and human connection. That's not going away. In fact, it matters more now. As AI handles tasks, what makes us human — our judgement, care, and relationships — becomes more valuable.
The future of leadership is not humans versus AI. It's humans working with AI. The best leaders will be the ones who use AI to bring out the best in themselves — and their teams.
So if you've been waiting to get started, now is the time. AI is ready. The world is changing fast.
Are you ready to lead in a new way?
Contact the author at arinya.talerngsri@bts.com or visit her LinkedIn profile.

Related Articles

Artificial intelligence is not your friend

Bangkok Post, 3 days ago

Meta CEO Mark Zuckerberg and OpenAI's Sam Altman have been aggressively promoting the idea that everyone -- children included -- should form relationships with AI "friends" or "companions". Meanwhile, multinational tech companies are pushing the concept of "AI agents" designed to assist us in our personal and professional lives, handle routine tasks, and guide decision-making. But the reality is that AI systems are not, and never will be, friends, companions, or agents. They are, and will always remain, machines. We should be honest about that and push back against misleading marketing that suggests otherwise.

The most deceptive term of all is "artificial intelligence". These systems are not truly intelligent, and what we call "AI" today is simply a set of technical tools designed to mimic certain cognitive functions. They are not capable of true comprehension and are neither objective, fair, nor neutral. Nor are they becoming any smarter. AI systems rely on data to function, and increasingly, that includes data generated by tools like ChatGPT. The result is a feedback loop that recycles output without producing deeper understanding.

More fundamentally, intelligence is not just about solving tasks; it's also about how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains, such as processing large data sets, performing logical deductions, and making calculations.

Simply put, machines lack moral agency. Their behaviour is governed by patterns and rules created by people, whereas human morality is rooted in autonomy -- the capacity to recognise ethical norms and behave accordingly. By contrast, AI systems are designed for functionality and optimisation. They may adapt through self-learning, but the rules they generate have no inherent ethical meaning. Consider self-driving cars.
To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules to optimise travel time. If running over pedestrians would help achieve that goal, the car might do so, unless instructed not to, because it cannot understand the moral implications of harming people. This is partly because machines are incapable of grasping the principle of generalisability -- the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment depends on the ability to provide a plausible rationale that others can reasonably accept. These are what we often refer to as "good reasons". Unlike machines, humans are able to engage in generalisable moral reasoning and can therefore judge whether their actions are right or wrong.

The term "data-based systems" (DS) is thus more appropriate than "artificial intelligence", as it reflects what AI can actually do: generate, collect, process, and evaluate data to make observations and predictions. At their core, these are systems that use highly sophisticated mathematical processes to analyse vast amounts of data -- nothing more. Humans may interact with them, but communication is entirely one-way. DS have no awareness of what they are "doing" or of anything happening around them.

This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in domains where their capabilities exceed our own. But we must also actively manage and mitigate the ethical risks they present. Over the past two decades, Big Tech firms have isolated us and fractured our societies through social media -- more accurately described as "anti-social media", given its addictive and corrosive nature. Now, those same companies are promoting a radical new vision: replacing human connection with AI "friends" and "companions".
At the same time, these companies continue to ignore the so-called "black box problem": the untraceability, unpredictability, and lack of transparency in the algorithmic processes behind automated evaluations, predictions, and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.

The risks posed by DS are not theoretical. These systems already shape our private and professional lives in increasingly harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions. To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us. ©2025 Project Syndicate

Passwords under threat as tech giants seek tougher security

Bangkok Post, 13-08-2025

PARIS - Fingerprints, access keys and facial recognition are putting a new squeeze on passwords as the traditional computer security method -- but users hesitate over making the switch. "The password era is ending," two senior figures at Microsoft wrote in a July blog post. The tech giant has been building "more secure" alternatives to log in for years -- and has since May been offering them by default to new users. Many other online services -- such as artificial intelligence giant OpenAI's ChatGPT chatbot -- require steps like entering a numerical code emailed to a user's known address before granting access to potentially sensitive data.

"Passwords are often weak and people re-use them" across different online services, said Benoit Grunemwald, a cybersecurity expert with Eset. Sophisticated attackers can crack a word of eight characters or fewer within minutes or even seconds, he pointed out. And passwords are often the prize booty in data leaks from online platforms, in cases where "they are improperly stored by the people supposed to protect them and keep them safe", Grunemwald said. One massive database of around 16 billion login credentials amassed from hacked files was discovered in June by researchers from media outlet Cybernews. The pressure on passwords has tech giants rushing to find safer alternatives.

- Tricky switchover -

One group, the Fast Identity Online Alliance (FIDO), brings together heavyweights including Google, Microsoft, Apple, Amazon and TikTok. The companies have been working on creating and popularising password-free login methods, especially promoting the use of so-called access keys. These use a separate device like a smartphone to authorise logins, relying on a pin code or biometric input such as a fingerprint reader or face recognition instead of a password. Troy Hunt, whose website Have I Been Pwned allows people to check whether their login details have been leaked online, says the new systems have big advantages.
"With passkeys, you cannot accidentally give your passkey to a phishing site" -- a page that mimics the appearance of a provider such as an employer or bank to dupe people into entering their login details -- he said. But the Australian cybersecurity expert recalled that the last rites have been read for passwords many times before. "Ten years ago we had the same question... the reality is that we have more passwords now than we ever did before," Hunt said.

Although many large platforms are stepping up login security, large numbers of sites still use simple usernames and passwords as credentials. The transition to an unfamiliar system can also be confusing for users. Passkeys have to be set up on a device before they can be used to log in. Restoring them if a PIN code is forgotten or a trusted smartphone is lost or stolen is also more complicated than a familiar password reset procedure. "The thing that passwords have going for them, and the reason that we still have them, is that everybody knows how to use them," Hunt said. Ultimately the human factor will remain at the heart of computer security, Eset's Grunemwald said.
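Grunemwald's point about short passwords comes down to simple arithmetic: the number of candidates an attacker must try is the alphabet size raised to the password length. A minimal sketch of that calculation follows; the guess rate used here is an assumed figure for a fast offline attack against a weak hash, not a number from the article, and real-world rates vary enormously.

```python
def keyspace(length: int, alphabet: int = 95) -> int:
    """Number of possible passwords: alphabet ** length.

    95 approximates the printable ASCII set; pass alphabet=26
    to model a lowercase-letters-only password.
    """
    return alphabet ** length

def worst_case_seconds(length: int, alphabet: int = 95,
                       guesses_per_second: float = 1e10) -> float:
    # 1e10 guesses/s is an assumed illustrative rate, not a measured one.
    return keyspace(length, alphabet) / guesses_per_second

# An 8-character lowercase-only password falls in roughly 21 seconds
# at the assumed rate:
print(worst_case_seconds(8, alphabet=26))

# Each extra character multiplies the attacker's work by the alphabet size:
print(keyspace(9) // keyspace(8))  # 95
```

The exponential growth is also why passkeys sidestep the problem entirely: there is no short human-memorable secret to brute-force in the first place.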

Musk accuses Apple of unfairly favouring OpenAI on iPhone

Bangkok Post, 12-08-2025

Elon Musk lashed out against Apple Inc's app store practices late on Monday, accusing the iPhone maker of favouring OpenAI. The billionaire founder of xAI Holdings, which now houses the Grok artificial intelligence team and X social network, said Apple makes it impossible for anyone other than OpenAI to reach the top of the App Store charts, a sought-after global spotlight for app developers. In a pinned post on his X account, Musk asked if Apple is 'playing politics' by not highlighting his products. His comments mark the biggest fight that Musk has picked since his confrontation with US President Donald Trump in June, when each man posted disparaging remarks about the other on his own social network. Apple and OpenAI representatives didn't immediately respond to requests for comment. Musk added that xAI will take legal action against what he deemed an antitrust violation. Apple and OpenAI — whose ChatGPT is the most-downloaded free iPhone app in the US, ahead of Grok at No.5 — have a partnership around AI built into the latest iPhones. Musk, 54, has a long-running feud with OpenAI Chief Executive Officer Sam Altman, which dates back to disagreements that led to their split after the two founded OpenAI together. Altman, 40, responded to the Monday claims of impropriety by turning the focus to how Musk manages the X network, suggesting he manipulates it to serve his personal interests. 'But OpenAI will just stay focused on making great products,' Altman added in his post on X.
