Who are we in the age of AI?


The Sun, 21-04-2025

WHO we are at work is deeply tied to how we see ourselves. Our identity is shaped by what we do, where we work, and how well we perform.
These factors influence our job satisfaction, motivation, and overall well-being. When we feel aligned with our work, we thrive. But when our professional identity is challenged, we risk losing our sense of purpose.
Today, the creative industry is undergoing a transformation driven by artificial intelligence (AI), automation, and digital collaboration tools.
For designers, artists, and creative professionals, these changes redefine how work is done and what it means to be a creative expert.
Whether you are a graphic designer, architect, or UX/UI specialist, the question remains: 'How do you maintain your creative identity when machines can generate designs, edit videos, and compose music?'
AI tools like Midjourney, DALL·E, Adobe Firefly, Runway ML, and Canva's AI-powered design features can now generate logos, website layouts, and 3D models in seconds.
This technology offers efficiency, handling repetitive tasks so that designers can focus on bigger ideas. However, it also shifts their role from creators to curators, raising fundamental questions about identity.
Traditionally, creative professionals have taken pride in originality and craftsmanship. But when AI can produce near-instant results, some may ask: 'Am I still the artist, or just someone refining what the machine generates?' This shift can be unsettling, especially in an industry where individuality has always been a marker of success.
At the same time, digital collaboration tools like Figma, Miro, and Adobe Creative Cloud have changed how teams work together. Designers, developers, and clients can now collaborate in real time, making the creative process more dynamic. While this improves efficiency, it also blurs the boundaries of expertise.
Previously, designers led the creative process, blending aesthetics with functionality. Now, with multiple stakeholders weighing in, they must navigate competing opinions and justify their decisions more than ever. Some may feel their expertise is being diluted, reduced to executing rather than envisioning.
This shift affects how designers perceive their own value. If an organisation prioritises collective decision-making over individual creativity, professionals may struggle to see their contributions as unique or essential. When work is defined by consensus rather than creative vision, does the designer still have authority, or are they simply another voice in the crowd?
How can creative professionals embrace these changes while preserving their sense of identity? The key lies in redefining their roles.
For those working with AI, the focus should shift from execution to creative strategy. AI may handle the technical aspects, but human designers provide vision, meaning, and refinement.
Rather than competing with AI, professionals should guide its use, ensuring that technology enhances creativity rather than replacing it. Organisations must also recognise that true innovation is not just about speed, but about depth and originality.
In collaborative environments, designers need to establish themselves as creative integrators.
While teamwork is valuable, their expertise should remain central in balancing aesthetics, functionality, and user needs. Companies can support this by giving designers a clear leadership role, ensuring their voice is not lost in the crowd.
The architects in my research illustrate this balance. Instead of resisting change, they adapted by expanding their roles beyond traditional architectural services.
Rather than being confined to rigid professional boundaries, they embraced diverse identities, voluntarily taking on non-architectural scopes of work to maintain their influence.
This flexibility not only reinforced their presence in projects but also ensured that their creative vision was upheld until completion. By broadening their contributions rather than retreating from change, they sustained their professional identity in a shifting landscape.
Likewise, creative professionals today must adopt a sustainable mindset, integrating AI and digital tools without compromising their artistic integrity. The ability to evolve without losing one's core values is what will distinguish those who thrive from those who struggle to adapt.
As we navigate the Gen AI era, it is worth reflecting on what truly defines us in our work. Is it what we do, the skills we master, or the outputs we deliver? In my research, I have observed a growing emphasis on performance-based identity, where success is measured by efficiency rather than creativity. But is this shift sustainable?
Perhaps the answer lies in redefining creativity itself. Instead of seeing AI as a threat, we should harness it as a tool that amplifies human ingenuity. The future of creativity is not about choosing between humans and machines but about finding ways to let technology enhance humanity.
As AI continues to reshape industries, we must ask ourselves: How can we use these tools to enrich our work rather than diminish it? And how do we ensure that our professional identity remains a source of pride, purpose, and fulfilment in this new era?
Dr Syafizal Shahruddin is a senior lecturer at the School of Housing, Building and Planning, Universiti Sains Malaysia. Comments: letters@thesundaily.com


Related Articles

Malaysia faces sharp rise in AI-driven cyber threats, Fortinet warns

Sinar Daily, a day ago

54 per cent of organisations experienced a twofold increase in AI-enabled threats while 24 per cent saw a threefold surge in the past year. 02 Jun 2025 03:02pm

KUALA LUMPUR - Malaysia is facing a sharp rise in artificial intelligence (AI)-driven cyber threats, with nearly 50 per cent of organisations reporting incidents involving AI-powered attacks, according to a survey commissioned by global cybersecurity firm Fortinet.

The survey, conducted by the International Data Corporation (IDC) across 11 Asia-Pacific (APAC) countries, found that in Malaysia, 54 per cent of organisations experienced a twofold increase in AI-enabled threats while 24 per cent saw a threefold surge in the past year.

Fortinet Malaysia Country Manager Kevin Wong said cybercriminals are increasingly leveraging AI to develop and launch attacks more quickly and effectively, moving beyond traditional methods of manual coding.

"To give a sense of scale, there are up to 36,000 scam attempts occurring every second through automation, with 97 billion exploitation attempts recorded in the first half of last year alone, and AI is amplifying this trend by two to three times.

"In Malaysia, the surge in AI-driven threats is evident, with over 100 billion records stolen and traded on the dark web according to IDC," he told a media briefing on Thursday.

He noted that credential theft has spiked by more than 500 per cent within a year, with AI-powered phishing attacks becoming increasingly targeted and difficult to detect.

"Traditional tools simply can't keep up, as fast-paced, AI-powered threats demand an equally fast and intelligent response, and that's where AI also plays a role on the defensive side," he said.

Wong also noted that cyber risk has evolved from being an occasional concern to a constant and ongoing challenge. "With the rise of AI-powered threats, the nature of cyber risk itself has changed from something we respond to after it happens to something we must act on before it occurs. That is why we partnered with IDC to better understand how security leaders across Asia are navigating this evolving threat landscape, the challenges they face, and the critical gaps in organisational readiness," he said.

Meanwhile, Fortinet's vice president of marketing and communications for Asia, Australia and New Zealand (ANZ), Rashish Pandey, said cybersecurity investment in Malaysia remains disproportionately low, with an average of only 15 per cent of IT budgets allocated to cybersecurity, representing just over 1 per cent of total revenue.

"The reason cybersecurity investment remains low is that we still struggle to clearly articulate its business impact to executive teams and boards of directors. Too often, the conversation is framed in technical terms, whereas boards are looking for a discussion centred on business risk, impact, and assessment, which is why we are helping our customers reframe cybersecurity as a strategic business issue rather than just a technical one," he said.

The survey also found that only 19 per cent of Malaysian organisations are highly confident in their ability to defend against AI-powered attacks, with 27 per cent stating that such threats are outpacing their detection capabilities and 20 per cent admitting they are unable to detect them at all.

Ransomware remains the most frequently encountered threat, reported by 64 per cent of Malaysian respondents. Other common risks include software supply chain attacks (54 per cent), insider threats (52 per cent), cloud vulnerabilities (46 per cent), and phishing (40 per cent).

The survey, conducted between February and April 2025, involved 550 IT and cybersecurity leaders from across the APAC region to assess organisational readiness in the face of escalating AI-enabled threats. - BERNAMA

When experience gets downsized

The Sun, 27-05-2025

IT usually starts innocently: a 52-year-old staff member, who has been loyally clocking in since Elon Musk was still using a dial-up modem, walks into HR, summoned by an email ominously titled 'Performance Improvement Discussion'. He is expecting maybe a workflow upgrade or, if he dares to dream, a long-overdue promotion. Instead, he is gently told that his role no longer fits into the company's 'new structure'.

Fast forward three weeks and a fresh-faced 26-year-old with the same job description, now rebranded as 'Digital Workflow Ninja', is sitting in his old cubicle, sipping teh tarik in a can and talking about 'workflow synergies' on a podcast. Selamat datang to the corporate jungle, where if your knees creak louder than your keyboard, you are no longer considered 'future-ready'.

Let us not kid ourselves. No company is going to outright say: 'We're letting you go because you are closer to EPF withdrawal than TikTok virality.' Instead, they cloak the blow with corporate lingo so thick you will need a decoder and possibly a translator from PwC. You will hear familiar phrases like:

'We're shifting towards a more tech-driven operation.' Translation: We saw you panic-click during a Zoom call. Goodbye.

'Your position is no longer required.' Translation: We needed your salary to hire two interns with TikTok skills and no overhead.

'Based on recent performance reviews, we have decided not to continue your employment.' Translation: We gave you KPIs requiring Canva, AI and hashtags. You asked, 'What's a hashtag?' You're out.

These phrases sound strategic, objective and even fair. However, beneath the shiny HR language is a growing corporate trend that smells suspiciously like ageism, served on a recycled PowerPoint slide titled 'operational agility'.

And the irony? These same 50-somethings were once the original disruptors. They survived dot-matrix printers, dial-up modems and telex machines. They ruled the office back when 'cloud' meant the weather and 'streaming' referred to Sungai Klang. Now, because they don't refer to Excel as 'coding', they are being treated like expired yogurt.

Let's be clear, this is not always about tech skills. Often, it is about cost. Older employees are expensive. They have earned their stripes (and their bonuses) and that makes them a prime target during corporate 'realignments'. Why pay one experienced manager RM15,000 when you can hire three juniors who call you 'boss' and work on beanbags?

I am not saying companies should never let go of older employees. Businesses need to adapt and not everyone is a unicorn. But when the exits start looking like a silver tsunami and the average age in a department suddenly drops by 20 years, someone needs to say: 'Eh, macam tak kena je (Eh, this does not feel right)'.

The sad part? This purge is happening at a time when these employees are hitting their professional prime. Emotionally intelligent, steady under pressure and immune to office gossip (because they don't care who is dating whom in HR), these are the people who will tell you how to fix the printer with a paper clip and a prayer.

So, what can companies do instead of quietly ghosting their veterans?

First, stop making technology the only yardstick of relevance. Train, don't terrorise. If an uncle can learn to scan a MySejahtera code during MCO, he can learn Microsoft Teams, even if he still calls it 'the new Skype thing'.

Second, design KPIs that value multiple generations. Instead of awarding points for building the best Slack bot, how about rewarding crisis management, mentorship or knowing where the office router lives?

Third, create roles that honour experience. We already have too many 'digital transformation officers'. What we need are 'wisdom integration managers'.

Fourth, consider phased retirement or consultancy gigs. Let the seniors exit with dignity, not a surprise exit memo, a weak kopitiam lunch and a generic farewell email: 'We wish you all the best in your future endeavours.' That is not closure. That is a cop-out.

And finally, educate your HR managers. If your definition of 'diversity' stops at race and gender, it is time for an age-inclusion crash course, ideally taught by the same 50-year-old you were eyeing for 'strategic downsizing'.

At the end of the day, a company that discards its veterans the moment they develop crow's feet is one that may soon realise: wisdom isn't Googleable, loyalty isn't scalable and no 25-year-old knows why the office printer only works after you slap it twice and press 'start' in BM.

So, the next time someone says: 'We're realigning our strategic priorities,' look around. If everyone left is under 35, sipping oat milk and calling the surau a 'quiet pod', your company may have just aged itself out of wisdom.

After all, a successful company is not built on vibes and fast Wi-Fi alone but also on memory. And someone has got to remember where the HR kept the punch card machine or at least the emergency Milo stash from 2007.

AI isn't ready to be your therapist, but it's a top reason people use it

The Star, 26-05-2025

From falling in love with ChatGPT to deepfakes of deceased loved ones, artificial intelligence's potential for influence is vast – its myriad potential applications not yet completely charted. In truth, today's AI users are pioneering a new, still swiftly developing technological landscape, something arguably akin to the birth of social media in the early 2000s.

Yet, in an age of uncertainty about nascent generative AI's full potential, people are already turning to artificial intelligence for major life advice. One of the most common ways people use generative AI in 2025, it turns out, is for therapy. But the technology isn't ready yet.

How people use AI in 2025

As of January 2025, ChatGPT topped the list of most popular AI tools based on monthly site visits with 4.7 billion monthly visitors, according to Visual Capitalist. That dwarfed the next most popular service, Canva, more than five to one. When it comes to understanding AI use, digging into how ChatGPT is being put to work this year is a good starting point.

Sam Altman, CEO of ChatGPT's parent company, OpenAI, recently offered some insight into how its users are making the most of the tool by age group. 'Gross oversimplification, but like older people use ChatGPT as a Google replacement,' Altman said at Sequoia Capital's AI Ascent event a few weeks ago, as transcribed by Fortune. 'Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system.'

It turns out that life advice is something a lot of AI users may be seeking these days. Featured in Harvard Business Review, author and co-founder Marc Zao-Sanders recently completed a qualitative study on how people are using AI. 'Therapy/companionship' topped the list as the most common way people are using generative AI, followed by life organisation and then people seeking purpose in life. According to OpenAI's tech titan, it seems that generated life advice can be an incredibly powerful influence.

A Pew Research Center survey published last month reported that a 'vast majority' of surveyed AI experts said people in the United States interact with AI several times a day, if not almost constantly. Around a third of surveyed US adults said they had used a chatbot (which would include things like ChatGPT) before. Some tech innovators, including a team of Dartmouth researchers, are leaning into the trend.

Therabot, can you treat my anxiety?

Dartmouth researchers have completed a first-of-its-kind clinical trial on a generative AI-powered therapy chatbot. The smartphone app-friendly Therabot has been in development since 2019, and its recent trial showed promise. Just over 100 patients – each experiencing depressive disorder, generalized anxiety disorder or an eating disorder – participated in the experiment. According to senior study author Nicholas Jacobson, the improvement in each patient's symptoms was comparable to traditional outpatient therapy. 'There is no replacement for in-person care, but there are nowhere near enough providers to go around,' he told the college.

Even Dartmouth's Therabot researchers, however, said generative AI is simply not ready yet to be anyone's therapist. 'While these results are very promising, no generative AI agent is ready to operate fully autonomously in mental health where there is a very wide range of high-risk scenarios it might encounter,' first study author Michael Heinz told Dartmouth. 'We still need to better understand and quantify the risks associated with generative AI used in mental health contexts.'

Why is AI not ready to be anyone's therapist?

RCSI University of Medicine and Health Sciences' Ben Bond is a PhD candidate in digital psychiatry who researches ways digital tools can be used to benefit or better understand mental health. Writing in The Conversation, Bond broke down how AI therapy tools like Therabot could pose some significant risks.

Among them, Bond explained that AI 'hallucinations' are known flaws in today's chatbot services. From quoting studies that don't exist to directly giving incorrect information, he said these hallucinations could be dangerous for people seeking mental health treatment. 'Imagine a chatbot misinterpreting a prompt and validating someone's plan to self-harm, or offering advice that unintentionally reinforces harmful behaviour,' Bond wrote. 'While the studies on Therabot and ChatGPT included safeguards – such as clinical oversight and professional input during development – many commercial AI mental health tools do not offer the same protections.'

According to Michael Best, PhD, a psychologist and contributor to Psychology Today, there are other concerns to consider, too. 'Privacy is another pressing concern,' he wrote in Psychology Today. 'In a traditional setting, confidentiality is protected by professional codes and legal frameworks. But with AI, especially when it's cloud-based or connected to larger systems, data security becomes far more complex.

'The very vulnerability that makes therapy effective also makes users more susceptible to harm if their data is breached. Just imagine pouring your heart out to what feels like a safe space, only to later find that your words have become part of a data set used for purposes you never agreed to.'

Best added that bias is a significant concern, something that could lead to AI therapists giving bad advice. 'AI systems learn from the data they're trained on, which often reflect societal biases,' he wrote. 'If these systems are being used to deliver therapeutic interventions, there's a risk that they might unintentionally reinforce stereotypes or offer less accurate support to marginalized communities.

'It's a bit like a mirror that reflects the world not as it should be, but as it has been – skewed by history, inequality, and blind spots.'

Researchers are making progress in improving AI therapy services. Patients suffering from depression experienced an average 51% reduction in symptoms after participating in Dartmouth's Therabot experiment. For those suffering from anxiety, there was an average 31% drop in symptoms. The patients suffering from eating disorders showed the lowest reduction in symptoms but still averaged 19% better off than before they used Therabot.

It's possible there's a future where artificial intelligence can be trusted to treat mental health, but – according to the experts – we're just not there yet. – The Atlanta Journal-Constitution/Tribune News Service
