
Crucial for OpenAI to build AI infrastructure: OpenAI
At the ATX Summit, OpenAI Chief Strategy Officer Jason Kwon speaks with CNA's Sarah Al-Khaldi about ChatGPT's rapid growth in Southeast Asia and the 'OpenAI for Countries' initiative. He emphasises the company's commitment to safety and reliability, and shares his vision for the next phase of AI innovation, in which autonomous agents will carry out complex tasks with minimal human input.
Related Articles


Independent Singapore
an hour ago
53-year-old retrenched Singaporean dad launches indoor air quality company after his toddler kept falling sick
SINGAPORE: A 53-year-old Singaporean dad, Jay Choy, who had worked for 26 years at a Japanese multinational company, was retrenched in December 2022, with no job in sight. Around the same time, his 15-month-old son, Jayson, had just started infant care and was often unwell with high fever and a runny nose. Little did he know that looking for something more sustainable for his son's health than relying on medication would lead to the launch of his own company.

Mr Choy remembered an air purification technology he had promoted in his former job, called BioZone Photoplasma™. After installing a unit in his son's room, he said 'the air felt noticeably fresher within days', and his son's symptoms began to ease. Over the next month, his son's health gradually improved.

Motivated by his own experience, Mr Choy started reviewing lab reports, real-world studies, and client testimonials about indoor air quality. By February 2023, he had set up FJ SafeSpace Pte Ltd, a business focused on improving indoor air quality in homes, schools, offices, and industrial spaces.

The company offers complimentary indoor air quality audits using uHoo advanced air sensors, which are certified by the Singapore Green Building Council. It also benchmarks clients' indoor environments against Green Mark 2021, an internationally recognised green building certification scheme tailored for the tropical climate. In addition, it provides solutions using BioZone Photoplasma™ technology to deal with viruses, bacteria, fine dust, volatile organic compounds (VOCs), and odours.

Since its launch, the company has conducted over 140 indoor air quality audits, working with families, childcare centres, offices, F&B outlets, and industrial sites. Mr Choy, who shares audit results (with consent) on LinkedIn to raise public awareness, has found that this often leads to referrals and client collaborations.
One success story came when printing and packaging company KPP Packaging faced lingering food smells and frequent staff sick days in its office. After working with FJ SafeSpace, the company reported fewer sick days among staff and improved productivity, and later added more units on the production floor.

In early 2025, Mr Choy enrolled in the Corporate Environment & Sustainability Executive Programme at Nanyang Technological University (NTU), having already completed the Green Mark Associate course by the Singapore Green Building Council in 2023.

His wife, who works in finance, has been a steady support throughout the journey, while their son, Jayson, remains his daily reminder of why the work matters. Mr Choy said, 'I want all children to grow up in a world where clean air is the norm, not a luxury. That means protecting both the spaces we live in and the planet we live on.' /TISG


Independent Singapore
an hour ago
Man finds out his colleague earns S$500 more than him, asks if he should speak up or stay silent
SINGAPORE: A Singapore-based tech sales employee recently discovered that he is being paid hundreds of dollars less than a colleague who joined the company at the same time.

Posting anonymously on the r/askSingapore forum, the employee shared that the topic of pay arose during a casual dinner with his teammates, where they began comparing their base salaries. 'We were discussing our base pay, and I realized that I've been getting paid significantly less (around S$400 to S$500),' he said. 'At first, I thought it might be because they had been with the company longer, but one colleague who joined at the same time as me (we even went through the interview process together last year) is being paid much more as well.'

Uncertain about how to proceed, he turned to the online community for advice, asking, 'Am I supposed to just accept this, or should I bring it up with HR? Even though I know that might go against what the contract says?'

'The salary offered will differ from one person to another…'

In the comments, one user pointed out that while the situation may feel unfair, the employee had ultimately agreed to his salary terms when he accepted the offer. 'You agreed to your terms of employment when you signed the contract,' the user wrote. 'You can ask, but there is really no basis. I mean, you agreed on the contract and (signed), right?'

Another added, 'In the same job role, there are several factors to be considered that qualify you—market rates (at that moment), experience, qualifications, particular niche skillsets, and others. Hence, the salary offered will differ from one person to another. Every candidate is different, even for the same role. You also agreed to the contract when you signed the offer.'

A third, however, urged him to 'leave' the company, adding, 'You're responsible for yourself. Your boss and HR are not responsible for increasing your pay. Their priority is for the company, not you.
Now that you know what the basic pay is for your job, it should be easier to go find another job and negotiate. Heck, you can even find higher pay; that's what job hopping is for.'

What to do when you find out your colleague earns more

Finding out a colleague earns more than you can feel like a punch to the gut, or at least a hard nudge, but before you jump to conclusions or spiral into frustration, take a moment to pause. According to staffing agency Mondo, there are a few helpful steps you can take to make sense of the situation:

Assess your colleague's background
Your colleague may possess more years of experience, different qualifications, or a longer tenure with the company, which could have contributed to several salary increases over time.

Research your role
Use this as a chance to reflect on where you're at. Research what people in similar roles are earning in your field and location. Sites like Glassdoor or Payscale can give you a rough idea of what's fair.

Bring it up with your manager
If, after your research, you still feel you're underpaid, bring it up with your manager. When you do have that conversation, focus on your own work, progress, and what you bring to the team.

Straits Times
9 hours ago
Researchers create chatbot to teach law class in university, but it kept messing up
'AI tutors' have been hyped as a way to revolutionise education. The idea is that generative artificial intelligence (AI) tools (such as ChatGPT) could adapt to any teaching style set by a teacher. The AI could guide students step-by-step through problems and offer hints without giving away answers. It could then deliver precise, immediate feedback tailored to the student's individual learning gaps.

Despite the enthusiasm, there is limited research testing how well AI performs in teaching environments, especially within structured university courses. In our new study, we developed our own AI tool for a university law class. We wanted to know: can it genuinely support personalised learning, or are we expecting too much?

Our study

In 2022, we developed SmartTest, a customisable educational chatbot, as part of a broader project to democratise access to AI tools in education. Unlike generic chatbots, SmartTest is purpose-built for educators, allowing them to embed questions, model answers and prompts. This means the chatbot can ask relevant questions, deliver accurate and consistent feedback, and minimise hallucinations (or mistakes). SmartTest is also instructed to use the Socratic method, encouraging students to think rather than spoon-feeding them answers.

We trialled SmartTest over five test cycles in a criminal law course (which one of us was coordinating) at the University of Wollongong in 2023. Each cycle introduced varying degrees of complexity. The first three cycles used short hypothetical criminal law scenarios (for example, is the accused guilty of theft in this scenario?).
The last two cycles used simple short-answer questions (for example, what's the maximum sentencing discount for a guilty plea?).

An average of 35 students interacted with SmartTest in each cycle across several criminal law tutorials. Participation was voluntary and anonymous, with students interacting with SmartTest on their own devices for up to 10 minutes per session. Students' conversations with SmartTest – their attempts at answering the question, and the immediate feedback they received from the chatbot – were recorded in our database. After the final test cycle, we surveyed students about their experience.

What we found

SmartTest showed promise in guiding students and helping them identify gaps in their understanding. However, in the first three cycles (the problem-scenario questions), between 40 per cent and 54 per cent of conversations had at least one example of inaccurate, misleading or incorrect feedback.

When we shifted to the much simpler short-answer format in cycles four and five, the error rate dropped significantly, to between 6 per cent and 27 per cent. However, even in these best-performing cycles, some errors persisted. For example, sometimes SmartTest would affirm an incorrect answer before providing the correct one, which risks confusing students.

A significant revelation was the sheer effort required to get the chatbot working effectively in our tests. Far from a time-saving silver bullet, integrating SmartTest involved painstaking prompt engineering and rigorous manual assessments from educators (in this case, us). This paradox – where a tool promoted as labour-saving demands significant labour – calls into question its practical benefits for already time-poor educators.

Inconsistency is a core issue

SmartTest's behaviour was also unpredictable. Under identical conditions, it sometimes offered excellent feedback and at other times provided incorrect, confusing or misleading information.
For an educational tool tasked with supporting student learning, this raises serious concerns about reliability and trustworthiness.

To assess whether newer models improved performance, we replaced the underlying generative AI powering SmartTest (ChatGPT-4) with newer models such as ChatGPT-4.5, which was released in 2025. We tested these models by replicating instances where SmartTest provided poor feedback to students in our study. The newer models did not consistently outperform older ones. Sometimes, their responses were even less accurate or useful from a teaching perspective. As such, newer, more advanced AI models do not automatically translate to better educational outcomes.

What does this mean for students and teachers?

The implications for students and university staff are mixed. Generative AI may support low-stakes, formative learning activities. But in our study, it could not provide the reliability, nuance and subject-matter depth needed for many educational contexts.

On the plus side, our survey results indicated students appreciated the immediate feedback and conversational tone of SmartTest. Some mentioned it reduced anxiety and made them more comfortable expressing uncertainty. However, this benefit came with a catch: incorrect or misleading answers could just as easily reinforce misunderstandings as clarify them.

Most students (76 per cent) preferred having access to SmartTest over no opportunity to practise questions. However, when given the choice between receiving immediate feedback from AI or waiting one or more days for feedback from human tutors, only 27 per cent preferred AI. Nearly half preferred human feedback with a delay, and the rest were indifferent. This suggests a critical challenge: students enjoy the convenience of AI tools, but they still place higher trust in human educators.

A need for caution

Our findings suggest generative AI should still be treated as an experimental educational aid.
The potential is real – but so are the limitations. Relying too heavily on AI without rigorous evaluation risks compromising the very educational outcomes we are aiming to enhance.

Armin Alimardani is a senior lecturer in law and emerging technologies at the University of Wollongong, Australia, and Emma A. Jane is an associate professor in the School of Arts and Media, UNSW Sydney. This article was first published in The Conversation.