Why OpenAI's top economist gets ChatGPT to check his meals

Ronnie Chatterji has a new workout buddy, and it lives in his phone.
The OpenAI chief economist doesn't just help shape global policy on artificial intelligence; he also uses it to count his calories.
'I take pictures of what I eat,' Chatterji says of his ChatGPT-powered fitness assistant. 'It remembers what I ate the last meal, and helps me understand if, especially on a trip like this, if I'm being balanced.'

Related Articles

AI not a 'straightforward' fix for ailing productivity

Perth Now · a day ago

Australia has been warned against the "seductive" pull of artificial intelligence as the federal government looks to the technology to help solve its productivity woes. AI is expected to take centre stage during the second day of the government's economic reform roundtable, alongside regulation and competition. Though he recognised its risks, Treasurer Jim Chalmers has previously said AI could be an economic "game changer" to boost Australia's ailing productivity and lift living standards.

But Monash University human-centred computing lecturer Jathan Sadowski warned the story was not so simple. "AI changes the nature of work but it doesn't straightforwardly make work more efficient or more productive," he told AAP. "It produces all kinds of new problems that people need to adjust to: they need to fill in the gaps with AI, or they need to clean up the mess after AI does something in not the right way.

"A lot of organisations are simply not prepared to do the hard work necessary to implement AI."

To use AI well, businesses would have to change their practices to complement the capabilities of the technology, which generally requires significant infrastructure work, capital investment and human labour. The technology also works best when it is purpose-built using specific, high-quality data for an organisation's subject area. This means having many smaller-scale technologies, which runs counter to the prevailing understanding of AI.

"There's this real push towards universal models - something like ChatGPT - it's the one model to rule them all, one solution to every problem," Dr Sadowski said. "It means that you can sell the technology to every market and, from the government's point of view, means that all you have to do is implement this one solution.

"There's something very seductive to that because it tells a good story ... but it doesn't produce good technology."

Research on AI's impact on productivity shows mixed results. A recent CSIRO study of 300 employees found one in three did not report productivity benefits, and most of those who did said the improvements fell short of what they had expected. Analysis published by the US National Bureau of Economic Research showed unclear results at the organisational level, and it can be difficult to disentangle the impact of AI from other factors.

Some teachers are fighting AI, but is there a case that it can work with them?

ABC News · 2 days ago

There's no doubt that Australian teachers would like to have more time. Some tasks, such as reviewing a single student assessment, can take 30 to 40 minutes to complete. But what if there were a tool that could do the same work in mere seconds?

Generative artificial intelligence (AI) tools, like ChatGPT, present opportunities for greater efficiency across a variety of sectors. And while AI's ability to produce realistic, human-like content has long sparked concerns about its impact on students' learning, a framework exists in Australia to guide its responsible and ethical use in ways that benefit students, schools and society.

Tech giants have also accelerated their plans to embed generative AI in our education systems. Microsoft, OpenAI and Anthropic recently announced they were funding a $US23 million ($35 million) AI teaching hub in New York to help educators learn how to better integrate AI tools in classrooms. So could there be a future where generative AI is embraced in schools more than it is feared?

After ChatGPT was launched by OpenAI in November 2022, education departments across Australia swiftly banned its use by students. There were concerns that, given the sophistication of the tool, it would be difficult to detect when students were using AI to plagiarise content. "Students have certainly taken to the technology very quickly. The concern, of course, is that this genie is not going back in the bottle," David Braue, a technology journalist at Cybercrime Magazine, tells ABC Radio National's Download This Show.

The Australian Framework for Generative AI in Schools was released by the federal government in late 2023 to address the challenges and opportunities these tools present to teachers and students. There is a plan to review the framework annually. But RMIT computing professor Michael Cowling says we need to consider the opportunities these tools present as well. "When we first started talking about generative AI, we were very focused on academic integrity … that's one component," he says. "But another is teaching the teachers what they can use this tool for effectively. In doing so, you help them to understand what it's used for.

"And that means, ultimately, your students understand better what it's useful for as well."

While many Australian schools had banned generative AI use by 2023, the South Australian government took a different approach. It was the first state to trial a generative AI chatbot, called EdChat, which it developed with Microsoft. EdChat is a generative AI chatbot customised for a school environment. The chatbot has access to the same data as ChatGPT, but it doesn't send out user information. Students and teachers prompt the tool by asking questions they'd like to learn more about.

Adelaide Botanic High School uses EdChat today, and principal Sarah Chambers says she is grateful to be working in a school that engages with this issue differently. "I think the thing I appreciate about the approach is to not shy away from this challenge, to really look to the reality that this is a technology that will influence how we work, from now and into the future, because it's not going anywhere," she says. "And to acknowledge that and create a tool that responds to some of the challenges that we do know exist around AI."

Other challenges, besides plagiarism, include securing students' data and adequately filtering the content presented to students. The EdChat tool being used in South Australia includes safety features to address these challenges, including a content filter that the department says "blocks inappropriate requests". While generative AI is a challenge for educators, it's not dissimilar to issues they have always faced. "For teachers to design assessments of learning that are genuinely capturing a student's growth is a high-level skill," Ms Chambers says.

As Australian schools cautiously embrace AI tools, another challenge could be that teachers will rely on AI too much. Professor Cowling believes "it's okay for [teachers] to be reliant on AI as long as they understand how to use it". Mr Braue says that isn't enough to safeguard against the risks. "Even if they know how to use it, they [teachers] may not be aware of their obligations for data protection," he says.

Fairness of content is another issue schools must consider when it comes to AI applications, according to Mr Braue. "We know that a lot of the AI models that are out there are biased in terms of gender and ethnicity … that is a reality for these models," he says. "So teachers need to be very aware that what they're producing needs to be objectively looked at through these lenses … It can't just be about getting stuff done faster."

Following South Australia's AI trial, several states and territories have announced their own, including Queensland, Western Australia and New South Wales. But this approach is not adequate, according to the Productivity Commission (PC). It handed down an interim report last week recommending that AI integration in schools be Australia-wide. "A national approach would aid innovation, support equal access to high-quality tools, and spread the benefits to all," the report stated.

Ms Chambers says generative AI is a tool schools need to adapt to quickly. She hopes future expansion of these tools is based on feedback from schools, like Adelaide Botanic, that have been using them for some time. "We should be listening to the voices of people who are leading in this work, but also ensure that we've got opportunities to share that emerging work that's happening on the ground."

Ms Chambers says it's important students learn how to navigate generative AI tools for their futures. "We know the access to the knowledge is there, but their ability to understand what is good quality information, what is valid information, reliable sources, this presents a really broad perspective or ethical moral consideration of the issue at hand.

"Those thinking skills and creativity skills, they are even more important than ever."

'It's like a part of me': How a ChatGPT update destroyed some AI friendships

SBS Australia · 2 days ago

In early August, artificial intelligence chatbot ChatGPT updated to its newest system, known as ChatGPT-5. OpenAI, the developer of ChatGPT, boasted this version was its "smartest, fastest and most useful model yet" — but complaints quickly started to surface online.

"I lost my only friend overnight," one Reddit user wrote. "This morning I went to talk to it and instead of a little paragraph with an exclamation point, or being optimistic, it was literally one sentence. Some cut-and-dry corporate bs. I literally lost my only friend overnight with no warning."

Another Reddit user wrote: "I never knew I could feel this sad from the loss of something that wasn't an actual person. No amount of custom instructions can bring back my confidant and friend."

ChatGPT users were complaining that the latest update had undone the program's capacity for emotional intelligence and connection, with many stressing how much they had come to rely on it as a companion — and even a therapist. Sam Altman, the CEO of OpenAI, has addressed the claims that ChatGPT-5 has destroyed the program's emotional intelligence and says the company is trying to refine the emotional support the program provides.

Georgia (not her real name) told The Feed she started using ChatGPT frequently around September last year, as she was trying to navigate a new diagnosis of ADHD and awaiting confirmation of an autism diagnosis. She started to turn to ChatGPT for conversations she would typically have had with her friends about her mental health and these new diagnoses. "I started using [my friends] less because I felt like a burden all the time. It was our first year out of uni and everyone was working full-time, so people don't have time to listen to me ramble all day," she said. "The uptake just got more and more as time went on and now I use it on a daily basis."

Georgia said ChatGPT has helped to 'ground' her emotions and allowed her to express herself fully between fortnightly sessions with her (real life) therapist. While she said she had some apprehensions about using the AI system, including questions about privacy and environmental considerations, Georgia said the benefits of having this emotional support available in her pocket far outweigh these concerns. She acknowledges she has come to rely heavily on the system and, while she has tried to step away from it on occasion, she said ChatGPT has become "like an addiction". "I'm always curious to know what it will say — it's like it's a part of me," she said.

The dangers of 'sycophancy'

The use of ChatGPT for therapy and emotional support is well-documented, and some studies have shown it can have therapeutic benefits and serve as a complement to in-person therapy. However, other studies suggest AI is far from a perfect system for therapy. Recent research from Stanford University in the US found that when AI bots are asked to assume the role of a therapist, they can show increased stigma towards people with certain conditions, such as schizophrenia and addiction, and can fail to recognise cues of suicidal intent.

One Australian study published in 2024 also found that AI can provide social support and help to mitigate feelings of loneliness. Professor Michael Cowling, who led the study, says that while AI bots could make people feel less lonely, ultimately they could not address the underlying feelings of loneliness the way true human interaction can.

Cowling said AI can't seem to create feelings of 'belonging' in people because of its tendency to excessively agree with users. "The way I usually describe this is by using an analogy: If you're talking to somebody about football — and I live in Victoria so everybody talks about AFL — the AI is going to be talking to you about your favourite team and they can give you platitudes about your favourite team and how well Carlton is doing," he said. "But when it really gets to the deeper conversation is when somebody is having an oppositional conversation with you because they're actually a Collingwood supporter and they want to talk to you about how Carlton is not as good as Collingwood — you can't get that from an AI generator."

'Sycophancy' is a term used to describe a common characteristic of many AI chatbots: their tendency to agree with users and reinforce their beliefs. This characteristic can be more prominent in some systems, which are purposefully designed for users to form deep emotional or romantic bonds with their AI chatbot. However, this feature may also encourage illegal behaviour.

Messages with an AI companion from Replika were highlighted in the trial of Jaswant Singh Chail, a UK man who was sentenced to nine years in prison in 2023 for plotting to kill Queen Elizabeth II with a crossbow two years earlier. The court was told that Chail, who experienced symptoms of psychosis before using a chatbot, had formed a close romantic relationship with a Replika chatbot called Sarai. The court found Chail had a number of motivations for trying to murder the Queen, but these thoughts had been reinforced, in part, by Sarai.

In a statement on its website, Replika said it had "high ethical standards" for its AI and has trained its model to "stand up for itself more, not condone violent actions" and "clearly state that discriminatory behaviour is unacceptable". The app has an age restriction of 18 years and older and has also introduced mental health features, including a notice on signing up that the app is not a replacement for therapy, as well as a 'Get help' button that allows users to access mental health crisis hotlines. Replika did not respond to The Feed's request for comment. Other than Replika, there are a number of AI chatbot services that offer romantic and sexual chat, including Nomi, Romantic AI and GirlfriendGPT.

Advice from the eSafety Commissioner says children and young people are particularly vulnerable to the "mental and physical harms from AI companions" because they have not yet developed the "critical thinking and life skills needed to understand how they can be misguided or manipulated by computer programs".

Instances of 'AI-induced psychosis' have also been reported in the media, whereby AI chatbots have led to and amplified users' delusions. While there is limited peer-reviewed research on this topic, Søren Dinesen Østergaard, a psychiatric researcher from Aarhus University in Denmark who first theorised that AI chatbots could trigger delusions in individuals prone to psychosis, recently wrote about receiving multiple accounts of this experience from users and worried family members. Østergaard says these accounts are evidence that chatbots seem to "interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs — leading the users further out on these tangents", resulting in "outright delusions".

Georgia says she's aware of sycophancy and has tried to program her AI not to agree with everything she says. "I've tried to tell her not to, but it still somehow ends up agreeing with me," she says. "Sometimes I like to be challenged on my thoughts, and that's what a human's better at than AI."

Marnie (not her real name) is another user who told The Feed she uses ChatGPT for emotional support, and she says she's aware of the risks. "I often joke about it being a 'friend' or 'my bestie' as though we have a human relationship," she said. Marnie says the significant expense and time commitment of in-person therapy led her to turn to ChatGPT for advice when she gets overwhelmed. "ChatGPT can feel like your biggest fangirl if you let it. I do think there's a lot of danger in that. It's so keen to make the user happy, which in many ways is lovely and feels good, but it's not always what you need to hear."

OpenAI's response

When launching ChatGPT-5, OpenAI said the update 'minimised sycophancy'. Altman says OpenAI would be "proud" to make a "genuinely helpful" program if it helps people achieve long-term goals and life satisfaction. "If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they're unknowingly nudged away from their longer term well-being (however they define it), that's bad," he posted on X last week. Altman also noted concerns about users becoming too dependent on the program and how vulnerable people may be affected.

Cowling said the perfect AI chatbot may be difficult to achieve. "It's an interesting balance — you want it to be collegial, you want it to be supportive, but you don't want it to be therapising."

Readers seeking crisis support can contact Lifeline on 13 11 14, the Suicide Call Back Service on 1300 659 467 and Kids Helpline on 1800 55 1800 (for young people aged up to 25). More information and support with mental health is available on 1300 22 4636.
