
Latest news with #AustralianCatholicUniversity

Aussie job warning ahead of 'huge' shift: 'Can't even imagine yet'

Yahoo

2 hours ago

  • Business

Australians are being warned the opportunities of artificial intelligence are 'immense', but there is no denying it will cost some jobs. It comes as the head of one of the world's biggest AI labs predicts the technology could eliminate half of all entry-level, white-collar jobs within the next five years.

Niusha Shafiabady, associate professor in computational intelligence at Australian Catholic University, told Yahoo Finance the jobs of the future would be different from the types of jobs we have now. But she said this was something we'd experienced 'throughout the history of human life'. 'I think it's important for the kids to understand that we will have much fewer entry-level jobs and new roles will emerge. We should start thinking about our careers smartly from now on and plan our career properly,' she said.

Australia's productivity commissioner Danielle Wood said the forecast that half of entry-level white-collar jobs would be wiped out was 'pretty extraordinary' and 'out of whack with other reasonable projections' she had seen. But she said there were some elements of jobs that would be overtaken by AI. 'Often it is the more routine elements, but that is freeing people up to do the uniquely human parts of jobs,' Wood told ABC's 7.30 program. 'Am I going to sit here and say, 'No jobs are going to go?' No, clearly not. There will be some impacts.'

Anthropic chief executive officer Dario Amodei recently warned politicians and businesses were not prepared for the spike in unemployment rates that AI could bring. 'AI is starting to get better than humans at almost all intellectual tasks, and we're going to collectively, as a society, grapple with it,' he told CNN's Anderson Cooper. 'AI is going to get better at what everyone does, including what I do, including what other CEOs do.'

Unions will be pushing to regulate AI in the workplace at the upcoming productivity summit. They want the government to enforce a 'digital just transition' for workers impacted by AI, similar to measures for coal and gas-fired power jobs affected by the shift to renewables. They will also be pushing for workers to gain a greater share of productivity benefits through higher pay.

The World Economic Forum's Future of Jobs Report has estimated 92 million jobs will be lost this decade, with 170 million new jobs created, leaving a net growth of 78 million jobs by 2030. Bloomberg found AI could replace more than 50 per cent of the tasks performed by market research analysts and sales representatives, compared to 9 and 21 per cent respectively for their managerial counterparts.

Wood said AI offered a 'huge amount of opportunity' and there would be jobs created that 'we can't even imagine yet'. 'I think probably the major impact on labour markets we'd expect in the next decade is more people working with AI to take some bits of their job but free them up,' she said. 'From an economy-wide perspective as well, the opportunities are immense. When we look at where productivity gains, and gains in our living standards, have come from over the long run, it is largely those new technologies.'

Shafiabady said we would need specialists in automation, those with skills to work with and analyse data, along with more cybersecurity experts.
'The closer we are to making strategic decisions in the organisation and critical thinking skills, the safer our job will be,' she said.

Short-man syndrome is REAL: Scientists confirm smaller men act more jealous and competitive to make up for their lack of height

Daily Mail

16 hours ago

  • Entertainment

From Tyrion Lannister in Game of Thrones to Gimli in Lord of the Rings, many of the toughest characters in film and TV are small men. This movie trope - in which small men act more aggressively to make up for their lack of height - is often referred to as 'short-man syndrome', or the 'Napoleon complex'. Now, a study has confirmed that this syndrome doesn't just apply in the movies.

Scientists from the Australian Catholic University surveyed more than 300 participants and found a key link between height and intrasexual competition. Their findings showed that shorter men were more likely to display envy, jealousy and competitiveness than taller blokes. 'This study highlights the relationship between height dissatisfaction and intrasexual competition,' the researchers explained in their study. 'Psychological perceptions of height significantly influence social dynamics and behaviors. Understanding these associations can inform strategies for promoting positive body image and mental well-being, particularly among individuals who may feel marginalized by societal height standards.'

Short-man syndrome was first identified back in 1926 by the Austrian psychoanalyst Alfred Adler, who also came up with the notion of the inferiority complex, where sufferers demonstrate a lack of self-worth. In its classic form, short men overcompensate for their lack of height by being extra-assertive. The complex has divided psychologists for more than a century: some say it describes a real phenomenon; others believe there is no evidence it exists.

In their study, published in Evolutionary Behavioral Sciences, the team, led by Daniel Talbot, set out to settle the debate once and for all. 'Height is a fundamental variable in intersexual selection and intrasexual competition,' the team explained. 'Taller men are rated as more desirable and formidable as romantic partners and rivals, respectively, than shorter ones.'

A total of 302 participants were surveyed across a range of measures, including their height, their perception of their height, and intrasexual competition (competition between members of the same sex). The results revealed that shorter people - men in particular - scored higher for intrasexual envy, jealousy and competitiveness. What's more, both men and women who wished they were taller were more intrasexually competitive than those who were happy with their heights. 'The findings contribute to broader discussions on how physical attributes impact social hierarchies and competition, with implications for addressing biases in various social contexts,' the researchers added.

WHAT IS SHORT MAN SYNDROME OR THE NAPOLEON COMPLEX AND WHAT ARE ITS CHARACTERISTICS?

The Napoleon Complex, also known as short man syndrome, was identified in 1926 by the Austrian psychoanalyst Alfred Adler, who also came up with the notion of the inferiority complex, where sufferers demonstrate a lack of self-worth. In its classic form, short men overcompensate for their lack of height by being extra-assertive and chippy. The name itself is actually a bit of a misnomer: although Napoleon is assumed to have been short, he was 5ft 6in (1.7m) - around average for a man in the late 18th century. The confusion arose from portraits of the dictator standing alongside unusually tall guards. The complex has divided psychologists for more than a century: some say it describes a real phenomenon; others believe there is no evidence it exists.
One study suggesting the complex is real came from Professor Abraham Buunk, of Holland's University of Groningen. He interviewed 100 men in relationships and found that those around 5ft 4in (1.63m) tall were more likely to suffer from jealousy than those measuring 6ft 6in (2m). However, the evidence is far from clear cut. In 2007, researchers at the University of Central Lancashire found that tall men - not short ones - were quicker to anger when provoked.

'Complete collapse': Bombshell report into AI accuracy indicates your job is probably safe

News.com.au

a day ago

  • Science

The latest form of cutting-edge artificial intelligence technology suffers 'fundamental limitations' that result in a 'complete accuracy collapse', a bombshell report from Apple has revealed. Researchers from the tech giant have published a paper with their findings, which cast doubt on the true potential of AI as billions of dollars are poured into developing and rolling out new systems.

The team put large reasoning models, an advanced version of AI used in platforms like DeepSeek and Claude, through a series of puzzle challenges ranging from simple to complex. They also tested large language models, which platforms like ChatGPT are built on. Large language models fared better than large reasoning models on fairly standard tasks, but both fell flat when confronting more complex challenges, the paper revealed.

Researchers also found that large reasoning models began 'reducing their reasoning effort' as they struggled to perform, which was 'particularly concerning'. 'Upon approaching a critical threshold – which closely corresponds to their accuracy collapse point – models counterintuitively begin to reduce their reasoning effort despite increasing problem difficulty,' the paper read. The advancement of AI, based on current approaches, might have reached its limit for now, the findings suggested.

Niusha Shafiabady, an associate professor of computational intelligence at Australian Catholic University and director of the Women in AI for Social Good lab, said 'expecting AI to be a magic wand' is a mistake. 'I have been talking about the realistic expectations about the AI models since 2024,' Dr Shafiabady said. 'When AI models face countless interactions with the world, it is not possible to investigate and control every single problem that could happen. That is why things could get out of hand or out of control.'

Gary Marcus, a leading voice on AI and six-time author, delivered a savage analysis of the Apple paper on his popular Substack, describing it as 'pretty devastating'. 'Anybody who thinks [large language models] are a direct route to the [artificial general intelligence] that could fundamentally transform society for the good is kidding themselves,' Dr Marcus wrote. He then took to X to declare that the hype around AI has become 'a giant game of bait and switch'. 'The bait: we are going to make an AI that can solve any problem an expert human could solve. It's gonna transform the whole world,' Dr Marcus wrote. 'The switch: what we have actually made is fun and kind of amazing in its own way but rarely reliable and often makes mistakes – but ordinary people make mistakes too.'

In the wake of the paper's release, Dr Marcus has re-shared passionate defences of AI posted to X by evangelists defending the accuracy flaws that have been exposed. 'Imagine if calculator designers made a calculator that worked 80 per cent correctly and said 'naah, it's fine, people make mistakes too',' Dr Marcus quipped.

Questions about the quality of large language and large reasoning models aren't new. For example, when OpenAI released its o3 and o4-mini models in April, it described them as its 'smartest and most capable' yet, trained to 'think for longer before responding'. 'The combined power of state-of-the-art reasoning with full tool access translates into significantly stronger performance across academic benchmarks and real-world tasks, setting a new standard in both intelligence and usefulness,' the company's announcement read.
But testing by prestigious American university MIT revealed the o3 model was incorrect 51 per cent of the time, while o4-mini performed even worse, with an error rate of 79 per cent.

Truth and accuracy undermined

Apple recently suspended its AI-powered news alert feature on iPhones after users reported significant accuracy errors. Among the jaw-dropping mistakes were alerts that tennis icon Rafael Nadal had come out as gay, that alleged United Healthcare CEO shooter Luigi Mangione had died by suicide in prison, and that a winner had been crowned at the World Darts Championship hours before competition began.

Research conducted by the BBC found a litany of errors across other AI assistants providing information about news events, including Google's Gemini, OpenAI's ChatGPT and Microsoft's Copilot. It found 51 per cent of all AI-generated answers to queries about the news had 'significant issues' of some form. When looking at how its own news coverage was being manipulated, the BBC found 19 per cent of answers citing its content were factually incorrect. And in 13 per cent of cases, quotes said to be contained within BBC stories had either been altered or entirely fabricated.

Meanwhile, a newspaper in Chicago was left red-faced recently after it published a summer reading list featuring multiple books that don't exist, thanks to the story copy being produced by AI. And last year, hundreds of people who lined the streets of Dublin were disappointed when it turned out the Halloween parade advertised on an events website had been invented.

Google was among the first of the tech giants to roll out AI summaries of search results, relying on a large language model – with some hilarious and possibly dangerous results. Among them were suggestions to add glue to pizza, eat a rock a day to maintain health, take a bath with a toaster to cope with stress, drink two litres of urine to help pass kidney stones and chew tobacco to reduce the risk of cancer.

Jobs might be safe – for now

Ongoing issues with accuracy might have some companies thinking twice about going all-in on AI when it comes to substituting their workforces. So too might some recent examples of the pitfalls of replacing people with computers.

Buy now, pay later platform Klarna shed more than 1,000 people from its global workforce as part of a dramatic shift to AI resourcing, sparked by its partnership with OpenAI, forged in 2023. But last month, the Swedish firm conceded its strong reliance on AI customer service chatbots – which saw its employee count almost halved in two years – had created quality issues and led to a slump in customer satisfaction. Realising most customers prefer interacting with a human, Klarna has begun hiring back actual workers.

Software company Anysphere faced a customer backlash in April when its AI-powered support chatbot went rogue, kicking users out of the code-editing platform Cursor and delivering incorrect information. It then seemingly 'created' a new user policy out of thin air to justify the logouts – that the platform couldn't be used across multiple computers. Cursor saw a flood of customer cancellations as a result.

AI adviser and former Google chief decision scientist Cassie Kozyrkov took to LinkedIn to share her thoughts on the saga, dubbing it a 'viral hot mess'. 'It failed to tell users that its customer support 'person' Sam is actually a hallucinating bot,' Ms Kozyrkov wrote. 'It's only going to get worse with AI agents.'
Many companies pushing AI insist the technology is improving swiftly, but a host of experts aren't convinced its hype matches its ability. Earlier this year, the Association for the Advancement of Artificial Intelligence surveyed two dozen AI specialists and some 400 of the group's members and found a surprising level of pessimism about the potential of the technology. Sixty per cent of those surveyed did not believe problems with factuality and trustworthiness 'would soon be solved', it found.

Issues of accuracy and reliability are important, not just for growing public trust in AI, but for preventing unintended consequences in the future, AAAI president Francesca Rossi wrote in a report about the survey. 'We all need to work together to advance AI in a responsible way, to make sure that technological progress supports the progress of humanity and is aligned to human values,' Ms Rossi said.

Projects stalled or abandoned

Embarrassing and potentially costly issues like these are contributing to a backtrack, with analysis by S&P Global Market Intelligence showing the share of American and European companies abandoning their AI initiatives rising to 42 per cent this year, from 17 per cent in 2024. And a study released last month by consulting firm Roland Berger found a mammoth investment in AI technology wasn't translating into useful outcomes for many businesses.

Corporate spending on AI in Europe hit an estimated US$14 billion (AU$21.4 billion) in 2024, but just 27 per cent of companies were able to fully integrate the technology into their operations or workflows, the research revealed. 'Asked about the key challenges involved in implementing AI projects, 28 per cent of respondents cited issues with data, 25 per cent referenced the complexity of integrating AI use cases, and 15 per cent mentioned the difficulty of finding enough AI and data experts,' the study found. Those findings were mirrored in an IBM survey, which found just one in four AI projects delivered the returns they promised.

Dr Shafiabady said there are a few reasons for the problems facing AI, like those identified in Apple's research. 'When dealing with highly complex problems, these types of complex AI models can't give an accurate solution. One of the reasons why is the innate nature of algorithms,' Dr Shafiabady said. 'Models are built on mathematical computational iterative algorithms that are coded into computers to be processed. When tasks get very complicated, these algorithms won't necessarily follow the logical reasoning and will lose track of them.

'Sometimes when the problem gets harder, all the computing power and time in the world won't enhance AI model's performance. Sometimes when it hits very difficult tasks, it fails because it has learnt the example rather than the hidden patterns in the data. And sometimes the problem gets complicated, and a lot of computation resource and time is wasted over exploring the wrong solutions and there is not enough 'energy' left to reach the right solution.'

Age of Dead Sea Scrolls pushed back by new AI study

ABC News

04-06-2025

  • Science

The Dead Sea Scrolls are one of the most significant troves of religious manuscripts ever found, with many being the oldest surviving copies of biblical texts. First found by a Bedouin shepherd, the hundreds of ancient scrolls — excavated from the Qumran caves, in the West Bank, between 1946 and 1956 — have been a boon to those studying the history of Judaism and Christianity. But while we know the scrolls are all between 2,500 and 1,800 years old, just a fraction have dates written on them indicating when they were first composed. Figuring out the ages of the other scrolls can help scholars understand how Judaism evolved, and which scripts and ideas were important at different times.

Now an international team of researchers has aimed to fill some gaps in the Dead Sea Scrolls' timeline using a combination of artificial intelligence (AI), carbon dating and handwriting analysis. In the journal PLOS One, they proposed new ages for more than 100 scroll fragments, and found many to be older than previously thought. Gareth Wearne, a researcher in biblical studies and the history of ancient Israel at Australian Catholic University, said the research could change our understanding of the history of the Dead Sea Scrolls. "It potentially has implications for how we think about how the material came to be copied and disseminated at the beginning of the process that ultimately led to them being included in the biblical canon," Dr Wearne, who was not involved with the study, said.

Radiocarbon dating is often relied on in archaeology to find the age of an artefact, and the Dead Sea Scrolls are no exception. But the technique is vulnerable to contamination and often yields imprecise results, particularly for the period when the Dead Sea Scrolls were written: there are fewer artefacts with known dates to calibrate the scrolls' ages against. Plus, as University of Groningen archaeologist and study lead author Mladen Popović pointed out, "radiocarbon dating is a destructive method". Researchers now only need a few thousandths of a gram of material to carbon-date it, but artefacts with the cultural importance of the Dead Sea Scrolls are incredibly precious. Another common technique used to study the scrolls is palaeography, the study of handwriting, which looks at the way scripts have changed over centuries. But this method is also vulnerable to inaccuracies.

So researchers such as Professor Popović and his colleagues have looked for ways to date the scrolls when other methods fall short. In their new study, the team carbon dated 24 Dead Sea Scroll samples. They fed digital images of those 24 dated scrolls into a machine learning model — a type of AI — designed to analyse the handwriting in the scrolls, and then had the model predict the ages of 135 other scrolls based on their handwriting and scripts. The researchers named their AI model Enoch, after a figure depicted in the book of Genesis whom they deemed a "science hero".

Enoch's predictions, and the carbon-dated samples, showed many of the scrolls were older than previously thought — sometimes by decades, sometimes by a few years. The study suggested two of the Dead Sea Scrolls may date from the time their texts were first composed, or close to it. One scroll, which contains a fragment from the book of Daniel, was carbon dated to between 230 and 160 BC — up to 100 years older than previous estimates. This means it overlaps with when the text was believed to have been written, based on historical events it refers to.
Another scroll, containing text from Ecclesiastes, was dated with the Enoch AI to the third century BC. The text had previously been thought to have been created roughly in the mid-second century BC, based on how it aligned with the cultural movements of the era. If the dating is correct, these two fragments would be the first-known examples of biblical texts from the time when the work was composed.

Expert palaeographers checked the AI's results and found 79 per cent of them to be realistic predictions. Dr Wearne said the findings were "the single greatest step forward since the development of the original, conventional dating system" in the 1940s. "It then requires us to think about the social and the historical context in which the scrolls were produced in new ways."

Andrea Jalandoni, an archaeologist at Griffith University who wasn't involved with the research, said the addition of other techniques strengthened the reliability of the AI. "They've pinned it with radiocarbon and then evaluated it with expert palaeographers," Dr Jalandoni said. But, she said, the AI model was trained on a small sample size, which could limit its reliability.

Professor Popović plans to apply the Enoch model to more Dead Sea Scrolls, as well as other ancient Aramaic texts like the Elephantine Papyri. "The techniques and methods we developed are applicable to other handwritten [collections of text]," he said.

Dr Jalandoni, who studies rock art in Australia and South-East Asia, said the study gave her ideas for her own research. "I was looking at this and thinking: 'Wow, I wonder if I can do this with rock art,'" Dr Jalandoni said. "We have some dates for rock art, but not a lot." Australian rock art has very little carbon in it, making carbon dating a fruitless task, so archaeologists have to rely on other dating methods. "If we could … create a machine learning model that can predict dates that line up with more methods, I think it's the way to go," Dr Jalandoni said.
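The core idea the study describes, fitting a model to handwriting features from the carbon-dated fragments and then estimating dates for fragments without radiocarbon results, can be sketched in a few lines. The sketch below is illustrative only: the feature vectors, dataset and regressor are placeholder assumptions, not the actual Enoch pipeline, which works from digitised images of the scripts.

```python
# Minimal sketch (an assumption, not the study's Enoch model): learn a mapping
# from handwriting features to radiocarbon dates on the dated fragments, then
# predict dates for fragments that lack radiocarbon results.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder data standing in for real image-derived handwriting features:
# 24 carbon-dated fragments (training) and 135 undated fragments (prediction).
X_dated = rng.normal(size=(24, 16))        # feature vectors for dated scrolls
y_dated = rng.uniform(-300, 0, size=24)    # dates in years (negative = BC)
X_undated = rng.normal(size=(135, 16))     # feature vectors for undated scrolls

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_dated, y_dated)                  # fit handwriting -> date mapping
predicted_dates = model.predict(X_undated)   # estimated ages for undated scrolls
print(predicted_dates[:5])
```

In practice the features would come from the digitised scroll images themselves, and, as Dr Jalandoni notes, a training set of only 24 dated samples keeps the uncertainty of any such model high.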

Millions of Aussies warned as Hungry Jack's makes major drive-thru change: 'Disappear'

Yahoo

03-06-2025

  • Business

An artificial intelligence expert has warned Aussie fast-food workers their jobs may 'disappear' within the next 10 years. Fast-food chain Hungry Jack's is trialling the use of AI drive-thru assistants, leaving many to question where that leaves young people looking for casual jobs. Hungry Jack's has introduced a new 'digital voice-activated customer ordering system' at its St Peters store in Sydney. A spokesperson told Yahoo Finance it anticipated the trial would be rolled out to additional restaurants in the coming months.

Niusha Shafiabady, associate professor in computational intelligence at Australian Catholic University, told Yahoo Finance retail jobs were 'very likely' to change in the near future as AI becomes more and more common. 'At this stage, people are preparing our food; in the future, robots will be preparing our food,' she said. 'I would anticipate with the work that Elon Musk is doing in developing humanoid robots that even within the next 10 years people wouldn't be preparing our food. Those entry-level fast food jobs might disappear.'

KFC also trialled AI drive-thru technology at selected Sydney stores last year, but stressed it wasn't replacing any jobs. Restaurants in the United States have also trialled AI software, with major chains like Wendy's, McDonald's, Chipotle, Domino's and Taco Bell among those signing contracts with tech providers. Shafiabady said the move allowed businesses to save money and would reduce wait times, along with increasing order accuracy.

A Hungry Jack's spokesperson told Yahoo Finance its trial aimed to determine the effectiveness of the AI-powered technology in delivering 'service improvements'. Some customers have shared their confusion over the trial, with one calling it 'scary' and others even threatening to 'boycott' the chain over the move. Shafiabady said the technology might not go down well with customers initially. 'People at this stage might feel uncomfortable dealing with technology and they do not really trust the technology, but I would imagine that would be temporary,' she said. 'Even if they lose some of their customers in the beginning, people would get used to the technology. If you recall the first time that generative AI tools came to the market, like ChatGPT, people were not really comfortable using it. But now everybody's using them.' Shafiabady also flagged the potential risk of cyber attacks in the future.

Bank tellers, cashiers, postal workers and administrative assistants are among the jobs forecast to decline by 2030, according to the World Economic Forum's Future of Jobs Report. The Forum has estimated 170 million new jobs will be created this decade, but this will be offset by the loss of 92 million jobs, leaving a net growth of 78 million by 2030.

Shafiabady said jobs that involved performing repetitive tasks were the ones that would be displaced 'relatively quickly'. 'The first level of jobs that are at danger are the secretarial roles because you can have softwares that does the same thing for people,' she said. 'With the generative AI algorithms and tools that have become available, the tasks and jobs that are associated with them are at risk too. For example, translation. If someone was an interpreter their job would be at risk.'
Shafiabady said she expected some technician-level jobs, where the work involves operating a machine and analysing its output, would also be overtaken in the future. A number of Australians have raised concerns about replacing fast-food workers with AI. 'Goodbye to teenagers who need casual jobs,' one wrote. 'How will kids get job experience with these jobs being replaced?' another asked. 'You can't convince me all this investment in AI is cheaper than paying real wages,' another said.

Shafiabady said it was important for young people to understand there would be fewer entry-level jobs in the future. 'Businesses are going to restructure their roles and focus on high-value tasks rather than entry-level jobs,' she said. But she said the rise of AI would open up new job opportunities, with specialists in automation, cybersecurity experts and those able to work with and analyse data expected to be needed. 'That's the reality. The types of jobs of the future will be different from the types of jobs that we have had now, and we have had before,' she said. 'If you look back hundreds of years ago, the types of jobs were different, so we are evolving and the types of jobs will be changing.'
