Why you should consider using AI if you've been avoiding it

If you've been avoiding generative artificial intelligence (AI), you're "missing the next industrial revolution".
That's according to Nici Sweaney, CEO of AI consultancy business AI Her Way.
Nici Sweaney says AI tools will redefine how we work, live and interact. (Supplied: Nici Sweaney)
Dr Sweaney, who lives on Yuin Country on the New South Wales South Coast, says using AI systems comes with a competitive advantage.
"We think that if you use it daily in work, you get about a 40 per cent increase in productivity and efficiency," she says.
"If you work a full-time job, that's two extra days of work a week."
She says ChatGPT, Copilot, Claude and Gemini, which are known as large language models, are among the most popular tools.
There's a divide in who is using these tools, with men more likely to be using them than women and "about twice as likely to be using [generative AI] in a workplace setting".
Sandra Peter is an associate professor at the University of Sydney Business School and co-director of the school's Sydney Executive Plus, which focuses on upskilling emerging leaders.
She thinks of large language models "as having a personal assistant" who is knowledgeable, eager to help, polite, but "sometimes does make mistakes".
How to get started
Dr Sweaney recommends people begin by using AI tools in low-stakes ways in their personal lives.
If you're keen to experiment with it at work, low-risk tasks are the best place to start, she says.
Experts' tips on how to start using AI tools:
Start with simple and low-stakes personal tasks
Identify tasks you can explain easily to others and don't enjoy
Try using different free systems and find one you prefer
When deciding which tasks could be delegated to AI in your life, Dr Sweaney suggests making a list of the tasks you do often, which could include responding to emails, prioritising your workload, or writing the grocery list.
If you could explain the task to someone else, highlight it. Give it a gold star if you don't enjoy doing it. Dr Sweaney says these tasks are "prime territory" for delegating to generative AI.
Dr Peter says if she were a beginner, she would divide her daily tasks into categories "and think about how [AI tools] can help me in those different areas". Planning and preparation tasks are often good candidates.
She suggests people try out different tools to see what works best for them.
"I want to encourage [people] to experiment in very simple, straightforward ways." For example, you could start by asking a system to proofread some text.
How do you use AI tools to make your everyday life easier? Email everyday@abc.net.au
Dr Sweaney doesn't advise paying for an AI tool. Most large language models have free versions and most people "won't be able to tell the difference".
"It's just about finding one that you enjoy using and then learning to use that well."
When not to use AI tools
Dr Sweaney says some people make the mistake of directing these tools as if they are using a search engine.
"It's much more like having an employee or an intern," she says. You're likely to get better results if you show the tool an example, and describe what you do and don't like.
Zena Assaad says there are risks involved when using AI programs and tools. (Supplied: Zena Assaad)
Dr Peter says these tools don't excel at maths and recommends you use a calculator instead.
"Don't use it as an accuracy machine," she also warns. These tools are better at summarising or critiquing content you offer up, she says.
Zena Assaad is a senior lecturer at the Australian National University's School of Engineering, on Ngunnawal Country in Canberra, whose research interests include the safety of AI systems.
She encourages caution when using these tools, especially in work settings or when sensitive information is involved.
Dr Assaad says while these tools and systems can be very helpful, a lot of people are using them when they shouldn't be.
"I do think that we're seeing a loss of critical thinking skills by using these tools," she says.
"It's your conscious choice whether or not you use it, and how you use it."
What about the information I input?
Dr Assaad says that when we engage with these systems, our personal information is being used to improve them, and these systems can then be used in ways we might not be comfortable with.
You can usually opt out of your data being used to train the AI model, Dr Assaad says, but it is often "hard to find" how to do so, with many users "opting in" by default.
Dr Peter encourages people to be "very mindful" of what they're submitting, particularly if any information is confidential, or not their own work or data.
Dr Sweaney says: "If you want to be really safe, turn data sharing off, and if you wouldn't put it on a public forum maybe think twice about whether you want to use AI."
Other ethical considerations

Dr Peter says there are myriad ethical considerations that come with these tools and systems.
While you may use it to check spelling or for feedback, "you don't want to pass off AI work as your own".
Also, if you're using these tools to recreate work in the style of an author or artist, they are "not being remunerated".
