Listen to The Country online: 'Tinder for cows' with Matamata farmer Matthew Zonderop


NZ Herald, 02-07-2025
Today on The Country radio show, host Jamie Mackay catches up with Matamata dairy farmer Matthew Zonderop, who is using ChatGPT to drive Perfect Cow Breeding Solutions, aka 'Tinder for cows'.
On with the show:
Christopher Luxon:
We ask the Prime Minister if he's doing a Sir Keir


Related Articles

Farmer Creates 'Tinder' For Dairy Cows

Scoop, 11 hours ago

A Waikato farmer has created what he says is essentially Tinder for cows, after a spreadsheet error caused him to lose the breeding data for his herd.

Matthew Zonderop, a 50-50 sharemilker, previously used multiple spreadsheets and coding to track his herd, manage insemination, and make genetic improvements. But after an error in his coding left his system no longer working, he turned to artificial intelligence and ChatGPT, which he had previously played around with, but not in a work capacity.

After the AI-powered tool fixed the code, he uploaded the entire spreadsheet and realised it could do it all for him. He said he hadn't looked back, launching his business Perfect Cow Breeding Solutions at Fieldays earlier this year to help other farmers. It removed the hassle of spreadsheets and helped farmers breed better cows by choosing the right bull.

"Basically, it's a matchmaking service for dairy farmers and their cows.

"Most cows throughout New Zealand are DNA profiled, and they give specific trait data on cows, so her protein levels, her fat levels, gestation length, live weight, and they are given to us in our herd records.

"So each cow has got its own herd record, and we can extract that data and then analyse it.

"It was always available to us, but we really, I suppose, didn't have the tools available to analyse it the way we are now with using AI."
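To illustrate the kind of matchmaking the article describes, here is a minimal, hypothetical sketch: it ranks bulls for a cow by comparing the traits named above (protein, fat, gestation length, live weight) against a breeding target. All names, numbers, and weights are invented for illustration; this is not the actual Perfect Cow Breeding Solutions system.

```python
# Toy "Tinder for cows" matchmaking sketch. All data and weights are
# hypothetical; the real system's scoring method is not described in the article.

COW = {"protein": 4.1, "fat": 4.9, "gestation_days": 281, "live_weight_kg": 480}

BULLS = [
    {"name": "Bull A", "protein": 4.4, "fat": 5.2, "gestation_days": 279, "live_weight_kg": 520},
    {"name": "Bull B", "protein": 3.8, "fat": 4.5, "gestation_days": 284, "live_weight_kg": 610},
]

# Hypothetical breeding goals: the trait profile we want offspring to move toward,
# and how much each trait matters to this farmer.
TARGET = {"protein": 4.5, "fat": 5.3, "gestation_days": 278, "live_weight_kg": 500}
WEIGHTS = {"protein": 3.0, "fat": 3.0, "gestation_days": 1.0, "live_weight_kg": 0.5}

def match_score(cow, bull):
    """Lower is better: weighted relative distance of the predicted offspring
    (a simple parent average) from the target trait profile."""
    score = 0.0
    for trait, weight in WEIGHTS.items():
        predicted = (cow[trait] + bull[trait]) / 2  # naive mid-parent prediction
        score += weight * abs(predicted - TARGET[trait]) / TARGET[trait]
    return score

# Rank all candidate bulls for this cow, best match first.
ranked = sorted(BULLS, key=lambda b: match_score(COW, b))
best = ranked[0]["name"]
```

In practice a farmer would feed in the DNA-profiled herd records the article mentions rather than hand-typed dictionaries, and a real scoring model would account for genetics far more carefully than a mid-parent average; the point here is only the extract-then-rank shape of the analysis.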

Farmer creates 'Tinder' for dairy cows

RNZ News, 17 hours ago

Photo: Supplied/Matthew Zonderop

The AI doctor will see you … soon

Newsroom, a day ago

Comment: Artificial intelligence is already widely used in healthcare. There are now more than 1000 Food and Drug Administration-authorised AI systems in use in the US, and regulators around the world have allowed a variety of AI systems to support doctors and healthcare organisations.

AI is being used to support radiologists examining X-rays and MRI scans by highlighting abnormal features, and to help predict how likely someone is to develop a disease based on their genetics and lifestyle. It is also integrated with consumer technology that many people use to manage their health. If you own an Apple Watch, it can use AI to warn you if you develop an abnormal heart rhythm.

More recently, doctors (including many GPs in Aotearoa New Zealand) have adopted AI to help them write their medical notes. An AI system listens in on the GP-patient conversation and then uses a large language model such as ChatGPT to turn the transcript of the audio into a summary of the consultation. This saves the doctor time and can help them pay closer attention to what their patient is saying, rather than concentrating on writing notes.

But there is still a lot we don't know about the future of AI in health. I was recently invited to speak at the Artificial Intelligence in Medicine and Imaging conference at Stanford University, and clinicians in the audience asked questions that are quite difficult to answer. For example, if an AI system used by a doctor makes a mistake (ChatGPT is well known for 'hallucinating' incorrect information), who is liable if the error leads to a poor outcome for the patient? It can also be difficult to accurately assess the performance of AI systems. Often studies only assess AI systems in the lab, as it were, rather than in real-world use on the wards.

I'm the editor-in-chief of a new British Medical Journal publication, BMJ Digital Health & AI, which aims to publish high-quality studies to help doctors and healthcare organisations determine which types of AI and digital health technologies are going to be useful in healthcare. We've recently published a paper about a new AI system for identifying which artery is blocked in a heart attack, and another on how GPs in the UK are using AI to transcribe their notes.

One of the most interesting topics in AI research is whether generative AI is better than a doctor at general-purpose diagnosis. There is some evidence emerging that AI may be starting to outperform doctors at diagnosing patients when given descriptions of complex cases. The surprising thing about this research is that it found an AI alone might be more accurate than a doctor using an AI to help them. This may be because some doctors don't know how to use AI systems effectively, which suggests that medical schools and training colleges should incorporate AI training into medical education programmes.

Another interesting development is the use of AI avatars (simulated humans) for patient pre-consultations and triage, something that seems likely to be implemented within the next few years. The experience will be very similar to talking with a human doctor, and the AI avatar could then explain to the real doctor what they found and what they would recommend as treatment. Though this may save time, a balance will need to be struck between efficiency and patients' preferences – would you prefer to see an AI doctor now, or wait longer to see a human doctor?

The advancement of AI in healthcare is very exciting, but there are risks. Often new technology is implemented without considering so-called human factors. These can have a big impact on whether mistakes are made using the new system, or even whether the system gets used at all. Clinicians and patients quickly stop using systems that are hard to use or that don't fit into their normal work routines. The best way to prevent this is 'human-centred design', where real people – doctors and patients – are included in the design process.

There is also a risk that unregulated AI systems are used to diagnose patients or make treatment decisions. Most AI systems are highly regulated – patients can be reassured that any AI involved in their care is being used safely. But there is a risk that governments may not keep up with the accelerating development of AI systems. Rapid, large-scale adoption of inaccurate healthcare-related AI systems could cause a lot of problems, so it is very important governments invest in high-quality AI research and robust regulatory processes to ensure patient safety.

Chris Paton will be giving a public lecture about AI in healthcare at the Liggins Institute on August 14 at 6pm. Register here.
