Australia, NZ dollars suffer setback, NZ food prices shock

Mint, a day ago
SYDNEY, Aug 15 (Reuters) - The Australian and New Zealand dollars were back on the defensive on Friday after a high reading on U.S. producer inflation quashed talk of aggressive rate cuts there and boosted the greenback.
That left the Aussie nursing bruises at $0.6497, having fallen 0.8% overnight to wipe out two days of gains. It did find support at $0.6483, but was now a long way from the week's top of $0.6568.
The kiwi dollar huddled at $0.5918, after shedding almost 1% overnight to as low as $0.5910. Resistance is up at $0.5996, with more support around $0.5885.
Data from New Zealand contained some unwelcome news on inflation as food prices rose 0.7% in July to be up 5% on a year earlier, with butter alone rising 42% and cheese 30%.
There were also sharp increases in electricity costs, petrol, air travel and holiday accommodation. One bright spot was a slowdown in rents, which account for a large slice of CPI.
With the economy sluggish and plenty of spare capacity on the supply side, analysts still assume the Reserve Bank of New Zealand will cut rates again when it meets next week.
Markets imply a 95% chance of a quarter point reduction in the official cash rate to 3.0%, and perhaps one further move to a floor of 2.75% early next year.
Attention will be on its outlook for the economy and whether the path of the OCR is lowered from the 2.9% projected back in May, which is seen as the lower end of neutral.
"Higher food prices and continued increases in administered costs like rates, mean that inflation is likely to rise to around 3% before the end of this year, higher than the RBNZ had previously forecast," noted Kelly Eckhold, chief NZ economist at Westpac.
"We don't expect a significant change in the RBNZ's OCR profile, which is still likely to indicate a chance of a further cut in November," he added. "Beyond this meeting a data-dependent easing bias seems likely."
Jarrod Kerr, chief economist at Kiwibank, thinks the RBNZ will be more dovish and lower the OCR floor to 2.75% or even 2.6%.
"The weakness in the economy demands stimulus," he said. "With all the risks offshore, and the pain still felt onshore, there's a good argument to be made for taking policy into stimulatory territory ASAP. An argument that is growing in support."
(Reporting by Wayne Cole; Editing by Sam Holmes)

Related Articles

Chinese state media calls U.S. a 'surveillance empire' over trackers in chip shipments

The Hindu, an hour ago

The United States' practice of installing location trackers in chip shipments at risk of diversion to China reflects the "instincts of a surveillance empire," China's state-run media outlet Xinhua said in a commentary published on Friday. Reuters reported earlier this week that U.S. authorities had secretly placed location tracking devices in targeted shipments of advanced chips to detect diversions to China, which is subject to U.S. curbs on advanced chip exports.

The Xinhua commentary, titled "America turns chip trade into a surveillance game," cited "reports" that Washington had embedded such trackers, accusing the United States of running "the world's most sprawling intelligence apparatus". The U.S. government has in the past few years tightened restrictions on exports to China of advanced chips as well as related technology and equipment, as the two superpowers vie for technological dominance.

The Chinese commentary follows longstanding accusations from Washington and its Western allies that China could use some exported products, from telecommunications equipment to vehicles, for surveillance, posing potential security risks. In 2022, the Biden administration banned the sale and import of new telecommunications equipment from several Chinese firms, including Huawei, citing national security concerns. In January, it intensified scrutiny by targeting China-made cars and trucks.

In its commentary, Xinhua accused the U.S. government of seeing its trading partners as "rivals to be tripped up or taken down," adding that "if U.S. chips are seen as Trojan horses for surveillance, customers will look elsewhere." China's cyberspace watchdog last month said it had asked U.S. chipmaker Nvidia to explain whether its H20 chips had any backdoor security risks: a hidden method of bypassing normal authentication or security controls. Chinese authorities have also cautioned domestic tech firms over their use of H20 chips, Reuters recently reported.

Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info

The Hindu, an hour ago

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.' These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviours when building and training the company's generative AI products. The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch').'
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.'

Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.' Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.' The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue.
For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualised fantasy requests, with separate entries for how to respond to requests such as digitally undressing singer Taylor Swift. Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.' The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.'

A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example. Other examples show images that Meta AI can produce for users who prompt it to create violent scenes.
The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state. Meta had no comment on the examples of violence.

Accenture to buy Australian cybersecurity firm CyberCX for reported $650 million

The Hindu, 2 hours ago

Accenture said on Thursday it will buy Australian cybersecurity firm CyberCX in its largest-ever deal in the sector, with the Australian Financial Review valuing the transaction at more than A$1 billion ($650 million). A wave of devastating cyberattacks has battered Australia, including a 2022 breach at telecom major Optus that exposed the personal data of up to 10 million users, and a hack on health insurer Medibank affecting nearly 10 million customers. In July, Qantas Airways disclosed that criminals had infiltrated one of its call centres, accessing personal information of six million customers.

Private equity firm BGH Capital, which is selling CyberCX, has not disclosed any financial terms. Accenture declined to provide additional details, while BGH Capital didn't respond to Reuters' requests for comment on the reported valuation. The deal underscores the surging demand for advanced digital security services as businesses worldwide face increasingly sophisticated cyber threats that disrupt operations and compromise sensitive data.

Melbourne-based CyberCX was formed in 2019 through the merger of 12 smaller cybersecurity firms backed by BGH Capital. The company now employs about 1,400 people and runs security operations centres across Australia and New Zealand, with offices in London and New York. CyberCX is led by John Paitaridis, formerly managing director of Optus Business, and Chief Strategy Officer Alastair MacGibbon, Australia's former national cybersecurity coordinator. The leadership's ties to Optus are notable, given the telecommunications company's 2022 data breach, which exposed names, birth dates, addresses, phone numbers, email contacts and passport and driver's license numbers.

Since 2015, Accenture has completed 20 security acquisitions, including recent purchases of Brazilian cyber defense firm Morphus, MNEMO Mexico and Spain-based Innotec Security.
On the domestic front, the firm entered into a $700 million collaborative agreement with Telstra in February, aiming to implement AI capabilities across the telecommunications company.
