
WAFIC calls on Mid West fishers to have their say on how planned Pilot Energy survey could impact operations
The WA Fishing Industry Council is calling for local fishers to have their say on a 3D marine seismic survey planned to take place in the Mid West.
Pilot Energy has put forward an environmental plan to undertake the Eureka 3D marine seismic survey next year to collect data about the underlying rock types as research for oil and gas exploration.
According to the plan, Pilot holds a 21.25 per cent interest in the Cliff Head oil field and Cliff Head infrastructure in the North Perth Basin, about 7km south of Dongara.
The plans indicate the survey would involve a single vessel towing a seismic source array that uses compressed air to emit sound pulses, which reflect off the seabed and underlying rock formations.
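The reflection principle behind such surveys can be illustrated with a simple back-of-the-envelope calculation: the time a sound pulse takes to return tells surveyors how deep a rock layer sits. The velocity and travel-time figures below are illustrative assumptions, not values from Pilot Energy's plan.

```python
def reflector_depth(two_way_time_s: float, velocity_m_s: float) -> float:
    """Estimate the depth of a reflecting rock layer.

    The pulse travels down and back, so depth is the average sound
    velocity multiplied by half the recorded two-way travel time.
    """
    return velocity_m_s * two_way_time_s / 2

# Example: a pulse echoing back after 2 seconds through water and
# sediment with an assumed average velocity of 2000 m/s
depth_m = reflector_depth(2.0, 2000.0)
print(depth_m)  # 2000.0 (metres)
```

Repeating this measurement across a grid of source and receiver positions is what lets a 3D survey build up an image of the rock structure beneath the seabed.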
WAFIC expressed concerns about the survey, saying the seismic blasts could reduce catches both during and after the survey because of behavioural changes in marine animals.
Feedback on the project was previously sought in February and March last year; some respondents opposed the use of Cliff Head for carbon capture, while others opposed seismic activity in general.
According to the document, Pilot Energy outlined potential impacts and risks, most of which were given low residual risk rankings.
However, fishing groups, including the Western Rock Lobster Council, have previously expressed concerns about the project.
WAFIC said it was advocating through consultation with Pilot Energy to ensure the fishing industry's perspective was heard, and it is seeking feedback from affected fishers.
The organisation asked anyone wishing to provide feedback to email olivia.mickle@wafic.org.au; WAFIC will compile the responses and pass them on to Pilot Energy.
