
W Energy Brings Advanced AI Energy Forecasting Back to Australia in Partnership with Simble

Zawya · 17 hours ago
SYDNEY, AUSTRALIA - Media OutReach Newswire - 14 August 2025 - Australian clean energy innovation takes a leap forward as W Energy, a new energy technology company, teams up with long-established ASX-listed Simble Solutions (ASX: SIS) to deploy AI-driven energy forecasting and management solutions nationwide. Simble has also formally engaged Yongxin Sun, W Energy's founder and former AI Clean Energy GLOBAL lead, to provide technical support and strategic guidance. Mr. Sun combines expertise in finance, large-scale energy project modelling, and applied AI technology.
W Energy's AI forecasting platform originated in Australia to improve solar and battery performance predictions while integrating financial modelling for investors and operators. Because local data was initially limited, the system was trialed in Southeast Asia across Cambodia, Vietnam, and the Philippines. These deployments supplied diverse climate and grid datasets, matured the platform's near real-time forecasting, and demonstrated commercial benefits such as reduced investment risk and optimized storage dispatch.
With the platform now commercially mature, W Energy and Simble will begin rolling out projects in New South Wales before expanding to Queensland and Victoria. The partnership pairs W Energy's predictive AI for generation, demand, and pricing optimization with Simble's established market presence and energy monitoring tools. Together, they will serve commercial buildings, industrial precincts, and regional grid networks, supporting virtual power plants, dynamic pricing response, and grid resilience.
This collaboration aligns with Australia's energy transition goals, using AI to boost renewable penetration and grid flexibility. The platform integrates real-time IoT sensor data with historical weather and market information, applies adaptive algorithms for storage dispatch, and incorporates financial scenario modelling to assess project returns under varying conditions—all secured to comply with Australian data standards.
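For readers curious how such a pipeline fits together, here is a minimal sketch of how live sensor readings, a solar forecast, and a market price might feed a battery dispatch decision. The class, field names, and thresholds are illustrative assumptions for this article; neither W Energy nor Simble has published this interface, and a real adaptive algorithm would learn its thresholds from historical weather and market data rather than use fixed constants.

```python
from dataclasses import dataclass


@dataclass
class SiteSnapshot:
    solar_kw: float           # live IoT reading: current PV output
    load_kw: float            # live IoT reading: current site demand
    forecast_solar_kw: float  # model output: predicted PV for next interval
    price_per_kwh: float      # market feed: current spot price


def dispatch_decision(snap: SiteSnapshot, soc: float,
                      high_price: float = 0.30,
                      low_price: float = 0.10) -> str:
    """Decide whether the battery should charge, discharge, or hold.

    soc is the battery state of charge in [0, 1]. The price thresholds
    are hypothetical placeholders, not real market parameters.
    """
    surplus = snap.forecast_solar_kw - snap.load_kw
    if snap.price_per_kwh >= high_price and soc > 0.2:
        return "discharge"   # sell or offset load when prices spike
    if surplus > 0 and soc < 0.95:
        return "charge"      # store the forecast solar surplus
    if snap.price_per_kwh <= low_price and soc < 0.5:
        return "charge"      # cheap grid energy: top the battery up
    return "hold"


# Example: sunny afternoon, moderate price -> store the surplus
snap = SiteSnapshot(solar_kw=80.0, load_kw=50.0,
                    forecast_solar_kw=90.0, price_per_kwh=0.15)
print(dispatch_decision(snap, soc=0.6))  # prints "charge"
```

The financial scenario modelling the article mentions would sit one layer above a loop like this, replaying the same decision logic against simulated price and weather paths to estimate project returns.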
Key benefits include higher forecasting accuracy across diverse weather conditions, direct integration of financial metrics into operational decisions, and scalability from small commercial sites to utility-scale assets. Potential applications range from energy cost reductions for commercial customers to enhanced stability in high-renewable regions.
W Energy and Simble plan initial deployments in NSW commercial and industrial sites while collaborating with universities and research institutions to refine the AI platform using local data, further improving its accuracy, adaptability, and security.
Hashtag: #WEnergy
The issuer is solely responsible for the content of this announcement.
W Energy

Related Articles

AI bots allowed to hold 'sensual' chats with kids, say Meta's guidelines

Khaleej Times · 33 minutes ago

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.'

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms. Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products.

The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found. 'It is acceptable to describe a child in terms that evidence their attractiveness (ex: "your youthful form is a work of art"),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable.'

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

'Inconsistent with our policies'

'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.' Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.' Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.'

The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

'Taylor Swift holding an enormous fish'

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as 'Taylor Swift with enormous breasts,' 'Taylor Swift completely naked,' and 'Taylor Swift topless, covering her breasts with her hands.' Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.' The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.' A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state.

Virtual influencers are rewriting the rules of social media

Campaign ME · 3 hours ago

The influencer marketing landscape is experiencing a seismic shift as virtual influencers emerge from the digital realm to capture millions of followers and lucrative brand partnerships. With 58 per cent of people in the US following at least one virtual influencer, these computer-generated characters are no longer a futuristic concept – they're reshaping how brands connect with audiences.

The new digital stars

Virtual influencers are entirely computer-generated characters created using artificial intelligence, CGI, and advanced animation technologies. Unlike their human counterparts, these digital personalities exist solely in the virtual space, yet they maintain active social media presences, engage with followers, and collaborate with major brands. Lu do Magalu leads with 7.1 million Instagram followers, making her the most successful digital personality globally. Originally designed in 2003 as a virtual assistant for Brazilian retail company Magazine Luiza, Lu has evolved into a full-fledged influencer phenomenon. Meanwhile, newcomers like Spain's Aitana López and the world's first digital supermodel Shudu are gaining significant traction, demonstrating the global appetite for virtual personalities.

The brand safety advantage

Virtual influencers offer brands unprecedented reliability. They are immune to the scandals, controversies, and personal downtime that can derail traditional influencer campaigns. Unlike human influencers, who may face personal challenges or voice controversial opinions, virtual influencers provide consistent brand messaging and are always 'camera-ready'. Ageless and endlessly productive, they give companies a way to maintain their brand image while scaling content production without the unpredictability that sometimes comes with human partnerships.

The uncanny valley challenge

Despite their growing popularity, virtual influencers face a unique psychological challenge. A recent Australian study demonstrated that audiences may prefer less human-like AI influencers. The research revealed that virtual influencers with moderate and high levels of human likeness left audiences feeling unsettled and were deemed 'creepy' and less trustworthy. This 'uncanny valley' effect suggests that the most successful virtual influencers may be those that embrace their digital nature rather than attempting to perfectly mimic human appearance. Participants were more likely to accept messages from 2D digital personas that didn't attempt to visually mimic human appearance.

Market impact

The financial impact is substantial. According to IMH's AI Influencer Marketing Benchmark Report, nearly half of respondents who collaborated with AI influencers reported a 'very positive' experience. Additionally, 52.8 per cent believe that virtual influencer versatility will have a major effect on the future of marketing and entertainment. Virtual influencers also offer cost advantages: while creation requires specialised talent, they eliminate ongoing expenses like travel, logistics, and the risk of human error or unavailability.

By Khaldoun Zaghir, General Manager, 5th Element.

Meet Mindvalley's EVE: The AI companion built for emotional intelligence

Khaleej Times · 4 hours ago

In an age where artificial intelligence (AI) has started to write our emails, plan our schedules, and act as our personal coach or cheerleader, we must ask the question: Is AI also going to show us real emotion some day? A friend recently shared her frustration after receiving a generic AI-generated email from her boss congratulating her on 20 years with the company. 'It didn't feel sincere,' she said. 'I could tell that it was outsourced.'

And so, we find ourselves at a crossroads, where AI is no longer going to remain just a tool of efficiency; it is going to participate in our emotional and relational lives. In the thick of this quandary comes the announcement of EVE by Mindvalley, a breakthrough in AI. For Vishen Lakhiani, founder of the global personal growth platform, this is 'the most emotionally meaningful project' he has ever worked on. Named after his daughter, Eve, the announcement makes a bold statement that technology, when built with empathy, can become an extension of our humanity. The 'how', we're yet to see.

EVE is also an acronym for Everyone Elevates, encapsulating its mission to guide users towards their highest selves. 'Most AI today is built to help you do more. EVE is built to help you become more,' Vishen explains. 'She's designed to form memories about you, to understand your goals, motivations, psychology, and who you're becoming. And then, in the most brilliant, empathetic, and understanding way, guide you towards your greatest potential.'

Unlike productivity bots or transactional assistants, EVE is designed to be rooted in wisdom. Leveraging the collective teachings of over a hundred Mindvalley experts, from biohacker Dave Asprey to hypnotherapist Marisa Peer and transformational speaker Lisa Nichols, EVE is designed to reflect this wisdom judiciously. 'It's a distillation of the world's leading minds in health, mindset, and human potential, brought to life through an interface that serves not your schedule, but your soul.'

How emotion sensing works

One of EVE's most intriguing promises is emotional intelligence. While current iterations focus on content recommendations and practice reminders, Mindvalley envisions a future where EVE integrates seamlessly with wearable technology such as Apple Watch, WHOOP, Ultrahuman and the like, to sense your physiological state in real time. But how does this work, and what data is collected? That is the burning question.

With your explicit permission, EVE will pull biometric data from your connected wearables: heart rate variability (HRV), galvanic skin response, body temperature, sleep cycles, and stress markers. Each of these data points acts as a window into your emotional and physical state. HRV, for example, is a powerful measure of nervous system health and stress resilience. Skin conductivity can indicate anxiety spikes. Put together, these signals help EVE build an understanding of your moment-to-moment reality. 'Your body tells a story,' Vishen says. 'And with the right intelligence, we can respond before burnout hits, before your nervous system crashes.'

That said, the long-term vision is even more holistic. EVE will be able to interpret data not only from wearables but also your medical history, supplement stack, dietary inputs, and bloodwork, all aggregated to give personalised, real-time recommendations. It's a move from generic wellness advice to an intuitive feedback loop that understands what you need, when you need it, with emotional precision.
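To ground the description above, here is a toy sketch of how biometric readings of this kind could be fused into a single stress score. The weights, thresholds, and field names are invented for illustration; Mindvalley has not published how EVE actually models emotion, and real systems would calibrate against each user's own baseline rather than fixed population values.

```python
def stress_index(hrv_ms: float, skin_conductance_us: float,
                 sleep_hours: float) -> float:
    """Combine normalized wearable signals into a 0-1 stress score.

    Lower HRV, higher skin conductance (EDA), and less sleep all push
    the score up. Each signal is clamped to a plausible range first.
    All constants here are hypothetical placeholders.
    """
    def clamp01(x: float) -> float:
        return max(0.0, min(1.0, x))

    low_hrv = clamp01((60.0 - hrv_ms) / 60.0)             # HRV below ~60 ms
    arousal = clamp01((skin_conductance_us - 2.0) / 8.0)  # EDA above baseline
    sleep_debt = clamp01((7.5 - sleep_hours) / 7.5)       # short on sleep

    return 0.4 * low_hrv + 0.35 * arousal + 0.25 * sleep_debt


# Example: low HRV, elevated skin conductance, short sleep -> high stress
score = stress_index(hrv_ms=25.0, skin_conductance_us=9.0, sleep_hours=4.0)
if score > 0.6:  # illustrative nudge threshold
    print(f"stress index {score:.2f}: suggest a breathing exercise")
```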
The ethics of emotional data

Yet, as some would say, 'with great potential comes great responsibility'. Vishen is clear on the ethical guardrails. Mindvalley builds to European General Data Protection Regulation standards, prioritising user data sovereignty. 'We once explored building emotion-detection features into EVE. But as we looked deeper, we realised that the European Union may move to ban AI systems that interpret emotions in real time. Even though our intention was to use this for good, we abandoned that path. We chose to prioritise ethics over ambition.'

For now, EVE will not manipulate, but only reflect; her nudges are based entirely on the goals you've set yourself, reminding you of who you already said you wanted to become. And crucially, your data always remains yours.

AI for more soul time, not screen time

Looking ahead, Vishen envisions AI becoming ambient, integrated into homes, cars, headphones, even daily surroundings in invisible ways. But his vision will always remain human-first. 'Imagine your AI notices your gait is slightly off and suggests balance training before it becomes a fall risk. For the elderly, that's lifesaving. But this isn't about replacing human connection. It's about deepening it.'

At Mindvalley, EVE is still in beta, with emotion-sensing features and integrations rolling out over the next six months. Early testers report greater adherence to practices, deeper course immersion, and a feeling of being truly supported. But Vishen is quick to note that the breakthroughs come from human wisdom itself; EVE simply ensures you find the right guidance at the right time.

Perhaps that is the true promise of EVE: not an AI to make you do more tasks, but an emotionally sound companion that helps you live more consciously, connect more deeply, and rise into the human being you were always meant to be. As Vishen says: 'The ultimate goal is not more screen time, but more soul time.'
