
Latest news with #DOCA

Federal Government and other creditors facing $300m wipeout on failed mineral sands projects

West Australian

25-07-2025



The Federal Government and other creditors of mineral sands miner Strandline Resources are facing a $300 million wipeout even with the sale of the collapsed company's flagship WA project to Japanese group Iwatani.

Iwatani, which already owns South West mineral sands miner Doral, is proposing to take control of the mothballed Coburn mine near Shark Bay from receivers with a $15m cash offer that would see secured creditors repaid less than 5¢ in every dollar they are owed. The deed of company arrangement for Coburn, if approved by creditors next week, would crystallise a loss of about $160m for the project's biggest backer, the government-owned Northern Australia Infrastructure Facility.

A statutory report by administrators Cor Cordis into the collapse of Strandline and its operating subsidiary Coburn Resources reveals NAIF is owed $167m, having advanced a final $5m just three months before the miner collapsed in February. With bondholders owed $94m and NAB nearly $17m, secured creditors alone are on the hook for $277.5m. Under the DOCA, they would likely collectively recover less than $10m, while 224 unsecured, mainly trade, creditors would share just $1.5m to settle another $49m of claims. Subject to clarification about which company actually employed them, the deed funds would also be used to pay $5 million in outstanding entitlements owed to nearly 170 employees.

The ASX-listed Strandline was put into administration on February 21 after its backers ran out of patience with protracted efforts to address Coburn's poor operating performance and restructure the group's hefty debt. Receivers from McGrathNicol took control of the mine under an almost simultaneous appointment by the secured creditors.

Strandline spent $260m developing Coburn to exploit a large-tonnage but low-grade deposit about 300km north of Geraldton. It entered commercial production in November 2022 but struggled from the start, falling well short of the targets assumed in the feasibility study that underpinned the development. Directors sheeted home blame to various factors, including design and construction flaws, unreliable equipment, labour shortages, and higher-than-expected handling and operating costs. However, administrators Thomas Birch and Jeremy Nipps added that Coburn never produced enough to do better than break even. Strandline and Coburn, they said, 'were reliant on funding from lenders to bridge their collective working capital deficit in circumstances where operations were never generating sufficient cash or gross profit'.

Iwatani's was one of two proposals received by McGrathNicol after a sale campaign. The receivers opted for the Japanese company partly because its offer was better, carried more certainty and 'would see the continuation of the Coburn project after a short period of care and maintenance'. Iwatani could not be immediately contacted on Friday.

F5 and NVIDIA to meet the needs of accelerated computing and AI

Tahawul Tech

24-06-2025



F5 has announced new capabilities for F5 BIG-IP Next for Kubernetes accelerated with NVIDIA BlueField-3 DPUs and the NVIDIA DOCA software framework, underscored by customer Sesterce's validation deployment. Sesterce is a leading European operator specialising in next-generation infrastructure and sovereign AI, designed to meet the needs of accelerated computing and artificial intelligence.

Extending the F5 Application Delivery and Security Platform, BIG-IP Next for Kubernetes running natively on NVIDIA BlueField-3 DPUs delivers high-performance traffic management and security for large-scale AI infrastructure, unlocking greater efficiency, control, and performance for AI applications. In tandem with the performance advantages announced alongside general availability earlier this year, Sesterce has successfully validated the F5 and NVIDIA solution across a number of key capabilities, including the following areas:

  • Enhanced performance, multi-tenancy, and security to meet cloud-grade expectations, initially showing a 20% improvement in GPU utilisation.
  • Integration with NVIDIA Dynamo and KV Cache Manager to reduce latency for large language model (LLM) inference and reasoning systems and to optimise GPU and memory resources.
  • Smart LLM routing on BlueField DPUs, running effectively with NVIDIA NIM microservices for workloads requiring multiple models, giving customers the best of all available models.
  • Scaling and securing Model Context Protocol (MCP), including reverse proxy capabilities and protections for more scalable and secure LLMs, enabling customers to swiftly and safely utilise the power of MCP servers.
  • Powerful data programmability with robust F5 iRules capabilities, allowing rapid customisation to support AI applications and evolving security requirements.

'Integration between F5 and NVIDIA was enticing even before we conducted any tests', said Youssef El Manssouri, CEO and Co-Founder at Sesterce.
'Our results underline the benefits of F5's dynamic load balancing with high-volume Kubernetes ingress and egress in AI environments. This approach empowers us to distribute traffic more efficiently and optimise the use of our GPUs while allowing us to bring additional and unique value to our customers. We are pleased to see F5's support for a growing number of NVIDIA use cases, including enhanced multi-tenancy, and we look forward to additional innovation between the companies in supporting next-generation AI infrastructure'.

Highlights of the new solution capabilities include:

LLM Routing and Dynamic Load Balancing with BIG-IP Next for Kubernetes

With this collaborative solution, simple AI-related tasks can be routed to less expensive, lightweight LLMs in supporting generative AI, while advanced models are reserved for complex queries. This level of customisable intelligence also enables routing functions to leverage domain-specific LLMs, improving output quality and significantly enhancing customer experiences. F5's advanced traffic management ensures queries are sent to the most suitable LLM, lowering latency and improving time to first token.

'Enterprises are increasingly deploying multiple LLMs to power advanced AI experiences—but routing and classifying LLM traffic can be compute-heavy, degrading performance and user experience', said Kunal Anand, Chief Innovation Officer at F5. 'By programming routing logic directly on NVIDIA BlueField-3 DPUs, F5 BIG-IP Next for Kubernetes is the most efficient approach for delivering and securing LLM traffic. This is just the beginning. Our platform unlocks new possibilities for AI infrastructure, and we're excited to deepen co-innovation with NVIDIA as enterprise AI continues to scale'.
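The routing idea F5 describes, sending simple queries to a lightweight model and complex ones to an advanced model, can be illustrated with a minimal sketch. This is a toy heuristic in Python, not F5's actual DPU-side routing logic; the model names and the complexity scoring are assumptions for illustration only.

```python
# Toy sketch of complexity-based LLM routing (illustrative only; model
# names and the scoring heuristic are hypothetical, not F5's implementation).

LIGHTWEIGHT_MODEL = "small-llm"   # assumed endpoint for cheap, simple queries
ADVANCED_MODEL = "large-llm"      # assumed endpoint for complex queries

def complexity_score(query: str) -> float:
    """Crude proxy for query complexity: length plus reasoning keywords."""
    keywords = ("why", "prove", "compare", "analyse", "analyze", "derive")
    kw_hits = sum(1 for k in keywords if k in query.lower())
    return len(query.split()) / 50.0 + kw_hits

def route(query: str, threshold: float = 1.0) -> str:
    """Return the model endpoint a query should be sent to."""
    if complexity_score(query) >= threshold:
        return ADVANCED_MODEL
    return LIGHTWEIGHT_MODEL
```

In a production router the scoring step would itself be a classifier, which is exactly the compute-heavy work the article describes offloading to BlueField-3 DPUs.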
Optimizing GPUs for Distributed AI Inference at Scale with NVIDIA Dynamo and KV Cache Integration

Earlier this year, NVIDIA Dynamo was introduced, providing a supplementary framework for deploying generative AI and reasoning models in large-scale distributed environments. NVIDIA Dynamo streamlines the complexity of running AI inference in distributed environments by orchestrating tasks like scheduling, routing, and memory management to ensure seamless operation under dynamic workloads. Offloading specific operations from CPUs to BlueField DPUs is one of the core benefits of the combined F5 and NVIDIA solution.

With F5, the Dynamo KV Cache Manager feature can intelligently route requests based on capacity, using key-value (KV) caching to accelerate generative AI use cases by retaining information from previous operations rather than requiring resource-intensive recomputation. From an infrastructure perspective, organisations storing and reusing KV cache data can do so at a fraction of the cost of using GPU memory for this purpose.

'BIG-IP Next for Kubernetes accelerated with NVIDIA BlueField-3 DPUs gives enterprises and service providers a single point of control for efficiently routing traffic to AI factories to optimize GPU efficiency and to accelerate AI traffic for data ingestion, model training, inference, RAG, and agentic AI,' said Ash Bhalgat, Senior Director of AI Networking and Security Solutions, Ecosystem and Marketing at NVIDIA. 'In addition, F5's support for multi-tenancy and enhanced programmability with iRules continues to provide a platform that is well-suited for continued integration and feature additions, such as support for the NVIDIA Dynamo Distributed KV Cache Manager'.

Improved Protection for MCP Servers with F5 and NVIDIA

Model Context Protocol (MCP) is an open protocol developed by Anthropic that standardizes how applications provide context to LLMs.
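The KV-caching principle discussed above, retaining the results of earlier computation so repeated work is served from a cache instead of being recomputed, can be sketched as a toy in-memory cache. This Python sketch is illustrative only; it is not the NVIDIA Dynamo KV Cache Manager API, and the `expensive` function merely stands in for prefill computation.

```python
# Toy illustration of KV-cache reuse: compute once per prompt prefix,
# then serve repeats from the cache (not the actual Dynamo API).

class KVCache:
    def __init__(self):
        self._store = {}   # prefix -> cached "KV state"
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, prefix: str, compute):
        """Return cached state for a prefix, computing it only on a miss."""
        if prefix in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[prefix] = compute(prefix)
        return self._store[prefix]

cache = KVCache()
expensive = lambda p: sum(ord(c) for c in p)   # stand-in for prefill work
cache.get_or_compute("You are a helpful assistant.", expensive)  # computed
cache.get_or_compute("You are a helpful assistant.", expensive)  # cache hit
```

The infrastructure point in the article is that this cached state can live in cheaper memory tiers rather than scarce GPU memory, with the cache manager routing requests to wherever the relevant prefix state is held.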
Deploying the combined F5 and NVIDIA solution in front of MCP servers allows F5 technology to serve as a reverse proxy, bolstering security capabilities for MCP solutions and the LLMs they support. In addition, the full data programmability enabled by F5 iRules promotes rapid adaptation and resilience for fast-evolving AI protocol requirements, as well as additional protection against emerging cybersecurity risks.

'Organisations implementing agentic AI are increasingly relying on MCP deployments to improve the security and performance of LLMs', said Greg Schoeny, SVP, Global Service Provider at World Wide Technology. 'By bringing advanced traffic management and security to extensive Kubernetes environments, F5 and NVIDIA are delivering integrated AI feature sets—along with programmability and automation capabilities—that we aren't seeing elsewhere in the industry right now'.

F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs is generally available now. For additional technology details and deployment benefits, go to and visit the companies at NVIDIA GTC Paris, part of this week's VivaTech 2025 event. Further details can also be found in a companion blog from F5.

Image Credit: F5

Spectro Cloud Integrates Palette with NVIDIA DOCA and NVIDIA AI Enterprise, Empowering Seamless AI Deployment Across Telco, Enterprise, and Edge

Business Wire

10-06-2025



SAN JOSE, Calif.--(BUSINESS WIRE)--Spectro Cloud, a leading provider of Kubernetes management solutions, today announced the integration of the NVIDIA DOCA Platform Framework (DPF), part of NVIDIA's latest DOCA 3.0 and NVIDIA AI Enterprise software, into its Palette platform. Building on its proven track record as a trusted partner for major organizations deploying Kubernetes in the cloud, at the data center, and at the edge, Spectro Cloud continues to expand its leadership in enabling production-ready infrastructure for AI and modern applications. This integration empowers organizations to efficiently deploy and manage NVIDIA BlueField-3 DPUs alongside AI workloads across diverse environments, including telco, enterprise, and edge. Spectro Cloud is excited to meet, discuss, and demonstrate this integration at GTC Paris, June 11-12.

With the integration of DPF, Palette users gain access to a suite of advanced features designed to optimize data center operations:

  • Comprehensive provisioning and lifecycle management: Palette streamlines the deployment and management of NVIDIA BlueField-accelerated infrastructure, ensuring seamless operations across various environments.
  • Enhanced security service deployment: With the integration of NVIDIA DOCA Argus, customers can elevate cybersecurity capabilities with real-time threat detection for AI workloads. DOCA Argus operates autonomously on NVIDIA BlueField, enabling runtime threat detection, agentless operation, and seamless integration into existing enterprise security platforms.
  • Support for advanced DOCA networking features: Palette now supports deployment of DOCA FLOW features, including ACL pipe, LPM pipe, CT pipe, ordered list pipe, external send queue (SQ), and pipe resize, enabling more granular control over data traffic and improved network efficiency.
NVIDIA AI Enterprise-ready deployments with Palette

Palette now supports NVIDIA AI Enterprise-ready deployments, streamlining how organizations operationalize AI across their infrastructure stack. With deep integration of NVIDIA AI Enterprise software components, Palette provides a turnkey experience to provision, manage, and scale AI workloads, including:

  • NVIDIA GPU Operator: Automates the provisioning, health monitoring, and lifecycle management of GPU resources in Kubernetes environments, reducing the operational burden of running GPU-intensive AI/ML workloads.
  • NVIDIA Network Operator: Delivers accelerated network performance using DOCA infrastructure, enabling the low-latency, high-throughput communication critical for distributed AI inference and training workloads.
  • NVIDIA NIM microservices: Palette simplifies the deployment of NVIDIA NIM microservices, a new class of optimized, containerized inference APIs that allow organizations to instantly serve popular foundation models, including LLMs, vision models, and ASR pipelines. With Palette, users can launch NIM endpoints on GPU-accelerated infrastructure with policy-based governance, lifecycle management, and integration into CI/CD pipelines, enabling rapid experimentation and production scaling of AI applications.
  • NVIDIA NeMo: With Palette's industry-leading declarative management, platform teams can easily define reusable cluster configurations that include everything from NVIDIA NeMo microservices to build, customize, evaluate and guardrail LLMs; to GPU drivers and NVIDIA CUDA libraries; to the NVIDIA Dynamo inference framework; plus PyTorch/TensorFlow and Helm chart deployments. This approach enables a scalable, repeatable, and operationally efficient foundation for AI workloads.

By integrating these components, Palette empowers teams to rapidly build, test, and deploy AI services, while maintaining enterprise-grade control and visibility.
This eliminates the traditional friction of managing disparate software stacks, GPU configurations, and AI model serving infrastructure.

"Integrating NVIDIA DPF into our Palette platform marks a significant step forward in delivering scalable and efficient AI infrastructure solutions," said Saad Malik, CTO and co-founder, Spectro Cloud. "Our customers can now harness the full potential of NVIDIA BlueField's latest advancements to drive accelerated networking, infrastructure optimization, AI security, and innovation across telco, enterprise, and edge environments."

'Organizations are rapidly building AI factories and need intelligent, easy-to-use infrastructure solutions to power their transformation,' said Dror Goldenberg, senior vice president of Networking Software at NVIDIA. 'Building on the DOCA Platform Framework, the Palette platform enables enterprises and telcos to deploy and operate BlueField-accelerated AI infrastructure with greater speed and efficiency.'

This strategic integration positions Palette as a comprehensive platform for organizations aiming to operationalize AI at scale, including:

  • Telco solutions: High-performance, low-latency infrastructure tailored for telecommunications applications.
  • Enterprise deployments: Scalable and secure AI infrastructure to support diverse enterprise workloads.
  • Edge computing: Lightweight, GPU-accelerated solutions designed for resource-constrained edge environments.

Palette is available today for deployment and proof of concept (POC) projects. For more information about Spectro Cloud's Palette platform, visit Learn more about our work with NVIDIA, including technical blogs, here.

About Spectro Cloud

Spectro Cloud delivers simplicity and control to organizations running Kubernetes at any scale. With its Palette platform, Spectro Cloud empowers businesses to deploy, manage, and scale Kubernetes clusters effortlessly — from edge to data center to cloud — while maintaining the freedom to build their perfect stack.
Trusted by leading organizations worldwide, Spectro Cloud transforms Kubernetes complexity into elegant, scalable solutions, enabling customers to master their cloud-native journey with confidence. Spectro Cloud is a Gartner Cool Vendor, CRN Tech Innovator, and a 'leader' and 'outperformer' in GigaOm's 2025 Radars for Kubernetes for Edge Computing, and Managed Kubernetes. Co-founded in 2019 by CEO Tenry Fu, Vice President of Engineering Gautam Joshi and Chief Technology Officer Saad Malik, Spectro Cloud is backed by Alter Venture Partners, Boldstart Ventures, Firebolt Ventures, Growth Equity at Goldman Sachs Alternatives, NEC and Translink Orchestrating Future Fund, Qualcomm Ventures, Sierra Ventures, Stripes, T-Mobile Ventures, TSG and WestWave Capital. For more information, visit or follow @spectrocloudinc and @spectrocloudgov on X.

Luxury bridal house collapses for a second time

Perth Now

08-05-2025



Luxury bridal house Pallas Couture has collapsed into administration for a second time in less than eight years, but worried customers have been assured they will still receive their wedding dresses.

Jeremy Nipps and Thomas Birch of Cor Cordis were appointed as administrators for Evercentre Pty Ltd — trading as Pallas Couture — earlier this month. They are now assessing the retailer's financial position. Pallas employs 12 staff and has studios in Subiaco and Paddington in Sydney.

Cor Cordis said it would evaluate operations and explore various avenues to restructure or recapitalise the business, including the possibility of a deed of company arrangement. 'The process will allow for Pallas Couture to continue prioritising all brides and continue operations as normal, with no disruption to the business or any creation and delivery of gowns currently in production,' they said.

Mr Nipps told The West Australian on Thursday that funding had been secured from an undisclosed third party, ensuring wedding gowns can be completed and delivered to customers. He could not yet comment on what led to the company's demise.

Pallas Couture's Joy Morris. Credit: Rob Duncan / The West Australian

Pallas Couture's collapse comes amid tough trading conditions for retailers as persistent cost-of-living pressures force consumers to tighten their belts, including brides-to-be when it comes to their wedding gowns. Pallas founder Joy Morris told The West in 2015 that her elaborate frocks don't come cheap, saying brides had to have a desire to spend in excess of $5000.

Cor Cordis and Mr Nipps had already helped pull Pallas back from the brink after it entered administration in November 2017, when the Australian Taxation Office applied to wind it up over an unpaid debt. Administrators at the time attributed the demise of the business — then called Pallas Bride and Fashion — to inadequate cashflow management.
The administration concluded with Pallas' creditors approving a DOCA floated by Ms Morris, saying it represented the best option for employees. Mr Nipps on Thursday said he was confident the business could be revived a second time. 'There is a viable business that can get through this process and continue trading as it did previously for the past eight years,' he said. 'Subject to doing a bit more investigation, this could just be a bit of a bump in the road.'

Telcos seek rules to tackle spam via business communications

Time of India

06-05-2025



India's top telcos Reliance Jio, Bharti Airtel and Vodafone Idea have called on the Department of Consumer Affairs (DOCA) to urgently notify guidelines aimed at preventing spam through business communications. In a letter written last week to DOCA secretary Nidhi Khare through industry body the Cellular Operators Association of India (COAI), the telcos said the guidelines can bridge the regulatory gaps that are being exploited by spammers. "We respectfully reiterate that the department may, under the powers conferred by Section 18 of the Consumer Protection Act, 2019, kindly notify the said guidelines at the earliest," COAI said.

Telecom executives and experts believe that, through the guidelines, the DOCA can curb unwanted communications from all stakeholders, including unregistered telemarketers and over-the-top (OTT) players. Such players currently evade any action from either the Telecom Regulatory Authority of India (Trai) or the Department of Telecommunications (DoT). While Trai has prescribed the Telecom Commercial Communication Customer Preference Regulation (TCCCPR), it caters only to registered telemarketers. Even under those rules, telecom operators are made the primary stakeholders, while telemarketers remain outside their scope. The DoT had written to Trai seeking recommendations for regulating telemarketers, but the sectoral watchdog is yet to come out with a consultation paper.
