
VistaShares Announces April 2025 Distribution for OMAH ETF
SAN FRANCISCO & BOSTON & NEW YORK--(BUSINESS WIRE)-- VistaShares, an innovative asset manager seeking to disrupt the status quo in thematic exposures, income investing, and more, is today announcing the April monthly distribution amount for the VistaShares Target 15™ Berkshire Select Income ETF (OMAH).
Distributions currently include a return of investor capital.
For more information and updates from VistaShares, please visit www.VistaShares.com and follow the firm on LinkedIn @VistaShares and on X @VistaSharesETFs.
About VistaShares
At VistaShares, we strive to deliver innovative investment solutions for today's investors, helping them navigate evolving market opportunities with confidence. VistaShares ETFs are actively managed by industry and investment experts, offering two distinct strategies. Our Pure Exposure™ ETFs target technology-driven economic Supercycles™ that we believe are poised for significant growth. Additionally, our Target 15™ option-based income ETFs are designed to generate high monthly income while complementing a core equity portfolio.
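For readers unfamiliar with option-based income, the sketch below illustrates a covered call, one common way option strategies convert equity exposure into premium income. It is purely illustrative: the release does not specify the Fund's actual options strategy, and every price, strike, and premium in the example is hypothetical.

```python
# Hypothetical illustration of a covered-call position, a common option-income
# approach. It is NOT a description of the Fund's actual strategy or holdings;
# all prices, strikes, and premiums below are invented for the example.

def covered_call_payoff(share_cost, strike, premium, price_at_expiry):
    """Profit per share of owning stock and selling one call against it."""
    stock_pnl = price_at_expiry - share_cost
    # The short call loses value for the writer when the stock finishes above
    # the strike, but the premium collected is kept in every scenario.
    short_call_pnl = premium - max(price_at_expiry - strike, 0.0)
    return stock_pnl + short_call_pnl

for final_price in (80.0, 100.0, 120.0):
    pnl = covered_call_payoff(share_cost=100.0, strike=105.0, premium=2.50,
                              price_at_expiry=final_price)
    print(f"stock at {final_price:>6.2f}: covered-call P&L {pnl:+.2f} per share")
```

The trade-off shown is typical of option-income strategies: premium is collected in every scenario, while gains above the strike are forgone.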
Investors should consider the investment objectives, risks, charges and expenses carefully before investing. For a prospectus or summary prospectus with this and other information about the Fund, please call (844) 875-2288. Read the prospectus or summary prospectus carefully before investing.
Investing involves risk, including possible loss of principal.
Index / Strategy Risks. The Index's holdings are derived from publicly available data, which may be delayed relative to the then-current portfolio of Berkshire Hathaway. Consequently, the Fund's holdings, which are based on the Index, may not accurately reflect Berkshire Hathaway's most recent publicly disclosed investment positions and may deviate substantially from its actual current portfolio. The equity securities represented in the Index are subject to a range of risks, including, but not limited to, fluctuations in market conditions, increased competition, and evolving regulatory environments, all of which could adversely affect their performance.
Focused Portfolio Risk. The Fund will hold a relatively focused portfolio that may contain exposure to the securities of fewer issuers than the portfolios of other ETFs. Holding a relatively concentrated portfolio may increase the risk that the value of the Fund could go down because of the poor performance of one or a few investments.
Distribution Risk. Although the Fund has an annual income target, the Fund intends to distribute income on a monthly basis. There is no assurance that the Fund will make a distribution in any given month.
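As a purely hypothetical illustration of how an annual income target relates to a monthly payout (the release does not disclose the Fund's NAV, the exact definition of the target, or the April distribution amount), the arithmetic might look like this:

```python
# Hypothetical arithmetic only: the release does not state the Fund's NAV,
# target rate, or the April distribution amount. All values below are invented.

annual_target_rate = 0.15   # assumed 15% annual income target; reading the
                            # "Target 15" name as 15% is an assumption here
nav_per_share = 20.00       # hypothetical net asset value per share

indicative_monthly = nav_per_share * annual_target_rate / 12
print(f"Indicative monthly distribution: ${indicative_monthly:.2f} per share")
# -> roughly $0.25 per share in this made-up scenario; actual distributions
#    can vary, may include a return of capital, and are not guaranteed in any month.
```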
Derivatives Risk. Derivatives are financial instruments that derive their value from one or more underlying reference assets, such as stocks, bonds, funds (including ETFs), interest rates, or indexes.
Options Contracts Risk. The use of options contracts involves investment strategies and risks different from those associated with ordinary portfolio securities transactions. The prices of options are volatile and are influenced by, among other things, actual and anticipated changes in the value of the underlying instrument, changes in the actual or implied volatility of the reference asset, and the time remaining until expiration of the option contract, all of which are affected by fiscal and monetary policies and by national and international political and economic events.
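To make the sensitivities above concrete, the following sketch uses the textbook Black-Scholes formula, which is not necessarily the pricing model used for the Fund or its counterparties, to show how a call option's value rises with implied volatility and time to expiration. All inputs are hypothetical.

```python
# A standard Black-Scholes call price, used here only to illustrate how option
# values respond to implied volatility and time to expiration (two of the
# factors named above). Textbook model with hypothetical inputs, nothing more.
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(spot, strike, rate, vol, t_years):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    n = NormalDist().cdf
    return spot * n(d1) - strike * exp(-rate * t_years) * n(d2)

# Higher implied volatility and more time to expiration both raise the premium.
for vol in (0.15, 0.25, 0.35):
    for t in (1 / 12, 3 / 12):
        price = bs_call(spot=100.0, strike=105.0, rate=0.04, vol=vol, t_years=t)
        print(f"vol={vol:.0%}, T={t * 12:.0f}m: call value ~ {price:.2f}")
```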
Equity Market Risk. Common stocks are generally exposed to greater risk than other types of securities, such as preferred stock and debt obligations, because common stockholders generally have inferior rights to receive payment from specific issuers. The equity securities held in the Fund's portfolio may experience sudden, unpredictable drops in value or long periods of decline in value.
U.S. Government and U.S. Agency Obligations Risk. The Fund may invest in securities issued by the U.S. government or its agencies or instrumentalities. U.S. Government obligations include securities issued or guaranteed as to principal and interest by the U.S. Government, its agencies or instrumentalities, such as the U.S. Treasury.
New Fund Risk. The Fund is a recently organized management investment company with no operating history. As a result, prospective investors do not have a track record or history on which to base their investment decisions.
Newer Sub-Adviser Risk. VistaShares is a recently formed entity and has limited experience with managing an exchange-traded fund, which may limit the Sub-Adviser's effectiveness.
Foreside Fund Services, LLC, distributor.
Related Articles

Associated Press
7 minutes ago
- Associated Press
KAYTUS Unveils Upgraded MotusAI to Accelerate LLM Deployment
SINGAPORE--(BUSINESS WIRE)--Jun 12, 2025-- KAYTUS, a leading provider of end-to-end AI and liquid cooling solutions, today announced the release of the latest version of its MotusAI AI DevOps Platform at ISC High Performance 2025. The upgraded MotusAI platform delivers significant enhancements in large model inference performance and offers broad compatibility with multiple open-source tools covering the full lifecycle of large models. Engineered for unified and dynamic resource scheduling, it dramatically improves resource utilization and operational efficiency in large-scale AI model development and deployment. This latest release of MotusAI is set to further accelerate AI adoption and fuel business innovation across key sectors such as education, finance, energy, automotive, and manufacturing.
As large AI models become increasingly embedded in real-world applications, enterprises are deploying them at scale to generate tangible value across a wide range of sectors. Yet many organizations continue to face critical challenges in AI adoption, including prolonged deployment cycles, stringent stability requirements, fragmented open-source tool management, and low compute resource utilization. To address these pain points, KAYTUS has introduced the latest version of its MotusAI AI DevOps Platform, purpose-built to streamline AI deployment, enhance system stability, and optimize AI infrastructure efficiency for large-scale model operations.
Enhanced Inference Performance to Ensure Service Quality
Deploying AI inference services is a complex undertaking that involves service deployment, management, and continuous health monitoring. These tasks require stringent standards in model and service governance, performance tuning via acceleration frameworks, and long-term service stability, all of which typically demand substantial investments in manpower, time, and technical expertise. The upgraded MotusAI delivers robust large-model deployment capabilities that bring visibility and performance into alignment. By integrating optimized frameworks such as SGLang and vLLM, MotusAI ensures high-performance, distributed inference services that enterprises can deploy quickly and with confidence. Designed to support large-parameter models, MotusAI leverages intelligent resource and network affinity scheduling to accelerate time-to-launch while maximizing hardware utilization. Its built-in monitoring capabilities span the full stack—from hardware and platforms to pods and services—offering automated fault diagnosis and rapid service recovery. MotusAI also supports dynamic scaling of inference workloads based on real-time usage and resource monitoring, delivering enhanced service stability.
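The release names SGLang and vLLM as integrated inference frameworks but does not document MotusAI's own interfaces. As a rough sense of the workloads being scheduled, here is a minimal sketch using the open-source vLLM Python API directly; the model name is a placeholder.

```python
# Minimal vLLM sketch using the open-source library's offline-inference API.
# MotusAI's own interfaces are not described in the release; this only shows
# the kind of inference workload the platform is said to deploy and scale.
# The model name is a placeholder; substitute any model available to you.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", tensor_parallel_size=1)
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = ["Summarize the benefits of hybrid training-inference scheduling."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

In a platform such as MotusAI, an engine like this would typically run behind a managed, monitored, and autoscaled service rather than be invoked directly.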
Comprehensive Tool Support to Accelerate AI Adoption
As AI model technologies evolve rapidly, the supporting ecosystem of development tools continues to grow in complexity. Developers require a streamlined, universal platform to efficiently select, deploy, and operate these tools. The upgraded MotusAI provides extensive support for a wide range of leading open-source tools, enabling enterprise users to configure and manage their model development environments on demand. With built-in tools such as LabelStudio, MotusAI accelerates data annotation and synchronization across diverse categories, improving data processing efficiency and expediting model development cycles. MotusAI also offers an integrated toolchain for the entire AI model lifecycle. This includes LabelStudio and OpenRefine for data annotation and governance, LLaMA-Factory for fine-tuning large models, Dify and Confluence for large model application development, and Stable Diffusion for text-to-image generation. Together, these tools empower users to adopt large models quickly and boost development productivity at scale.
Hybrid Training-Inference Scheduling on the Same Node to Maximize Resource Efficiency
Efficient utilization of computing resources remains a critical priority for AI startups and small to mid-sized enterprises in the early stages of AI adoption. Traditional AI clusters typically allocate compute nodes separately for training and inference tasks, limiting the flexibility and efficiency of resource scheduling across the two types of workloads. The upgraded MotusAI overcomes these limitations by enabling hybrid scheduling of training and inference workloads on a single node, allowing for seamless integration and dynamic orchestration of diverse task types. Equipped with advanced GPU scheduling capabilities, MotusAI supports on-demand resource allocation, empowering users to efficiently manage GPU resources based on workload requirements. MotusAI also features multi-dimensional GPU scheduling, including fine-grained partitioning and support for Multi-Instance GPU (MIG), addressing a wide range of use cases across model development, debugging, and inference. MotusAI's enhanced scheduler significantly outperforms community-based versions, delivering a 5× improvement in task throughput and a 5× reduction in latency for large-scale pod deployments. It enables rapid startup and environment readiness for hundreds of pods while supporting dynamic workload scaling and tidal scheduling for both training and inference. These capabilities empower seamless task orchestration across a wide range of real-world AI scenarios.
About KAYTUS
KAYTUS is a leading provider of end-to-end AI and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at and follow us on LinkedIn and X.
Media Contact: [email protected]
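The hybrid training-inference scheduling described above relies on fine-grained GPU partitioning such as MIG. As a generic illustration only, and not MotusAI's scheduler API, the sketch below uses the official Kubernetes Python client to request a single MIG slice for an inference pod; the MIG resource name follows NVIDIA's device-plugin naming convention and assumes the cluster is configured to expose MIG profiles that way.

```python
# Generic illustration of requesting a MIG slice for an inference pod on
# Kubernetes, using the official kubernetes Python client. This is not
# MotusAI's scheduler API; the MIG resource name follows NVIDIA's device-plugin
# convention and assumes the cluster exposes MIG profiles under that name.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="inference-mig-demo", labels={"workload": "inference"}
    ),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="server",
                image="vllm/vllm-openai:latest",  # placeholder image choice
                resources=client.V1ResourceRequirements(
                    # One small MIG slice for inference; training pods on the
                    # same node can request larger slices or whole GPUs,
                    # which is the hybrid packing described in the release.
                    limits={"nvidia.com/mig-1g.10gb": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```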


Associated Press
17 minutes ago
- Associated Press
HASI Receives Ratings Upgrade from S&P Global Ratings
ANNAPOLIS, Md.--(BUSINESS WIRE)--Jun 12, 2025-- HA Sustainable Infrastructure Capital, Inc. ('HASI,' 'We,' 'Our,' or the 'Company') (NYSE: HASI), a leading investor in sustainable infrastructure assets, today announced it has received an investment grade credit rating of BBB- from S&P Global Ratings ('S&P'). On June 11, 2025, S&P upgraded HASI's corporate and issuer credit ratings to BBB- from BB+ with a stable outlook. The Company has maintained an investment grade credit rating of BBB- from Fitch Ratings, Inc. ('Fitch') since May 2024 and an investment grade credit rating of Baa3 from Moody's Investors Service ('Moody's') since June 2022.
'Securing a third investment grade rating is a significant milestone that reflects our financial strength and the resilience of our business model,' said HASI Chief Financial Officer Chuck Melko. 'This achievement underscores our proven strategy and disciplined track record, and enhances our capacity to scale high-impact investments in energy transition projects while continuing to deliver long-term value to our stakeholders.'
In its report, S&P stated that the upgrade reflects the Company's scale and strength, fueled by HASI's steady business growth and asset quality. Despite macroeconomic headwinds in the sustainable infrastructure and energy sector, such as tariffs and potential revisions to the Inflation Reduction Act, S&P expects HASI will continue to source investment opportunities at profitable yields, further noting that HASI primarily invests in stabilized projects with minimal construction risk and that most of the Company's $5.5 billion 12-month pipeline is composed of projects that are already under construction. Additionally, the report notes that energy demand has outpaced supply in recent years and highlights that HASI's 10-year track record suggests it is well-positioned to source investment opportunities in new sustainable infrastructure asset classes to address this demand. More information regarding the Company's investment grade credit rating assignments can be found on the respective websites of the rating agencies or accessed directly through HASI's investor relations website.
About HASI
HASI is an investor in sustainable infrastructure assets advancing the energy transition. With more than $14 billion in managed assets, our investments are diversified across multiple asset classes, including utility-scale solar, onshore wind, and storage; distributed solar and storage; RNG; and energy efficiency. We combine deep expertise in energy markets and financial structuring with long-standing programmatic client partnerships to deliver superior risk-adjusted returns and measurable environmental benefits. HA Sustainable Infrastructure Capital, Inc. is listed on the New York Stock Exchange (Ticker: HASI).
Forward-Looking Statements
Some of the information in this press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, that are subject to risks and uncertainties. For these statements, we claim the protections of the safe harbor for forward-looking statements contained in such Sections. These forward-looking statements include information about possible or assumed future results of our business, financial condition, liquidity, results of operations, plans and objectives.
When we use the words 'believe,' 'expect,' 'anticipate,' 'estimate,' 'plan,' 'continue,' 'intend,' 'should,' 'may' or similar expressions, we intend to identify forward-looking statements. Forward-looking statements are subject to significant risks and uncertainties. Investors are cautioned against placing undue reliance on such statements. Actual results may differ materially from those set forth in the forward-looking statements. Factors that could cause actual results to differ materially from those described in the forward-looking statements include those discussed under the caption 'Risk Factors' included in our most recent Annual Report on Form 10-K as well as in other periodic reports that we file with the U.S. Securities and Exchange Commission. Forward-looking statements are based on beliefs, assumptions and expectations as of the date of this press release. We disclaim any obligation to publicly release the results of any revisions to these forward-looking statements reflecting new estimates, events or circumstances after the date of this press release.
Investor Contact: Aaron Chew, [email protected], 410-571-6189
Media Contact: Kenny Gayles, [email protected], 443-321-5756