
Warburg Pincus-backed Miami International raises $345 million in US IPO
The Princeton, New Jersey-based company sold 15 million shares at $23 apiece, above its marketed range of $19 to $21 per share, valuing Miami International at about $1.82 billion.
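Those figures follow from straightforward pricing arithmetic. Below is a minimal Python sketch of the math; the implied total share count is a back-of-the-envelope inference from the reported price and valuation, not a figure disclosed in the offering.

```python
# Back-of-the-envelope check of the reported IPO figures.
# Assumption: the ~$1.82 billion valuation is simply total shares
# outstanding times the $23 offer price (an inference, not disclosed).

shares_sold = 15_000_000
offer_price = 23.0
marketed_range = (19.0, 21.0)
reported_valuation = 1.82e9

gross_proceeds = shares_sold * offer_price               # $345 million
premium_to_range = offer_price / marketed_range[1] - 1   # ~10% above the top of the range
implied_shares_outstanding = reported_valuation / offer_price  # ~79 million shares

print(f"Gross proceeds: ${gross_proceeds / 1e6:.0f} million")
print(f"Premium to top of marketed range: {premium_to_range:.0%}")
print(f"Implied shares outstanding: {implied_shares_outstanding / 1e6:.1f} million")
```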
Activity in the U.S. IPO market has picked up in recent months, boosted by a resilient equity market and notable debuts from companies such as stablecoin issuer Circle and space technology firm Firefly Aerospace, signaling renewed investor confidence in fresh offerings.
Bullish, a cryptocurrency exchange operator and the owner of media outlet CoinDesk, went public earlier in the day to an overwhelming response, with its shares more than doubling on debut.
A recent run of market volatility, sparked by trade policy changes and global geopolitical woes, has helped exchanges across the country reap gains from higher trading activity.
Listed peers of Miami International — such as CME Group, Cboe Global Markets, Nasdaq and NYSE-owner Intercontinental Exchange — all beat estimates for quarterly profit last month.
Exchange IPOs are rare: fewer than half of U.S. bourse operators are public, and Cboe Global was the last big one to list, back in 2010. Miami International confidentially filed its paperwork in 2022, signaling its long-running plan to go public.
Miami International's flagship exchange — Miami International Securities Exchange — was launched in 2012 and operates as an equity options exchange. It is the fourth-largest options exchange in the U.S. by market share, according to the Options Clearing Corporation.
J.P. Morgan, Morgan Stanley and Piper Sandler are the lead joint bookrunning managers. Miami International will list on the NYSE under the symbol "MIAX" on Thursday.
Related Articles


GeekWire
Building an AI-first company: What these two business leaders learned from top experts
Adam Brotman, left, and Andy Sack, authors of the book 'AI First.' (Photo courtesy of Forum3)

This week on the GeekWire Podcast, our guests are Adam Brotman and Andy Sack, co-authors of AI First: The Playbook for a Future-Proof Business and Brand. Brotman was Starbucks' chief digital officer and later co-CEO of J.Crew. Sack is a founder, investor, and longtime advisor to tech leaders. Together, they run Forum3, a Seattle-based company that helps brands with customer loyalty and engagement.

For their book, they interviewed experts including Bill Gates, Sam Altman, Reid Hoffman and Ethan Mollick, and spent time with companies and leaders that have seen early AI success. We talk about the shocking prediction that Altman gave them, how Moderna achieved 80% employee participation in an AI prompt contest, the CEO who supercharged sales by using AI to analyze call transcripts, and what businesses can do to roll out AI successfully. Listen below, and continue reading for my 5 top takeaways.

1. Leaders need their own 'holy shit' moment. AI has a better chance of being adopted when executives personally experience and use the technology themselves. 'It doesn't mean that the CEO has to become an expert in AI,' Brotman said, 'but they have to at least demonstrate that mindset, that curiosity, and a little bit of passion for what they don't know, and empower the organization to go ahead.'

2. Formalize AI efforts with a dedicated team. Instead of ad-hoc adoption, create an internal group to lead the charge. A good starting point is a cross-functional 'AI Council' or task force composed of passionate employees and at least one C-suite member. Brotman and Sack were challenged by Wharton professor Ethan Mollick to push companies even further and establish internal 'AI Labs' to truly go all-in on experimentation.

3. Treat AI like an evolving intelligence, not static software. Unlike traditional technology implementations, AI capabilities change weekly. Companies need an 'always-on experimentation mindset' rather than a deploy-and-maintain approach. 'This is a new thing. This is not software,' Sack said. 'It's a being, an alien intelligence.'

4. Make AI adoption fun and experimental. Moderna succeeded by turning AI learning into a 'prompt-a-thon contest' with prizes, making employees feel comfortable with experimentation. This tapped into human psychology and removed the fear often associated with new technology. 'They really integrated the launch of that contest in the culture of the company,' Brotman said. 'The ROI has been off the charts in terms of productivity for them as a company.'

5. The transformation is happening faster than you think. When Brotman and Sack interviewed Altman, the OpenAI CEO casually dropped a bombshell prediction: 95% of marketing as we know it today will be done by artificial intelligence within three to five years. That shifted their thinking and approach to the book. As Brotman noted, 'If you look at how the technology has progressed since we've had that interview, it's right on schedule.'

AI First: The Playbook for a Future-Proof Business and Brand, by Adam Brotman and Andy Sack, is published by Harvard Business Review Press. Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.


TechCrunch
Anthropic says some Claude models can now end 'harmful or abusive' conversations
Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as 'rare, extreme cases of persistently harmful or abusive user interactions.' Strikingly, Anthropic says it's doing this not to protect the human user, but rather the AI model itself.

To be clear, the company isn't claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains 'highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.' However, its announcement points to a recent program created to study what it calls 'model welfare' and says Anthropic is essentially taking a just-in-case approach, 'working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.'

This latest change is currently limited to Claude Opus 4 and 4.1. And again, it's only supposed to happen in 'extreme edge cases,' such as 'requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.'

While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users' delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a 'strong preference against' responding to these requests and a 'pattern of apparent distress' when it did so.

As for these new conversation-ending capabilities, the company says, 'In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.' Anthropic also says Claude has been 'directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.'

When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses. 'We're treating this feature as an ongoing experiment and will continue refining our approach,' the company says.


Forbes
Target Sued In Two Class Actions Over Gift Card Scams
Scammers are big fans of gift cards because they are easy to purchase, easy to send to the scammer and impossible to trace. The scammer does not even need to possess the physical card: sending the gift card numbers, or taking a picture of the card on your phone and transmitting it, is enough for the scammer to use the card to buy goods that can then be sold and converted into cash.

In many instances scammers pose as large companies or government agencies such as the IRS demanding payments. This is called an 'imposter scam.' According to the FTC, in 2024 Americans lost $2.95 billion to imposter scams, second only to investment fraud. In 2021 the FTC noted that Target gift cards were the most popular choice for scammers, who asked specifically for Target gift cards in twice as many instances as the next most popular card; even when the requested card was not a Target gift card, the scammers asked their victims to buy it at a Target store. More recent FTC data indicate that Target gift cards are now the second most popular choice, with Apple gift cards the most used by scammers.

Recently, four victims of the imposter scam sued Target, seeking class action status and alleging that Target failed to use its own security algorithms and real-time tracking software to prevent these scams. The plaintiffs further allege that Target benefited financially from gift card scams. Responding to the lawsuit, a Target spokesperson said, 'While we cannot comment on pending litigation, we take significant steps to combat this type of criminal activity and protect consumers.' One of the plaintiffs, Robert Reese, received an email from a scammer posing as an Amazon customer service representative who convinced him he needed to send Amazon $10,800 in gift cards, instructing him to get $6,000 of that amount from Target through the purchase of twelve $500 gift cards.

HOW TO AVOID GIFT CARD SCAMS

Fortunately, scams requiring payment through gift cards are easy to avoid. Anytime anyone approaches you with a business transaction in which you are asked to pay through gift cards, you can be confident that it is a scam. The IRS even posts on its website that it does not accept gift cards as payments. Remember that gift cards are gifts; they are not a payment method for any legitimate transaction, so if you are asked to pay for any business transaction with a gift card, you can be sure it is a scam.

Target has also been sued in a separate class action over gift card scams by customers from 21 states who bought Apple gift cards at Target that had been tampered with by scammers, who then emptied the cards of their value. The plaintiffs allege that Target is aware of this problem and has not done enough to stop it. This type of scam is called gift card draining.

The most common form of gift card draining involves scammers going to racks of gift cards in stores and, using handheld scanners, reading the code on the strip of the card and the number on the front. They then put the card back in the display and periodically check with the retailer, by calling its 800 number, to find out whether the card has been activated and what its balance is. Once they have this information, they either create a counterfeit card using the stolen information or order merchandise online without having the actual card in hand.
Another common form of gift card draining occurs when scammers place a sticker bearing the barcode of a gift card they possess over the actual barcode of a gift card in the rack. When the purchaser takes the card to the checkout counter to have it activated, the funds used to buy the gift card are credited to the scammer's card instead. It is not until the purchaser tries to use his or her card that it is discovered that no funds were credited to it.

Some retailers, to reduce gift card fraud, put a PIN on the gift card so that anyone using the card online must have access to the PIN, which is generally hidden under covering material that must be scratched off to be visible. Unfortunately, many purchasers of gift cards are not aware of this, so they do not notice that the covering material over the PIN has already been scratched off by a scammer who has recorded it.

HOW TO AVOID GIFT CARD DRAINING

As with so many scams, the best place to look for a helping hand is at the end of your own arm. Always inspect the card carefully to make sure that the barcode has not been tampered with in any fashion and that the PIN is still covered. When buying a gift card, only purchase cards kept behind the customer service desk, and if the card is preloaded, ask for it to be scanned to show that it still carries its full value.