
NextNav to Participate in the Oppenheimer 28th Annual Virtual Technology, Internet & Communications Conference
NextNav will present from 9:05-9:45 a.m. on Wednesday, August 13, and will host meetings with investors on all three days.
To schedule a meeting with NextNav, please reach out to your Oppenheimer representative.
A live webcast of the presentation will be available on the NextNav investor relations website at https://ir.nextnav.com.
About NextNav
NextNav Inc. (Nasdaq: NN) is a leader in next-generation 3D Positioning, Navigation, and Timing (PNT) solutions. As the nation's largest license holder in a spectrum band expressly designated for terrestrial positioning services, NextNav is uniquely positioned to enable a widescale terrestrial complement and backup to GPS. Leveraging licensed low-band spectrum and the global 5G ecosystem, NextNav is focused on delivering an accurate, reliable, and resilient 3D PNT solution to protect national security, public safety, and the economy. Learn more at www.nextnav.com.
Related Articles


Forbes
Will Reinforcement Learning Take Us To AGI?
There aren't many truly new ideas in artificial intelligence. More often, breakthroughs in AI happen when concepts that have existed for years suddenly take on new power because underlying technology inputs—in particular, raw computing power—finally catch up to unlock those concepts' full potential. Famously, Geoff Hinton and a small group of collaborators devoted themselves tirelessly to neural networks starting in the early 1970s. For decades, the technology didn't really work and the outside world paid little attention. It was not until the early 2010s—thanks to the arrival of sufficiently powerful Nvidia GPUs and internet-scale training data—that the potential of neural networks was finally unleashed for all to see. In 2024, more than half a century after he began working on neural networks, Hinton was awarded the Nobel Prize in Physics for pioneering the field of modern AI.

Reinforcement learning has followed a similar arc. Richard Sutton and Andrew Barto, the fathers of modern reinforcement learning, laid down the foundations of the field starting in the 1970s. Even before Sutton and Barto began their work, the basic principles underlying reinforcement learning—in short, learning by trial and error based on positive and negative feedback—had been developed by behavioral psychologists and animal researchers going back to the early twentieth century. Yet in just the past year, advances in reinforcement learning (RL) have taken on newfound importance and urgency in the world of AI. It has become increasingly clear that the next leap in AI capabilities will be driven by RL. If artificial general intelligence (AGI) is in fact around the corner, reinforcement learning will play a central role in getting us there. Just a few years ago, when ChatGPT's launch ushered in the era of generative AI, almost no one would have predicted this. Deep questions remain unanswered about reinforcement learning's capabilities and its limits. No field in AI is moving more quickly today than RL. It has never been more important to understand this technology, its history and its future.

Reinforcement Learning 101

The basic principles of reinforcement learning have remained consistent since Sutton and Barto established the field in the 1970s. The essence of RL is to learn by interacting with the world and seeing what happens. It is a universal and foundational form of learning; every human and animal does it. In the context of artificial intelligence, a reinforcement learning system consists of an agent interacting with an environment. RL agents are not given direct instructions or answers by humans; instead, they learn through trial and error. When an agent takes an action in an environment, it receives a reward signal from the environment, indicating that the action produced either a positive or a negative outcome. The agent's goal is to adjust its behavior to maximize positive rewards and minimize negative rewards over time.

How does the agent decide which actions to take? Every agent acts according to a policy, which can be understood as the formula or calculus that determines the agent's action based on the particular state of the environment. A policy can be a simple set of rules, or even pure randomness, or it can be represented by a far more complex system, like a deep neural network. One final concept that is important to understand in RL, closely related to the reward signal, is the value function.
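To make the loop described above concrete, here is a minimal sketch of an agent learning by trial and error in a toy "corridor" environment. Everything in it, from the environment to the reward values to the hyperparameters, is invented for illustration rather than drawn from the article, and the table of action-value estimates it maintains anticipates the value function discussed next.

```python
import random

# Toy illustration of the agent/environment/policy/reward loop described above.
# The corridor environment, rewards, and hyperparameters are all invented for
# illustration; they are not taken from the article.

class Corridor:
    """Environment: positions 0..4. The agent starts at 0 and wants to reach 4."""
    def reset(self):
        self.pos = 0
        return self.pos                          # initial state

    def step(self, action):                      # action: -1 (left) or +1 (right)
        self.pos = max(0, min(4, self.pos + action))
        done = self.pos == 4
        reward = 1.0 if done else -0.01          # reward signal from the environment
        return self.pos, reward, done

actions = [-1, +1]
q = {(s, a): 0.0 for s in range(5) for a in actions}   # action-value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.2                  # learning rate, discount, exploration

def policy(state):
    """Epsilon-greedy policy: usually exploit the best-valued action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

env = Corridor()
for episode in range(200):                       # learn by trial and error
    state, done = env.reset(), False
    while not done:
        action = policy(state)
        next_state, reward, done = env.step(action)
        # Nudge the value estimate toward the reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(4)})   # learned policy: move right
```

After a couple hundred episodes, the estimates favor moving right from every position: behavior adjusted, over time, to maximize positive rewards.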
The value function is the agent's estimate of how favorable a given state of the environment will be (that is, how many positive and negative rewards it will lead to) over the long run. Whereas reward signals are immediate pieces of feedback that come from the environment based on current conditions, the value function is the agent's own learned estimate of how things will play out in the long term. The entire purpose of value functions is to estimate future reward signals, but unlike reward signals, value functions enable agents to reason and plan over longer time horizons. For instance, value functions can incentivize actions even when they lead to negative near-term rewards because the long-term benefit is estimated to be worth it. When RL agents learn, they do so in one of three ways: by updating their policy, updating their value function, or updating both together.

A brief example will help make these concepts concrete. Imagine applying reinforcement learning to the game of chess. In this case, the agent is an AI chess player. The environment is the chess board, with any given configuration of chess pieces representing a state of that environment. The agent's policy is the function (whether a simple set of rules, or a decision tree, or a neural network, or something else) that determines which move to make based on the current board state. The reward signal is simple: positive when the agent wins a game, negative when it loses a game. The agent's value function is its learned estimate of how favorable or unfavorable any given board position is—that is, how likely the position is to lead to a win or a loss. As the agent plays more games, strategies that lead to wins will be positively reinforced and strategies that lead to losses will be negatively reinforced via updates to the agent's policy and value function. Gradually, the AI system will become a stronger chess player.

In the twenty-first century, one organization has championed and advanced the field of reinforcement learning more than any other: DeepMind. Founded in 2010 as a startup devoted to solving artificial intelligence and then acquired by Google in 2014 for ~$600 million, London-based DeepMind made a big early bet on reinforcement learning as the most promising path forward in AI. And the bet paid off. The second half of the 2010s was a triumphant period for the field of reinforcement learning. In 2016, DeepMind's AlphaGo became the first AI system to defeat a human world champion at the ancient Chinese game of Go, a feat that many AI experts had believed was impossible. In 2017, DeepMind debuted AlphaZero, which taught itself Go, chess and Japanese chess (shogi) entirely via self-play and bested every other AI and human competitor in those games. And in 2019, DeepMind unveiled AlphaStar, which mastered the video game StarCraft II—an even more complex environment than Go given the vast action space, imperfect information, numerous agents and real-time gameplay. AlphaGo, AlphaZero, AlphaStar—reinforcement learning powered each of these landmark achievements.

As the 2010s drew to a close, RL seemed poised to dominate the coming generation of artificial intelligence breakthroughs, with DeepMind leading the way. But that's not what happened. Right around that time, a new AI paradigm unexpectedly burst into the spotlight: self-supervised learning for autoregressive language models. In 2019, a small nonprofit research lab named OpenAI released a model named GPT-2 that demonstrated surprisingly powerful general-purpose language capabilities.
The following summer, OpenAI debuted GPT-3, whose astonishing abilities represented a massive leap in performance from GPT-2 and took the AI world by storm. In 2022 came ChatGPT. In short order, every AI organization in the world reoriented its research focus to prioritize large language models and generative AI. These large language models (LLMs) were based on the transformer architecture and made possible by a strategy of aggressive scaling. They were trained on unlabeled datasets that were bigger than any previous AI training data corpus—essentially the entire internet—and were scaled up to unprecedented model sizes. (GPT-2 was considered mind-bogglingly large at 1.5 billion parameters; one year later, GPT-3 debuted at 175 billion parameters.)

Reinforcement learning fell out of fashion for half a decade. A widely repeated narrative during the early 2020s was that DeepMind had seriously misread technology trends by committing itself to reinforcement learning and missing the boat on generative AI. Yet today, reinforcement learning has reemerged as the hottest field within AI. What happened? In short, AI researchers discovered that applying reinforcement learning to generative AI models was a killer combination. Starting with a base LLM and then applying reinforcement learning on top of it meant that, for the first time, RL could natively operate with the gift of language and broad knowledge about the world. Pretrained foundation models represented a powerful base on which RL could work its magic. The results have been dazzling—and we are just getting started.

RL Meets LLMs

What does it mean, exactly, to combine reinforcement learning with large language models? A key insight to start with is that the core concepts of RL can be mapped directly and elegantly to the world of LLMs. In this mapping, the LLM itself is the agent. The environment is the full digital context in which the LLM is operating, including the prompts it is presented with, its context window, and any tools and external information it has access to. The model's weights represent the policy: they determine how the agent acts when presented with any particular state of the environment. Acting, in this context, means generating tokens.

What about the reward signal and the value function? Defining a reward signal for LLMs is where things get interesting and complicated. It is this topic, more than any other, that will determine how far RL can take us on the path to superintelligence. The first major application of RL to LLMs was reinforcement learning from human feedback, or RLHF. The frontiers of AI research have since advanced to more cutting-edge methods of combining RL and LLMs, but RLHF represents an important step on the journey, and it provides a concrete illustration of the concept of reward signals for LLMs.

RLHF was invented by DeepMind and OpenAI researchers back in 2017. (As a side note, given today's competitive and closed research environment, it is remarkable to remember that OpenAI and DeepMind used to conduct and publish foundational research together.) RLHF's true coming-out party, though, was ChatGPT. When ChatGPT debuted in November 2022, the underlying AI model on which it was based was not new; it had already been publicly available for many months. The reason that ChatGPT became an overnight success was that it was approachable, easy to talk to, helpful, and good at following directions. The technology that made this possible was RLHF.
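Before turning to how RLHF works, the RL-to-LLM mapping just described can be summarized in a short sketch. The class and function names below are hypothetical scaffolding for illustration only, not any lab's actual API, and the toy policy stands in for what a real model's weights would compute.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Schematic rendering of the mapping described above: LLM = agent, digital
# context = environment, weights = policy, generating a token = taking an action.
# All names here are hypothetical and for illustration only.

@dataclass
class LLMState:
    """The 'state of the environment' as seen by an LLM agent."""
    prompt: str
    context_window: List[str] = field(default_factory=list)   # tokens generated so far
    tool_outputs: List[str] = field(default_factory=list)     # any external information

# The policy is whatever the model's weights compute: state -> distribution over next tokens.
Policy = Callable[[LLMState], Dict[str, float]]

def toy_policy(state: LLMState) -> Dict[str, float]:
    # Stand-in for a real network; always slightly prefers "yes" over "no".
    return {"yes": 0.6, "no": 0.4}

def act(policy: Policy, state: LLMState) -> str:
    """Acting, in this setting, means emitting the next token."""
    distribution = policy(state)
    token = max(distribution, key=distribution.get)
    state.context_window.append(token)           # the context (environment) is updated
    return token

state = LLMState(prompt="Is water wet?")
print(act(toy_policy, state), state.context_window)
```

The point is the correspondence itself: state is the prompt plus context, action is the next token, and the policy is the model. What is still missing is the reward signal, which is where RLHF comes in.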
In a nutshell, RLHF is a method to adapt LLMs' style and tone to be consistent with human-expressed preferences, whatever those preferences may be. RLHF is most often used to make LLMs 'helpful, harmless and honest,' but it can equally be used to make them more flirtatious, or rude, or sarcastic, or progressive, or conservative. How does RLHF work? The key ingredient in RLHF is 'preference data' generated by human subjects. Specifically, humans are asked to consider two responses from the model for a given prompt and to select which one of the two responses they prefer. This pairwise preference data is used to train a separate model, known as the reward model, which learns to produce a numerical rating of how desirable or undesirable any given output from the main model is. This is where RL comes in. Now that we have a reward signal, an RL algorithm can be used to fine-tune the main model—in other words, the RL agent—so that it generates responses that maximize the reward model's scores. In this way, the main model comes to incorporate the style and values reflected in the human-generated preference data.

Circling back to reward signals and LLMs: in the case of RLHF, as we have seen, the reward signal comes directly from humans and human-generated preference data, which is then distilled into a reward model. What if we want to use RL to give LLMs powerful new capabilities beyond simply adhering to human preferences?

The Next Frontier

The most important development in AI over the past year has been language models' improved ability to engage in reasoning. What exactly does it mean for an AI model to 'reason'? Unlike first-generation LLMs, which respond to prompts using next-token prediction with no planning or reflection, reasoning models spend time thinking before producing a response. These models think by generating 'chains of thought,' enabling them to systematically break down a given task into smaller steps and then work through each step in order to arrive at a well-thought-through answer. They also know how and when to use external tools—like a calculator, a code interpreter or the internet—to help solve problems.

The world's first reasoning model, OpenAI's o1, debuted less than a year ago. A few months later, China-based DeepSeek captured world headlines when it released its own reasoning model, R1, that was near parity with o1, fully open and trained using far less compute. The secret sauce that gives AI models the ability to reason is reinforcement learning—specifically, an approach to RL known as reinforcement learning from verifiable rewards (RLVR). Like RLHF, RLVR entails taking a base model and fine-tuning it using RL. But the source of the reward signal, and therefore the types of new capabilities that the AI gains, are quite different. As its name suggests, RLVR improves AI models by training them on problems whose answers can be objectively verified—most commonly, math or coding tasks.

First, a model is presented with such a task—say, a challenging math problem—and prompted to generate a chain of thought in order to solve the problem. The final answer that the model produces is then formally determined to be either correct or incorrect. (If it's a math question, the final answer can be run through a calculator or a more complex symbolic math engine; if it's a coding task, the model's code can be executed in a sandboxed environment.)
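A deliberately simplified sketch of that verification step is below: the model's final answer is extracted and checked mechanically, yielding the reward signal discussed next. The "ANSWER:" convention and the sample problem are made up for illustration; production pipelines rely on far more robust parsing, symbolic math engines, and sandboxed code execution.

```python
import re

# Simplified sketch of a verifiable-reward check for a math task. The answer
# format and example are invented for illustration only.

def extract_final_answer(chain_of_thought: str) -> str:
    """Assume the model marks its result as 'ANSWER: <number>' (a made-up convention)."""
    match = re.search(r"ANSWER:\s*(-?\d+(?:\.\d+)?)", chain_of_thought)
    return match.group(1) if match else ""

def verifiable_reward(chain_of_thought: str, reference_answer: float) -> float:
    """+1 if the verifiable final answer is correct, -1 otherwise."""
    answer = extract_final_answer(chain_of_thought)
    if not answer:
        return -1.0
    return 1.0 if abs(float(answer) - reference_answer) < 1e-6 else -1.0

sample = "First, 17 * 24 = 408. Then 408 + 9 = 417. ANSWER: 417"
print(verifiable_reward(sample, reference_answer=417))          # 1.0 (correct)
print(verifiable_reward("ANSWER: 416", reference_answer=417))   # -1.0 (incorrect)
```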
Because we now have a reward signal—positive if the final answer is correct, negative if it is incorrect—RL can be used to positively reinforce the types of chains of thought that lead to correct answers and to discourage those that lead to incorrect answers. The end result is a model that is far more effective at reasoning: that is, at accurately working through complex multi-step problems and landing on the correct solution. This new generation of reasoning models has demonstrated astonishing capabilities in math competitions like the International Math Olympiad and on logical tests like the ARC-AGI benchmark. So—is AGI right around the corner? Not necessarily. A few big-picture questions about reinforcement learning and language models remain unanswered and loom large. These questions inspire lively debate and widely varying opinions in the world of artificial intelligence today. Their answers will determine how powerful AI gets in the coming months.

A Few Big Unanswered Questions

Today's cutting-edge RL methods rely on problems whose answers can be objectively verified as either right or wrong. Unsurprisingly, then, RL has proven exceptional at producing AI systems that are world-class at math, coding, logic puzzles and standardized tests. But what about the many problems in the world that don't have easily verifiable answers? In a provocative essay titled 'The Problem With Reasoners', Aidan McLaughlin elegantly articulates this point: 'Remember that reasoning models use RL, RL works best in domains with clear/frequent reward, and most domains lack clear/frequent reward.' McLaughlin argues that most domains that humans actually care about are not easily verifiable, and we will therefore have little success using RL to make AI superhuman at them: for instance, giving career advice, managing a team, understanding social trends, writing original poetry, investing in startups.

A few counterarguments to this critique are worth considering. The first centers on the concepts of transfer learning and generalizability. Transfer learning is the idea that models trained in one area can transfer those learnings to improve in other areas. Proponents of transfer learning in RL argue that, even if reasoning models are trained only on math and coding problems, this will endow them with broad-based reasoning skills that will generalize beyond those domains and enhance their ability to tackle all sorts of cognitive tasks. 'Learning to think in a structured way, breaking topics down into smaller subtopics, understanding cause and effect, tracing the connections between different ideas—these skills should be broadly helpful across problem spaces,' said Dhruv Batra, cofounder/chief scientist at Yutori and former senior AI researcher at Meta. 'This is not so different from how we approach education for humans: we teach kids basic numeracy and literacy in the hopes of creating a generally well-informed and well-reasoning population.' Put more strongly: if you can solve math, you can solve anything. Anything that can be done with a computer, after all, ultimately boils down to math. It is an intriguing hypothesis. But to date, there is no conclusive evidence that RL endows LLMs with reasoning capabilities that generalize beyond easily verifiable domains like math and coding. It is no coincidence that the most important advances in AI in recent months—both from a research and a commercial perspective—have occurred in precisely these two fields.
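How that +1/-1 signal actually shifts a model's behavior can be pictured with a toy policy-gradient update. Real systems adjust billions of weights over sampled chains of thought; the sketch below replaces all of that with a softmax choice among three made-up "reasoning strategies," each with an invented success rate, so it is purely an illustration of the reinforcement mechanism.

```python
import math
import random

# Toy abstraction of reinforcing chains of thought that lead to correct answers.
# The strategies and their success rates are invented for illustration only.

strategies = ["guess", "decompose_into_steps", "decompose_and_check"]
true_success_rate = {"guess": 0.2, "decompose_into_steps": 0.6, "decompose_and_check": 0.9}
logits = {s: 0.0 for s in strategies}            # the 'policy' being trained
learning_rate = 0.1

def policy_probs():
    z = sum(math.exp(v) for v in logits.values())
    return {s: math.exp(v) / z for s, v in logits.items()}

for step in range(2000):
    probs = policy_probs()
    choice = random.choices(strategies, weights=[probs[s] for s in strategies])[0]
    reward = 1.0 if random.random() < true_success_rate[choice] else -1.0   # verifiable reward
    # REINFORCE-style update: grad of log pi(choice) w.r.t. each logit is 1[s == choice] - pi(s).
    for s in strategies:
        grad_log_pi = (1.0 if s == choice else 0.0) - probs[s]
        logits[s] += learning_rate * reward * grad_log_pi

# Most probability mass should now sit on the strategy that most reliably earns +1.
print({s: round(p, 2) for s, p in policy_probs().items()})
```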
If RL can only give AI models superhuman powers in domains that can be easily verified, this represents a serious limit to how far RL can advance the frontiers of AI's capabilities. AI systems that can write code or do mathematics as well as or better than humans are undoubtedly valuable. But true general-purpose intelligence consists of much more than this. Let us consider another counterpoint on this topic, though: what if verification systems can in fact be built for many (or even all) domains, even when those domains are not as clearly deterministic and checkable as a math problem? Might it be possible to develop a verification system that can reliably determine whether a novel, or a government policy, or a piece of career advice, is 'good' or 'successful' and therefore should be positively reinforced? This line of thinking quickly leads us into borderline philosophical considerations. In many fields, determining the 'goodness' or 'badness' of a given outcome would seem to involve value judgments that are irreducibly subjective, whether on ethical or aesthetic grounds. For instance, is it possible to determine that one public policy outcome (say, reducing the federal deficit) is objectively superior to another (say, expanding a certain social welfare program)? Is it possible to objectively identify that a painting or a poem is or is not 'good'? What makes art 'good'? Is beauty not, after all, in the eye of the beholder? Certain domains simply do not possess a 'ground truth' to learn from, but rather only differing values and tradeoffs to be weighed. Even in such domains, though, another possible approach exists. What if we could train an AI via many examples to instinctively identify 'good' and 'bad' outcomes, even if we can't formally define them, and then have that AI serve as our verifier? As Julien Launay, CEO/cofounder of RL startup Adaptive ML, put it: 'In bridging the gap from verifiable to non-verifiable domains, we are essentially looking for a compiler for natural language…but we already have built this compiler: that's what large language models are.' This approach is often referred to as reinforcement learning from AI feedback (RLAIF) or 'LLM-as-a-Judge.' Some researchers believe it is the key to making verification possible across more domains. But it is not clear how far LLM-as-a-Judge can take us. The reason that reinforcement learning from verifiable rewards has led to such incisive reasoning capabilities in LLMs in the first place is that it relies on formal verification methods: correct and incorrect answers exist to be discovered and learned. LLM-as-a-Judge seems to bring us back to a regime more closely resembling RLHF, whereby AI models can be fine-tuned to internalize whatever preferences and value judgments are contained in the training data, arbitrary though they may be. This merely punts the problem of verifying subjective domains to the training data, where it may remain as unsolvable as ever. We can say this much for sure: to date, neither OpenAI nor Anthropic nor any other frontier lab has debuted an RL-based system with superhuman capabilities in writing novels, or advising governments, or starting companies, or any other activity that lacks obvious verifiability. This doesn't mean that the frontier labs are not making progress on the problem. Indeed, just last month, leading OpenAI researcher Noam Brown shared on X: 'We developed new techniques that make LLMs a lot better at hard-to-verify tasks.' 
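One way to picture the LLM-as-a-Judge idea discussed above is as a reward function whose "verifier" is simply another model's graded opinion. The judge callable, prompt wording, and scoring scale below are hypothetical, and the stub judge exists only so the sketch runs; the reward it produces inherits whatever preferences and blind spots the judge model has, which is precisely the concern raised above.

```python
import re
from typing import Callable

# Sketch of an LLM-as-a-Judge reward: grade an output with another model and
# turn the grade into a reward signal. The judge interface, prompt, and scale
# are hypothetical; a stub judge is included so the example runs on its own.

def judge_reward(task: str, model_output: str, judge: Callable[[str], str]) -> float:
    prompt = (
        "You are grading a response for quality.\n"
        f"Task: {task}\n"
        f"Response: {model_output}\n"
        "Reply with a single integer score from 1 (poor) to 10 (excellent)."
    )
    reply = judge(prompt)
    match = re.search(r"\b(10|[1-9])\b", reply)
    score = int(match.group(1)) if match else 1
    return (score - 5.5) / 4.5        # map the 1..10 scale onto roughly -1..+1

def stub_judge(prompt: str) -> str:
    # Stand-in for a real judge model; always returns a middling-to-good score.
    return "7"

print(judge_reward("Write a short poem about rain.", "Rain taps the glass...", stub_judge))
```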
Rumors have even begun to circulate that OpenAI has developed a so-called 'universal verifier,' which can provide an accurate reward signal in any domain. It is hard to imagine how such a universal verifier would work; no concrete details have been shared publicly. Time will tell how powerful these new techniques prove to be.

It is important to remember that we are still in the earliest innings of the reinforcement learning era in generative AI. We have just begun to scale RL. The total amount of compute and training data devoted to reinforcement learning remains modest compared to the level of resources spent on pretraining foundation models. This chart from a recent OpenAI presentation speaks volumes:

[Chart not reproduced.]

At this very moment, AI organizations are preparing to deploy vast sums to scale up their reinforcement learning efforts as quickly as they can. As the chart above depicts, RL is about to transition from a relatively minor component of AI training budgets to the main focus.

What does it mean to scale RL? 'Perhaps the most important ingredient when scaling RL is the environments—in other words, the settings in which you unleash the AI to explore and learn,' said Stanford AI researcher Andy Zhang. 'In addition to sheer quantity of environments, we need higher-quality environments, especially as model capabilities improve. This will require thoughtful design and implementation of environments to ensure diversity and goldilocks difficulty and to avoid reward hacking and broken tasks.' When xAI debuted its new frontier model Grok 4 last month, it announced that it had devoted 'over an order of magnitude more compute' to reinforcement learning than it had with previous models. We have many more orders of magnitude to go.

Today's RL-powered models, while powerful, face shortcomings. The unsolved challenge of difficult-to-verify domains, discussed above, is one. Another critique is known as elicitation: the hypothesis that reinforcement learning doesn't actually endow AI models with greater intelligence but rather just elicits capabilities that the base model already possessed. Yet another obstacle that RL faces is its inherent sample inefficiency compared to other AI paradigms: RL agents must do a tremendous amount of work to receive a single bit of feedback. This 'reward sparsity' has made RL impracticable to deploy in many contexts.

It is possible that scale will be a tidal wave that washes all of these concerns away. If there is one principle that has defined frontier AI in recent years, after all, it is this: nothing matters more than scale. When OpenAI scaled from GPT-2 to GPT-3 to GPT-4 between 2019 and 2023, the models' performance gains and emergent capabilities were astonishing, far exceeding the community's expectations. At every step, skeptics identified shortcomings and failure modes with these models, claiming that they revealed fundamental weaknesses in the technology paradigm and predicting that progress would soon hit a wall. Instead, the next generation of models would blow past these shortcomings, advancing the frontier by leaps and bounds and demonstrating new capabilities that critics had previously argued were impossible. The world's leading AI players are betting that a similar pattern will play out with reinforcement learning. If recent history is any guide, it is a good bet to make. But it is important to remember that AI 'scaling laws'—which predict that AI performance increases as data, compute and model size increase—are not actually laws in any sense of that word.
They are empirical observations that for a time proved reliable and predictive for pretraining language models and that have been preliminarily demonstrated in other data modalities. There is no formal guarantee that scaling laws will always hold in AI, nor how long they will last, nor how steep their slope will be. The truth is that no one knows for sure what will happen when we massively scale up RL. But we are all about to find out. Stay tuned for our follow-up article on this topic—or feel free to reach out directly to discuss!

Looking Forward

Reinforcement learning represents a compelling approach to building machine intelligence for one profound reason: it is not bound by human competence or imagination. Training an AI model on vast troves of labeled data (supervised learning) will make the model exceptional at understanding those labels, but its knowledge will be limited to the annotated data that humans have prepared. Training an AI model on the entire internet (self-supervised learning) will make the model exceptional at understanding the totality of humanity's existing knowledge, but it is not clear that this will enable it to generate novel insights that go beyond what humans have already put forth. Reinforcement learning faces no such ceiling. It does not take its cues from existing human data. An RL agent learns for itself, from first principles, through first-hand experience.

AlphaGo's 'Move 37' serves as the archetypal example here. In one of its matches against human world champion Lee Sedol, AlphaGo played a move that violated thousands of years of accumulated human wisdom about Go strategy. Most observers assumed it was a miscue. Instead, Move 37 proved to be a brilliant play that gave AlphaGo a decisive advantage over Sedol. The move taught humanity something new about the game of Go. It has forever changed the way that human experts play the game.

The ultimate promise of artificial intelligence is not simply to replicate human intelligence. Rather, it is to unlock new forms of intelligence that are radically different from our own—forms of intelligence that can come up with ideas that we never would have come up with, make discoveries that we never would have made, help us see the world in previously unimaginable ways. We have yet to see a 'Move 37 moment' in the world of generative AI. It may be a matter of weeks or months—or it may never happen. Watch this space.
Yahoo
Western Union to Acquire International Money Express, Inc.
Strategic acquisition strengthens North America retail presence and operating model, expands Intermex beyond its historically high growth Latin America corridors, and is expected to accelerate digital new customer acquisition.

DENVER & MIAMI, August 10, 2025--(BUSINESS WIRE)--The Western Union Company ("Western Union") (NYSE: WU) and International Money Express, Inc. ("Intermex") (NASDAQ: IMXI) today announced they have entered into a definitive agreement under which Western Union will acquire Intermex in an all-cash transaction at $16.00 per IMXI share, representing a total equity and enterprise value of approximately $500 million. This acquisition strengthens Western Union's retail offering in the U.S., expands market coverage in high potential geographies, and is expected to accelerate digital new customer acquisition. Intermex's deep market knowledge, strong agent relationships, and operational expertise further position Western Union to capture growth in the Americas.

"This acquisition is a disciplined, strategic step that strengthens our North America operations and expands our presence with key consumer segments across the U.S.," said Devin McGranahan, President and CEO of Western Union. "Intermex has built a well-recognized brand, as well as strong agent and customer relationships. Together, we will expand our retail footprint, unlock operational efficiencies, and accelerate digital engagement."

"This agreement represents an exciting opportunity to provide Intermex's shareholders with significant and certain value, accelerating our omni-channel strategy, while continuing to deliver for our customers," said Bob Lisy, Chairman and CEO of Intermex. "This combination with Western Union brings together two complementary businesses that are well positioned to drive growth across North America."

Strategic Rationale and Benefits:

Strategic Alignment
- Unique opportunity for Western Union to acquire a well-positioned remittance business, adding scale in historically high-growth Latin America geographies.
- Opportunity to serve Intermex's 6 million customers, giving them access to Western Union's robust digital platforms and capabilities.

Strengthened U.S. Retail Platform
- Expands and stabilizes Western Union's U.S. retail footprint, enhancing resilience and improving customer access across the Americas.
- Creates an opportunity to leverage Intermex's decades of operational and cultural expertise to drive targeted, sustainable retail growth.

Meaningful Synergy Potential
- Expect $30 million in annual run-rate cost synergies within 24 months.
- Potential for additional revenue synergies through broader distribution and product offerings, enhancing speed, reliability, and customer value.

Transaction Details:

Under the terms of the agreement, Western Union will acquire Intermex for $16.00 per share in cash, representing approximately $500 million in equity and enterprise value. This reflects a roughly 50% premium to Intermex's 90-day volume-weighted average price. The acquisition is expected to be immediately accretive to Western Union's adjusted EPS by more than $0.10 in the first full year post close and to generate approximately $30 million in annual run-rate cost synergies within the first 24 months, with potential further upside from revenue synergies by integrating Intermex's capabilities into Western Union's partner and customer network. The transaction has been unanimously approved by Western Union's Board of Directors.
Intermex's Board of Directors – acting on the unanimous recommendation of its independent Strategic Alternatives Committee – has also unanimously approved the transaction and recommends that Intermex stockholders vote in favor of the merger. The transaction, expected to close in mid-2026, is subject to customary closing conditions and regulatory approvals, including clearance under the Hart-Scott-Rodino Act and approvals from financial regulators, as well as approval by Intermex's stockholders. Following completion, the companies expect to implement a coordinated integration plan designed to provide a smooth transition for all customers, agents, and partners.

Advisors: PJT Partners is serving as exclusive financial advisor and Sidley Austin LLP as legal advisor to Western Union. Financial Technology Partners LP is serving as financial advisor and Holland & Knight LLP as legal advisor to Intermex. Lazard Frères & Co. LLC is serving as financial advisor and Cravath, Swaine & Moore LLP as legal advisor to Intermex's Strategic Alternatives Committee.

Investor and Analyst Conference Call and Presentation: Western Union will host a conference call and webcast at 8:30 a.m. ET on Monday, August 11, 2025. The webcast and presentation will be available on the Investor Relations section of Western Union's website. Registration for the event is required. Please register at least 15 minutes prior to the scheduled start time. A webcast replay will be available shortly after the event. To listen to the webcast, please visit the Investor Relations section of Western Union's website or use the following link: Webcast Link. Alternatively, participants may join via telephone. In the U.S., dial +1 (719) 359-4580, followed by the meeting ID, which is 997 4264 7200, and the passcode, which is 985803. For participants outside the U.S., dial the country number from the international directory, followed by the meeting ID, which is 997 4264 7200, and the passcode, which is 985803.

About Western Union

The Western Union Company (NYSE: WU) is committed to helping people around the world who aspire to build financial futures for themselves, their loved ones and their communities. Our leading cross-border, cross-currency money movement, payments and digital financial services empower consumers, businesses, financial institutions and governments—across more than 200 countries and territories and over 130 currencies—to connect with billions of bank accounts, millions of digital wallets and cards, and a global footprint of hundreds of thousands of retail locations. Our goal is to offer accessible financial services that help people and communities prosper. For more information, visit the Western Union website.

About Intermex

Founded in 1994, Intermex applies proprietary technology enabling consumers to send money from the United States, Canada, Spain, Italy, the United Kingdom and Germany to more than 60 countries. The Company provides the digital movement of money through a network of agent retailers in the United States, Canada, Spain, Italy, the United Kingdom and Germany; Company-operated stores; our mobile apps; and the Company's websites. Transactions are fulfilled and paid through thousands of retail and bank locations around the world. Intermex is headquartered in Miami, Florida, with international offices in Puebla, Mexico, Guatemala City, Guatemala, London, England, and Madrid, Spain.
For more information about Intermex, please visit the Intermex website.

Safe Harbor Compliance Statement for Forward-Looking Statements

This press release contains certain statements that are forward-looking within the meaning of the Private Securities Litigation Reform Act of 1995. These statements are not guarantees of future performance and involve certain risks, uncertainties, and assumptions that are difficult to predict. Actual outcomes and results may differ materially from those expressed in, or implied by, our forward-looking statements. Words such as "expects," "intends," "targets," "anticipates," "believes," "estimates," "guides," "provides guidance," "provides outlook," "projects," "designed to," and other similar expressions or future or conditional verbs such as "may," "will," "should," "would," "could," and "might" are intended to identify such forward-looking statements. Readers of this press release of The Western Union Company (the "Company," "Western Union," "we," "our," or "us") should not rely solely on the forward-looking statements and should consider all uncertainties and risks discussed in the Risk Factors section of our Annual Report on Form 10-K for the year ended December 31, 2024 and in our subsequent filings with the Securities and Exchange Commission. The statements are only as of the date they are made, and the Company undertakes no obligation to update any forward-looking statement.

Possible events or factors that could cause results or performance to differ materially from those expressed in our forward-looking statements include the following: changes in economic conditions, trade disruptions, or significantly slower growth or declines in the money transfer, payment service, and other markets in which we operate; interruptions in migration patterns or other events, such as public health emergencies, any changes arising as a result of policy changes in the United States and/or other key markets, civil unrest, war, terrorism, natural disasters, or non-performance by our banks, lenders, insurers, or other financial services providers; failure to compete effectively in the money transfer and payment service industry, including among other things, with respect to digital, mobile and internet-based services, card associations, and card-based payment providers, and with digital currencies, including cryptocurrencies; geopolitical tensions, political conditions and related actions, including trade restrictions, tariffs, and government sanctions; deterioration in customer confidence in our business; failure to maintain our agent network and business relationships; our ability to adopt new technology; the failure to realize anticipated financial benefits from mergers, acquisitions and divestitures; decisions to change our business mix; exposure to foreign exchange rates; changes in tax laws, or their interpretation, and unfavorable resolution of tax contingencies; cybersecurity incidents involving any of our systems or those of our vendors or other third parties; cessation of or defects in various services provided to us by third-party vendors; our ability to realize the anticipated benefits from restructuring-related initiatives; our ability to attract and retain qualified key employees; failure to manage credit and fraud risks presented by our agents, clients, and consumers; adverse rating actions by credit rating agencies; our ability to protect our intellectual property rights, and to defend ourselves against potential intellectual property infringement claims; material changes in
the market value or liquidity of securities that we hold; restrictions imposed by our debt obligations; liabilities or loss of business resulting from a failure by us, our agents, or their subagents to comply with laws and regulations and regulatory or judicial interpretations thereof; increased costs or loss of business due to regulatory initiatives and changes in laws, regulations, and industry practices and standards; developments resulting from governmental investigations and consent agreements with, or investigations or enforcement actions by, regulators and other government authorities; liabilities resulting from litigation; failure to comply with regulations and evolving industry standards regarding data privacy; failure to comply with consumer protection laws; effects of unclaimed property laws or their interpretation or the enforcement thereof; failure to comply with working capital requirements; changes in accounting standards, rules and interpretations; and other unanticipated events and management's ability to identify and manage these and other risks.

Important factors that could cause Western Union's or Intermex's or the combined company's actual results to differ materially from the results referred to in the forward-looking statements Western Union makes in this release include: the possibility that the conditions to the consummation of the proposed acquisition of Intermex (the "Proposed Acquisition") will not be satisfied on the terms or timeline expected, or at all; failure to obtain, or delays in obtaining, or adverse conditions related to obtaining stockholder or regulatory approvals sought in connection with the Proposed Acquisition; dependence on key agents and the potential effects of network disruption; the possibility that Western Union may be unable to achieve expected benefits, synergies and operating efficiencies in connection with the Proposed Acquisition; and failure to retain key management of Western Union or Intermex.

Additional Information and Where to Find It

This communication relates to a proposed acquisition (the "Transaction") of International Money Express, Inc. ("Intermex") by The Western Union Company ("Western Union"). In connection with the proposed transaction between Intermex and Western Union, Intermex will file with the Securities and Exchange Commission (the "SEC") a proxy statement (the "Proxy Statement"), the definitive version of which will be sent or provided to Intermex stockholders. Intermex may also file other documents with the SEC regarding the proposed transaction. This document is not a substitute for the Proxy Statement or any other document which Intermex may file with the SEC. INVESTORS AND SECURITY HOLDERS ARE URGED TO READ THE PROXY STATEMENT AND ANY OTHER RELEVANT DOCUMENTS THAT ARE FILED OR WILL BE FILED WITH THE SEC, AS WELL AS ANY AMENDMENTS OR SUPPLEMENTS TO THESE DOCUMENTS, CAREFULLY AND IN THEIR ENTIRETY BECAUSE THEY CONTAIN OR WILL CONTAIN IMPORTANT INFORMATION ABOUT THE PROPOSED TRANSACTION AND RELATED MATTERS.
Investors and security holders may obtain free copies of the Proxy Statement (when it is available) and other documents that are filed with the SEC or will be filed with the SEC by Intermex (when they become available) through the website maintained by the SEC or from Intermex at its website.

Participants in the Solicitation

Intermex, and certain of its directors and executive officers may be deemed to be participants in the solicitation of proxies from the stockholders of Intermex in connection with the Transaction under the rules of the SEC. Information about the interests of the directors and executive officers of Intermex and other persons who may be deemed to be participants in the solicitation of stockholders of Intermex in connection with the Transaction and a description of their direct and indirect interests, by security holdings or otherwise, will be included in the Proxy Statement related to the Transaction, which will be filed with the SEC. Additional information about Intermex, the directors and executive officers of Intermex and their ownership of Intermex common stock can also be found in its Annual Report on Form 10-K for the year ended December 31, 2024, as filed with the SEC on February 27, 2025, and its definitive proxy statement, as amended, as filed with the SEC on May 12, 2025, and other documents subsequently filed by Intermex with the SEC. Free copies of these documents may be obtained as described above. To the extent holdings of Intermex securities by its directors or executive officers have changed since the amounts set forth in such documents, such changes have been or will be reflected on Initial Statements of Beneficial Ownership on Form 3 or Statements of Beneficial Ownership on Form 4 filed with the SEC. Additional information regarding the identity of potential participants, and their direct or indirect interests, by security holdings or otherwise, will be included in the Proxy Statement relating to the proposed transaction when it is filed with the SEC.

WU-G

Contacts

Western Union Contacts:
Media Relations: Amanda Demarest, media@
Investor Relations: Tom

Intermex Contacts:
Investor Relations Coordinator: Alex Sadowski, IR@


Business Wire
Narro Associates Expands Further South with Opening of a New London Studio
LONDON--(BUSINESS WIRE)--Narro Associates, a leading structural and civil engineering consultancy renowned for its expertise in conservation engineering, is proud to announce the opening of its seventh location, this time in the heart of London. This new studio will officially open its doors on 11th August 2025. Narro has one of the largest teams of Conservation Accredited Engineers of any Practice in the UK.

Building on the success of its existing offices in Edinburgh, Glasgow, Inverness, Stirling, Aberdeen, and Newcastle, the London studio marks a significant milestone in Narro's continued growth and commitment to delivering specialist structural and civil engineering consultancy services across the UK. The London team will be led by Managing Director Ben Adam and Associate Director Lily Erskine. Both Ben and Lily are Conservation Accredited Engineers (CARE), having demonstrated their commitment to the preservation of the built environment, and the protection of historic structures. The studio will build on Narro's four decades of experience by focusing on conservation-led projects, as well as opportunities in the cultural sector, such as museums, galleries, and events spaces.

'Opening a studio in London is a natural next step for Narro,' said Ben Adam, Managing Director. 'We've built a strong reputation for our conservation and cultural sector expertise, and we're excited to bring that experience to a city with such a rich architectural heritage. We have one of the largest teams of CARE engineers of any practice in the UK. We also have a fantastic team across the Practice. While Lily and I are heading up the initial London team, we will also have an extensive team of over 80 engineering staff across our other locations. I'm confident that Narro is well-positioned to contribute meaningfully to London's built environment, and that we have the capabilities and experience for projects of all sizes.'

Lily Erskine added, 'I'm absolutely thrilled to be part of this new chapter for Narro. We're passionate about working with heritage infrastructure, and London offers an incredible variety of historic buildings and cultural institutions. I'm excited to work on projects that celebrate and protect that heritage.' Lily joined Narro in 2016 from Alan Baxter Ltd, where she worked on the Palace of Westminster, Goldsmith Contemporary Arts Centre and the Royal Academy of Arts.

With a team of 100 professionals across the UK, Narro Associates continues to champion sustainable and sensitive engineering solutions. The London studio will serve as a hub for collaboration with architects, heritage consultants, and cultural institutions, reinforcing Narro's dedication to thoughtful design and technical excellence. For more information about Narro and its services, please visit the Narro Associates website.

About Narro:

Narro Associates was established in Edinburgh in 1986 with the purpose of providing a high-quality consulting structural and civil engineering service with a focus on architectural engineering. From the original office, the organisation now has over a hundred employees in seven locations: Edinburgh, Glasgow, Stirling, Inverness, Aberdeen, Newcastle, and London. The practice has experience in conservation, commercial and industrial projects, residential, hotels and leisure and the public sector. In 2014 the company became Employee Owned, enabling all staff to play a vital role in ensuring the continued and long-term success of the practice.