
Inflation-Linked Bonds Rebound on Trump Tariffs: Credit Weekly
By Greg Ritchie and Helene Durand
Money managers are flocking to bonds that hedge against inflation amid uncertainty about tariffs and their impact on the cost of living.
Federal Reserve Bank of Philadelphia President Patrick Harker warned this past week that risks to the economy are rising, in part due to increasing prices. That fear helped spur the Bloomberg Global Inflation Linked Index, a gauge of investment-grade inflation-linked debt in developed markets, to gain about 5% from Jan. 13 through Thursday's close.
Related Articles


Los Angeles Times, 2 days ago
CalAssist Mortgage Fund provides $105M in aid for California disaster survivors
The CalAssist Mortgage Fund is a new source of financial relief for California homeowners whose properties were destroyed in recent disasters. The fund provides grants of up to $20,000 to cover three months of mortgage payments for eligible homeowners. The state has allocated $105 million for the program.

'California is extending this ongoing support to disaster victims in Los Angeles and beyond, by assisting with mortgage payments to relieve financial pressure and stress as families rebuild and recover,' Governor Gavin Newsom said in a press release. The grants do not need to be repaid and will be sent directly to mortgage companies, minimizing the steps required for processing and allowing homeowners to focus on recovery.

Major fires, floods, and other government-declared disasters or states of emergency that occurred between January 2023 and January 2025 are qualifying events for the grants, including the Eaton and Palisades fires that burned thousands of homes. Research by the Federal Reserve Bank of Philadelphia, which analyzed wildfire losses in California between 2017 and 2021, found that nearly 40% of damaged homes were underinsured by more than $100,000, and more than 1 in 5 had shortfalls exceeding $200,000. The prevalence of these insurance gaps, combined with the emotional and financial toll of losing a home, highlights the importance of programs like the CalAssist Mortgage Fund.

'Homeowners whose home was destroyed in a recent fire, flood or other disaster deserve support in their recovery. We know that recovery takes time, and the state is here to support,' Newsom said in the release.

The California Housing Finance Agency (CalHFA) is administering the program. In addition to the mortgage grants, CalHFA is also providing $25 million in housing counseling support through its National Mortgage Settlement (NMS) Housing Counseling program. This program is designed to help homeowners understand their options, complete applications, and navigate the recovery process.

To be eligible for financial assistance through the CalAssist Mortgage Fund, an applicant's primary residence must be a single-family home, condo, or permanently affixed manufactured home. Properties with up to four units are included. Disaster survivors must also have an active mortgage or reverse mortgage, and the property cannot currently be in foreclosure. Applications for the CalAssist Mortgage Fund open on June 12, 2025. Homeowners are encouraged to apply as soon as possible before funds are exhausted.

Steps to Apply
For more information on eligibility and the application process, visit
Disaster survivors who need assistance can also call 1-800-501-0019 between 8 a.m. and 5 p.m., Monday through Friday. The LA County Department of Consumer and Business Affairs' foreclosure prevention expert can help homeowners facing difficulty paying their mortgage. You can also discuss other relief options with a U.S. Department of Housing and Urban Development (HUD)-certified housing counselor.
Yahoo, May 21, 2025
Opinion - Looking back at election forecasts
Predicting future events is difficult. The Babylonians discovered this nearly 2,700 years ago, when they began trying to predict the weather. We have been working to improve those forecasts ever since. Lives, crops and more depended on them. It took until 1859 for a country (Britain) to offer its first official weather forecast (for shipping, the lifeblood of the maritime empire).

After millennia of refinement, just how accurate are weather forecasts? The National Oceanic and Atmospheric Administration tells us that five-day forecasts are accurate nearly 90 percent of the time. Ten-day forecasts and longer are only correct about half the time. When it comes to where hurricanes will make landfall, even a 48-hour forecast has a margin of error around 50 nautical miles.

Humans can be even less predictable than weather patterns. Yet here, too, the stakes can be sky-high. Billions, if not trillions, of dollars are at stake in economic forecasts. Corporations, stock market investors and even the Federal Reserve rely on them to make consequential decisions. Two Berkeley business school researchers analyzed responses to the Survey of Professional Forecasters, conducted by the Federal Reserve Bank of Philadelphia since 1968. They found forecasters were correct a mere 23 percent of the time. To take just one recent example, economists predicted U.S. gross domestic product would grow by 1.3 percent in 2024. In fact, the growth rate was more than twice the forecast. A dear friend who spent a few years at a prominent econometric forecasting firm decades ago reported that the staff motto was 'we predicted 10 of the last three recessions.'

Election forecasting has a shorter history. It is both more difficult and less consequential, since the forecasts have no effect on the real world. But it has grown into a cottage industry.

Given the difficulties, it is surprising just how accurate these forecasts have proven to be, especially when they employ data collected many months prior to the event itself. The American Political Science Association recently published a journal with a dozen forecasts all completed well before the election, each of which used somewhat different data and varying methodologies. Most of them foresaw the close popular vote finish. The high-end prediction for then-Vice President Kamala Harris's share of the popular vote was 54.5 percent and the low-end was 45 percent: the first based on online betting data, the second on the expectations of ordinary people, techniques that I would caution against. Still, most of the predictions clustered within a few points of the actual results. Of the 11 entrants who forecast the popular vote, five foresaw victory for President Trump and six a win for Harris. Five predicted an Electoral College victory for Trump, whereas three wrongly anticipated that Harris would win the electoral vote.

As regular readers would expect, the predictions based on fundamentals (the economy, partisanship, presidential approval) tended to be the most accurate. As I have described before, Ray Fair's model, the longest-running such forecast (but not included in the American Political Science Association collection), based largely on hard economic indicators, was within a quarter point of the actual result. Charles Tien and Michael Lewis-Beck added presidential approval to a smaller array of economic variables, producing a forecast also less than a point off the mark. Models employing poll data tended to be slightly farther off.

I have previously quoted statistician George Box saying that 'all models are wrong. Some are useful.' Models are (over)simplifications of the world. To be wholly right, they'd have to be as rich, complex, and confusing as the world itself. But these simplifications can tell us something about the 'whys' of this and other presidential elections. For example, despite the conventional wisdom asserting elections are about the future, most of the accurate models use retrospective information about the past, not data about future expectations. None of these models use information about the candidates' personalities, abilities or issue positions.

Which is to say, the 2024 election was destined to be close, but any Democrat would have had a difficult time winning it. The situational deck was stacked against us, and neither candidate had a secret formula for greatly exceeding expectations. An exceptional candidate backed by an exceptional campaign may have been able to overcome the odds, but that's exactly what would have been required: beating the odds. Would a different candidate, or one who had faced a primary, have done better? We have no way of knowing, but there is no evidence or suggestion Harris blew a race that was hers to lose. Would former President Biden have done better or worse? Again, we cannot know, though one of the American Political Science Association modelers claims evidence that Biden himself would have done slightly worse than Harris did.

It is no longer fashionable to quote Karl Marx, but he was right in saying that individual people 'make history, but not in circumstances of their own choosing.' Psychology teaches us that humans put too much weight on personal factors while underrating the power of circumstances and situations in shaping behavior. These models remind us that circumstances count for a lot and that the new science of presidential election forecasting stacks up pretty well, as predictions go.

Mark Mellman is president of The Mellman Group, a consultancy that has helped elect 30 U.S. senators, 12 governors and dozens of House members. He served as pollster to Senate Democratic leaders for over 30 years and is a member of the American Association of Political Consultants' Hall of Fame. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.