Rayonier Advanced Materials President Acquires 15% More Stock


Yahoo · 22-05-2025

Investors who take an interest in Rayonier Advanced Materials Inc. (NYSE:RYAM) should note that the company's President, De Lyle Bloomquist, recently paid US$3.95 per share to buy US$250k worth of the stock. That's a very solid buy in our book, and it increased his holding by a noteworthy 15%.
In fact, the recent purchase by De Lyle Bloomquist was the biggest purchase of Rayonier Advanced Materials shares made by an individual insider in the last twelve months, according to our records. That implies an insider found the current price of US$3.97 per share enticing. Of course, insiders' views can change after a purchase, but buying at close to the prevailing price is, on balance, a good sign. The good news for Rayonier Advanced Materials shareholders is that insiders were buying at near the current price.
In the last twelve months Rayonier Advanced Materials insiders were buying shares, but not selling. The average buy price was around US$4.62. This is nice to see, since it implies that insiders might see value around current prices.
Rayonier Advanced Materials is not the only stock insiders are buying. So take a peek at this free list of under-the-radar companies with insider buying.
I like to look at how many shares insiders own in a company, to help inform my view of how aligned they are with shareholders. High insider ownership often makes company leadership more mindful of shareholder interests. It appears that Rayonier Advanced Materials insiders own 4.6% of the company, worth about US$13m. This level of insider ownership is good, though just short of being a real standout. It certainly suggests a reasonable degree of alignment.
It is good to see the recent insider purchase. We also take confidence from the longer term picture of insider transactions. But we don't feel the same about the fact the company is making losses. When combined with notable insider ownership, these factors suggest Rayonier Advanced Materials insiders are well aligned, and that they may think the share price is too low. So while it's helpful to know what insiders are doing in terms of buying or selling, it's also helpful to know the risks that a particular company is facing. Every company has risks, and we've spotted 1 warning sign for Rayonier Advanced Materials you should know about.
Of course, you might find a fantastic investment by looking elsewhere. So take a peek at this free list of interesting companies.
For the purposes of this article, insiders are those individuals who report their transactions to the relevant regulatory body. We currently account for open market transactions and private dispositions of direct interests only, but not derivative transactions or indirect interests.
Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.

This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology, and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.



Related Articles

This AI Company Wants Washington To Keep Its Competitors Off the Market

Yahoo · 22 minutes ago


Dario Amodei, CEO of the artificial intelligence company Anthropic, published a guest essay in The New York Times Thursday arguing against a proposed 10-year moratorium on state AI regulation. Amodei argues that a patchwork of regulations would be better than no regulation whatsoever. Skepticism is warranted whenever the head of an incumbent firm calls for more regulation, and this case is no different. If Amodei gets his way, Anthropic would face less competition—to the detriment of AI innovation, AI security, and the consumer.

Amodei's op-ed came in response to a provision of the so-called One Big Beautiful Bill Act, which would prevent states, cities, and counties from enforcing any regulation that specifically targets AI models, AI systems, or automated decision systems for 10 years. Senate Republicans have amended the clause from a simple requirement to a condition for receiving federal broadband funds, in order to comply with the Byrd Rule, which in Politico's words "blocks anything but budgetary issues from inclusion in reconciliation."

Amodei begins by describing how, in a recent stress test conducted at his company, a chatbot threatened to forward evidence of an experimenter's adultery to his wife unless he withdrew plans to shut the AI down. The CEO also raises more tangible concerns, such as reports that a version of Google's Gemini model is "approaching a point where it could help people carry out cyberattacks." Matthew Mittelsteadt, a technology fellow at the Cato Institute, tells Reason that the stress test was "very contrived" and that "there are no AI systems where you must prompt it to turn it off." You can just turn it off. He also acknowledges that, while there is "a real cybersecurity danger [of] AI being used to spot and exploit cyber-vulnerabilities, it can also be used to spot and patch" them.
Outside of cyberspace and in, well, actual space, Amodei sounds the alarm that AI could acquire the ability "to produce biological and other weapons." But there's nothing new about that: Knowledge and reasoning, organic or artificial—ultimately wielded by people in either case—can be used to cause problems as well as to solve them. An AI that can model three-dimensional protein structures to create cures for previously untreatable diseases can also create virulent, lethal pathogens.

Amodei recognizes the double-edged nature of AI and says voluntary model evaluation and publication are insufficient to ensure that benefits outweigh costs. Instead of a 10-year moratorium, Amodei calls on the White House and Congress to work together on a transparency standard for AI companies. In lieu of federal testing standards, Amodei says state laws should pick up the slack without being "overly prescriptive or burdensome." But that caveat is exactly the kind of wishful thinking Amodei indicts proponents of the moratorium for: Not only would 50 state transparency laws be burdensome, says Mittelsteadt, but they could "actually make models less legible."

Neil Chilson of the Abundance Institute also inveighed against Amodei's call for state-level regulation, which is much more onerous than Amodei suggests. "The leading state proposals…include audit requirements, algorithmic assessments, consumer disclosures, and some even have criminal penalties," Chilson tweeted, so "the real debate isn't 'transparency vs. nothing,' but 'transparency-only federal floor vs. intrusive state regimes with audits, liability, and even criminal sanctions.'"

Mittelsteadt thinks national transparency regulation is "absolutely the way to go." But how the U.S. chooses to regulate AI might not have much bearing on Skynet-doomsday scenarios, because, while America leads the way in AI, it's not the only player in the game.
"If bad actors abroad create Amodei's theoretical 'kill everyone bot,' no [American] law will matter," says Mittelsteadt. But such a law can "stand in the way of good actors using these tools for defense."

Amodei is not the only CEO of a leading AI company to call for regulation. In 2023, Sam Altman, co-founder and then-CEO of OpenAI, called on lawmakers to consider "intergovernmental oversight mechanisms and standard-setting" of AI. In both cases and in any others that come along, the public should beware of calls for AI regulation that will foreclose market entry, protect incumbent firms' profits from being bid away by competitors, and reduce the incentives to maintain market share the benign way: through innovation and product differentiation.

D.R. Horton, Inc. Announces Dual Listing on NYSE Texas

Business Wire · 24 minutes ago


ARLINGTON, Texas--(BUSINESS WIRE)-- D.R. Horton, Inc. (NYSE:DHI), America's Builder, announced today the dual listing of its common stock on NYSE Texas, the newly launched fully electronic equities exchange headquartered in Dallas, TX. The Company will maintain its primary listing on the New York Stock Exchange and trade with the same 'DHI' ticker symbol on NYSE Texas.

David Auld, Chairman of the Board, said, 'We are pleased to be a Founding Member of NYSE Texas and show our support to the state we have called home for nearly fifty years. We believe Texas's long-standing commitment to pro-growth, business-friendly policies promotes a resilient economy. As the Lone Star State's economy continues to thrive and build momentum, we are committed to contributing in a meaningful way by providing housing to the rapidly growing population of Texas.'

'Based in Texas, D.R. Horton has been the largest homebuilder by volume in the United States for over two decades,' said Chris Taylor, Chief Development Officer, NYSE Group. 'We are proud to welcome the Company to NYSE Texas as a Founding Member.'

About D.R. Horton, Inc.

D.R. Horton, Inc., America's Builder, has been the largest homebuilder by volume in the United States since 2002 and has closed more than 1,100,000 homes in its 46-year history. D.R. Horton has operations in 126 markets in 36 states across the United States and is engaged in the construction and sale of high-quality homes through its diverse product portfolio with sales prices generally ranging from $250,000 to over $1,000,000. The Company also constructs and sells both single-family and multi-family rental properties. During the twelve-month period ended March 31, 2025, D.R. Horton closed 86,137 homes in its homebuilding operations, in addition to 3,312 single-family rental homes and 2,282 multi-family rental units in its rental operations. D.R. Horton also provides mortgage financing, title services and insurance agency services for its homebuyers and is the majority-owner of Forestar Group Inc., a publicly traded national residential lot development company.

Anthropic's AI-generated blog dies an early death

Yahoo · 28 minutes ago


Claude's blog is no more. A week after TechCrunch profiled Anthropic's experiment to task the company's Claude AI models with writing blog posts, Anthropic wound down the blog and redirected the address to its homepage. Sometime over the weekend, the Claude Explains blog disappeared — along with its initial few posts. A source familiar tells TechCrunch the blog was a "pilot" meant to help Anthropic's team combine customer requests for explainer-type "tips and tricks" content with marketing goals.

Claude Explains, which had a dedicated page on Anthropic's website and was edited for accuracy by humans, was populated by posts on technical topics related to various Claude use cases (e.g. "Simplify complex codebases with Claude"). The blog, which was intended to be a showcase of sorts for Claude's writing abilities, wasn't clear about how much of Claude's raw writing was making its way into each post. An Anthropic spokesperson previously told TechCrunch that the blog was overseen by "subject matter experts and editorial teams" who "enhance[d]" Claude's drafts with "insights, practical examples, and […] contextual knowledge." The spokesperson also said Claude Explains would expand to topics ranging from creative writing to data analysis to business strategy. Apparently, those plans changed in pretty short order.

"[Claude Explains is a] demonstration of how human expertise and AI capabilities can work together," the spokesperson told TechCrunch earlier this month. "[The blog] is an early example of how teams can use AI to augment their work and provide greater value to their users. Rather than replacing human expertise, we're showing how AI can amplify what subject matter experts can accomplish."

Claude Explains didn't get the rosiest reception on social media, in part due to the lack of transparency about which copy was AI-generated.
Some users pointed out it looked a lot like an attempt to automate content marketing, an ad tactic that relies on generating content on popular topics to serve as a funnel for potential customers. More than 24 websites were linking to Claude Explains posts before Anthropic wound down the pilot, according to search engine optimization tool Ahrefs. That's not bad for a blog that was only live for around a month.

Anthropic might've also grown wary of implying Claude performs better at writing tasks than is actually the case. Even the best AI today is prone to confidently making things up, which has led to embarrassing gaffes on the part of publishers that have publicly embraced the tech. For example, Bloomberg has had to correct dozens of AI-generated summaries of its articles, and G/O Media's error-riddled AI-written features — published against editors' wishes — attracted widespread ridicule.

This article originally appeared on TechCrunch.
