Children's Place (PLCE) Reports Earnings Tomorrow: What To Expect
Kids' apparel and accessories retailer The Children's Place (NASDAQ:PLCE) will announce earnings results tomorrow after the bell. Here's what to look for.
Children's Place beat analysts' revenue expectations by 3.4% last quarter, reporting revenues of $480.2 million, down 5.7% year on year. It was a slower quarter for the company, with full-year EPS guidance missing analysts' expectations and EBITDA falling significantly short of analysts' estimates.
Is Children's Place a buy or sell going into earnings? Read our full analysis here (it's free).
This quarter, analysts expect Children's Place's revenue to grow 1.6% year on year to $463.6 million, a reversal from the 10.2% decrease it recorded in the same quarter last year. An adjusted loss of $1.66 per share is expected.
Analysts covering the company have generally reconfirmed their estimates over the last 30 days, suggesting they anticipate the business to stay the course heading into earnings. Children's Place has missed Wall Street's revenue estimates four times over the last two years.
With Children's Place being the first among its peers to report earnings this season, we don't have anywhere else to look to get a hint at how this quarter will unfold for apparel and footwear retail stocks. However, the segment has faced declining investor sentiment, as Children's Place's peer group is down 3.8% on average over the last month. Children's Place's stock price was unchanged over the same period and is heading into earnings with an average analyst price target of $8 (compared to the current share price of $7.14).
Today's young investors likely haven't read the timeless lessons in The Gorilla Game: Picking Winners in High Technology because it was written more than 20 years ago, when Microsoft and Apple were first establishing their supremacy. But if we apply the same principles, then enterprise software stocks leveraging their own generative AI capabilities may well be the Gorillas of the future. So, in that spirit, we are excited to present our Special Free Report on a profitable, fast-growing enterprise software stock that is already riding the automation wave and looking to catch the generative AI wave next.
AI chatbots need more books to learn from. These libraries are opening their stacks
CAMBRIDGE, Mass. -- Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks. Nearly one million books published as early as the 15th century — and in 254 languages — are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library. Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots. 'It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright,' said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold 'significant amounts of interesting cultural, historical and language data' that's missing from the past few decades of online commentary that AI chatbots have mostly learned from. Supported by 'unrestricted gifts' from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve. 'We're trying to move some of the power from this current AI moment back to these institutions,' said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. 'Librarians have always been the stewards of data and the stewards of information.' Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s — a Korean painter's handwritten thoughts about cultivating flowers and trees. 
The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians. It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems. 'A lot of the data that's been used in AI training has not come from original sources,' said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes 'all the way back to the physical copy that was scanned by the institutions that actually collected those items,' he said. Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens — units of data, each of which can represent a piece of a word. Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom but still just a fraction of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos. Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from 'shadow libraries' of pirated works. Now, with some reservations, the real libraries are standing up. OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitizing rare texts and using AI to help transcribe them.
When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitized would be for everyone, said Jessica Chapel, its chief of digital and online services. 'OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning,' Chapel said. Digitization is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th century by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway. 'We've been very clear that, "Hey, we're a public library,"' Chapel said. 'Our collections are held for public use, and anything we digitized as part of this project will be made public.' Harvard's collection was already digitized starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. It was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims. Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings. How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download.
The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin. A book collection steeped in 19th century thought could also be 'immensely critical' for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. 'At a university, you have a lot of pedagogy around what it means to reason,' Leppert said. 'You have a lot of scientific information about how to run processes and how to run analyses.' At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives. 'When you're dealing with such a large data set, there are some tricky issues around harmful content and language,' said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab, who said the initiative is trying to provide guidance for users about mitigating the risks of using the data, to 'help them make their own informed decisions and use AI responsibly.'


Yahoo
America's Car-Mart (NASDAQ:CRMT) Delivers Strong Q1 Numbers, Stock Soars
Used-car retailer America's Car-Mart (NASDAQ:CRMT) beat Wall Street's revenue expectations in Q1 CY2025, with sales up 1.9% year on year to $370.2 million. Its GAAP profit of $1.26 per share was 46.1% above analysts' consensus estimates. Is now the time to buy America's Car-Mart? Find out in our full research report.

- Revenue: $370.2 million vs analyst estimates of $343.5 million (1.9% year-on-year growth, 7.8% beat)
- EPS (GAAP): $1.26 vs analyst estimates of $0.86 (46.1% beat)
- Adjusted EBITDA: $33.92 million vs analyst estimates of $25.12 million (9.2% margin, 35.1% beat)
- Operating Margin: 8.6%, up from 5.1% in the same quarter last year
- Free Cash Flow: $18.44 million, up from -$12.01 million in the same quarter last year
- Locations: 154 at quarter end, in line with the same quarter last year
- Same-Store Sales: fell 3.9% year on year (-5.3% in the same quarter last year)
- Market Capitalization: $477 million

With a strong presence in the Southern and Central US, America's Car-Mart (NASDAQ:CRMT) sells used cars to budget-conscious consumers. Examining a company's long-term performance can provide clues about its quality. Any business can have short-term success, but a top-tier one grows for years. With $1.39 billion in revenue over the past 12 months, America's Car-Mart is a small retailer, which sometimes brings disadvantages compared to larger competitors benefiting from economies of scale and negotiating leverage with suppliers. On the bright side, it can grow faster because it has more white space to build new stores. As you can see below, America's Car-Mart grew its sales at a solid 13% compounded annual growth rate over the last six years (we compare to 2019 to normalize for COVID-19 impacts) despite not opening many new stores. This quarter, America's Car-Mart reported modest year-on-year revenue growth of 1.9% but beat Wall Street's estimates by 7.8%.
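The "beat" percentages in these results are simply the gap between a reported figure and the analyst consensus estimate. A minimal sketch of that arithmetic (the helper name is ours, not from any reporting service; the article's EPS and EBITDA beat figures differ slightly from this formula because the published inputs are rounded):

```python
def beat_pct(reported: float, estimate: float) -> float:
    """Percentage by which a reported figure exceeds the consensus estimate."""
    return (reported / estimate - 1) * 100

# Revenue: $370.2M reported vs $343.5M consensus
print(f"Revenue beat: {beat_pct(370.2, 343.5):.1f}%")  # about 7.8%
```

Running the same check on the revenue line reproduces the 7.8% beat quoted above to one decimal place.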
Looking ahead, sell-side analysts expect revenue to grow 4.7% over the next 12 months, a deceleration versus the last six years. Still, this projection is healthy and implies the market sees success for its products. Here at StockStory, we certainly understand the potential of thematic investing. Diverse winners from Microsoft (MSFT) to Alphabet (GOOG), Coca-Cola (KO) to Monster Beverage (MNST) could all have been identified as promising growth stories with a megatrend driving the growth. So, in that spirit, we've identified a relatively under-the-radar profitable growth stock benefiting from the rise of AI, available to you FREE via this link.

America's Car-Mart listed 154 locations in the latest quarter and has kept its store count flat over the last two years while other consumer retail businesses have opted for growth. When a retailer keeps its store footprint steady, it usually means demand is stable and it's focusing on operational efficiency to increase profitability. The change in a company's store base only tells one side of the story. The other is the performance of its existing locations and e-commerce sales, which informs management teams whether they should expand or downsize their physical footprints.

Same-store sales provide a deeper understanding of this issue because they measure organic growth at brick-and-mortar shops open for at least a year. America's Car-Mart's demand has been shrinking over the last two years as its same-store sales have averaged 2.7% annual declines. This performance isn't ideal, and we'd be concerned if America's Car-Mart starts opening new stores to artificially boost revenue growth. In the latest quarter, America's Car-Mart's same-store sales fell by 3.9% year on year. This decrease represents a further deceleration from its historical levels. We hope the business can get back on track.
We were impressed by how significantly America's Car-Mart blew past analysts' revenue, EPS, and adjusted operating income expectations this quarter. Zooming out, we think this quarter featured some important positives. The stock traded up 6.9% to $61.75 immediately following the results. America's Car-Mart may have had a good quarter, but does that mean you should invest right now? The latest quarter does matter, but not nearly as much as longer-term fundamentals and valuation, when deciding if the stock is a buy. We cover that in our actionable full research report, which you can read here (it's free).