Latest news with #Tech

8 hours ago
- Sport
Texas Tech tops UCLA 3-1 behind Canady's pitching to reach Women's College World Series semifinals
OKLAHOMA CITY -- NiJaree Canady gave up just four hits and struck out seven, and Texas Tech defeated UCLA 3-1 on Saturday to reach the Women's College World Series semifinals for the first time.

The Red Raiders (52-12) need one win against Oregon or Oklahoma on Monday to reach the best-of-three championship series. UCLA (55-12) is still alive in the double-elimination format. The Bruins will play Tennessee in an elimination game on Sunday.

Saturday's contest matched programs with very different histories. UCLA has a record 12 World Series championships while Texas Tech just won its first World Series game on Thursday in its first ever trip.

"It feels amazing, just this being our first time here as a team and just being able to get to the semifinals," Canady said. "I feel like it's a huge accomplishment by itself, but obviously we're not finished. We're going for the whole thing like every other team here. But it's definitely something to be proud of."

Canady has plenty of World Series experience. She led Stanford to the semifinals the past two years and eliminated UCLA from the World Series last year before transferring to Tech. Canady ran into trouble against UCLA a few times on Saturday before coming through, like she did so many times before in a Cardinal uniform.

"I guess you've got to start everything with NiJa in the circle," Tech coach Gerry Glasco said. "She's just so fantastic, and I thought she pitched a gem of a game."

UCLA loaded the bases with one out in the second, yet Canady, the National Fastpitch Coaches Association's Pitcher of the Year, escaped without allowing a run.

Texas Tech's Makayla Garcia stole home in the fifth to open the scoring. She slid between UCLA catcher Alexis Ramirez's legs to score the run.

"Coach Glasco told me, 'Hey, we're gonna go and we're going to take a chance,'" Garcia said. "And I had to trust him in that moment, and I trust him -- he's a great coach. And so I was like, 'You know what? We're gonna do it.' And it had to work in our favor. And luckily it did."

UCLA's Kaniya Bragg answered with a solo homer in the bottom of the inning. Hailey Toney's solo blast in the sixth put Tech ahead 2-1 and Raegan Jennings' RBI single in the seventh made it 3-1. UCLA got two on with no outs in the seventh, and Canady again avoided damage.

"We just found a way to win," Glasco said. "And that's kind of what we've become. We pride ourselves on being a mentally tough team, a resilient team that can go out under pressure and play defense when we have to play defense in tight moments."

Taylor Tinsley took the loss. She went the distance and gave up three runs on four hits. Tinsley said the Bruins are ready to move on.

"I feel like the big thing with this team is we have a really short-term memory," she said. "Each play is kind of like in the past. This game is already as old as dirt for us, so we're just ready to get back out there."


Los Angeles Times
10 hours ago
- Sport
- Los Angeles Times
UCLA facing WCWS elimination after comeback sputters in loss to Texas Tech
OKLAHOMA CITY — NiJaree Canady gave up just four hits and struck out seven, and Texas Tech defeated UCLA 3-1 on Saturday to reach the Women's College World Series semifinals for the first time.

The Red Raiders (52-12) need one win against Oregon or Oklahoma on Monday to reach the best-of-three championship series. UCLA (55-12) is still alive in the double-elimination format. The Bruins will play Tennessee in an elimination game on Sunday at noon PDT.

Saturday's contest matched programs with very different histories. UCLA has a record 12 World Series championships while Texas Tech just won its first World Series game on Thursday.

Canady has plenty of World Series experience. She led Stanford to the semifinals the past two years and eliminated UCLA from the World Series last year before transferring to Tech. Canady ran into trouble against UCLA a few times on Saturday before coming through. UCLA loaded the bases with one out in the second, yet Canady, the National Fastpitch Coaches Assn.'s Pitcher of the Year, escaped without allowing a run.

Texas Tech's Makayla Garcia stole home in the fifth to open the scoring. Kaniya Bragg answered with a solo homer in the bottom of the inning. Hailey Toney's solo blast in the sixth put Tech ahead 2-1 and Raegan Jennings' RBI single in the seventh made it 3-1. UCLA got two on with no outs in the seventh, and Canady again avoided damage.

Business Insider
2 days ago
- Business
- Business Insider
Inside Amazon's radical redo of the 'Everything Store'
Hello, and welcome to your weekly dose of Big Tech news and insights. I'm your host, Alistair Barr. My dog Maisie came through her surgery. That cost thousands of dollars. How much would you pay to keep your furry friend alive?

Agenda

- We reveal a radical overhaul of Amazon's online marketplace that's been hotly debated inside the tech giant.
- An exclusive look at one of Microsoft's top cloud customers, sharing big numbers you've never seen before.
- New data suggests Big Tech stock-based compensation could be under pressure.

Central story unit

In 2013, my old boss Brad Stone published "The Everything Store." It's the defining book about Amazon's giant e-commerce business. The key idea in the book was infinite product selection. This strategy propelled the company to become the Western world's largest retailer. Based on quarterly sales, it overtook Walmart earlier this year.

Having endless inventory means shoppers are more likely to find what they're looking for on Amazon, increasing the chances they buy something and return again. That's been a powerful advantage over physical retail stores, which can only stock so much.

However, in recent years, some of Amazon's digital aisles have become cluttered and outdated, which could confuse or frustrate shoppers. So, under CEO Andy Jassy, the company has been purging billions of product listings via a secret project known as "Bend the Curve." Business Insider's star tech reporter Eugene Kim has the scoop with all the juicy details.

Does this spell the end of The Everything Store? Nope. There's no way Amazon would give up this hard-won advantage. Instead, it's mostly about cleaning up this giant online marketplace. Product listings get old. Sellers can chuck thousands of listings on there, and some are inaccurate or worse. There are also millions of dollars in cloud savings from not having to host billions of unproductive listings.

Still, this big move has been debated inside Amazon, according to Kim's report. And surveys by Evercore ISI found that fewer shoppers think Amazon's product selection is the best.

News++

Other BI tech stories that caught my eye lately:

- Exclusive: New numbers show just how big a customer Walmart is for Microsoft's cloud business.
- This venture capital firm bought a hospital chain. Why?
- Exclusive: Meta's big bet on virtual reality isn't stopping it from opening retail stores.
- I thought everyone knew not to speak their minds in work surveys? Apparently not.
- The life of the digital nomad is getting harder.

Eval time

My take on who's up and down in the tech industry right now, including updates on Big Tech employee pay.

DOWN: In late February, investor Ross Gerber predicted a 50% drop in Tesla's share price. The stock is up roughly 20% since then. Ouch!

COMP UPDATE: Analysts at Cantor Fitzgerald looked at restricted stock units issued recently by tech companies including Meta, Google, and Uber. RSUs are the main way tech employees get paid. The latest numbers show these equity awards are slowing down or even falling at some companies. The chart below shows changes in RSU grant value per employee.

From the group chat

Other Big Tech stories I found on the interwebs:

- Making a video with fancy new AI tools is harder than you might think. (WSJ)
- Satellite smackdown: Apple versus SpaceX. (The Information)
- A self-driving truck startup siphoned trade secrets to Chinese companies. (WSJ)
- You can't develop chips without software from Cadence and Synopsys. The US is trying to limit China's access to this tech. (FT)

AI playground

This week, I'm telling you about an AI tool that may not be immediately obvious as AI. But it most certainly is. Tesla uses thousands of chips in massive data centers to train AI models that understand video collected from millions of the company's vehicles. This is used to develop FSD software for near-autonomous driving. I've been using FSD a lot this year in my Tesla Model 3 Performance. Here are the highs and lows. Is this a fair assessment?

Tesla plans to roll out a full robotaxi service in Austin in June. This will be fully autonomous, with no human supervision. It's a huge leap. My FSD software still requires me to be responsible and alert. But this FSD diary gives some pretty solid clues to how capable Tesla's current software is. What AI tool should I use next week? Let me know.

User feedback

Specifically, though: I want to know about your recent experiences with Amazon's online marketplace. Have you noticed an improvement in the quality of listings lately? Or have you sensed any change in product selection? Let Eugene Kim know at ekim@


CNET
2 days ago
- Business
- CNET
Want to Buy a New iPhone? Now's Not the Time, and Here's Why
If you're ready to upgrade your iPhone, you may want to hang tight. Apple unveiled the iPhone 16 lineup back in September, which means the company is due to launch the next generation of its handset, likely in the fall. That means if you can wait a couple of months to buy your next iPhone, you can either score the latest device or get a discount on previous models.

Newer iPhones tend to include camera and processor upgrades, as well as new features to make them more enticing. For instance, the iPhone 14 Pro models introduced Dynamic Island, the iPhone 15 Pro and Pro Max debuted the Action button, and the iPhone 16 series added the Camera Control button and Apple Intelligence across the full lineup, rather than on just the Pro models.

According to leaks and reports, the iPhone 17 lineup, which Apple technically has yet to confirm, could have a fresh camera setup and new color options and, perhaps most notably, could include a slimmer version of the iPhone, to compete with similar offerings like Samsung's Galaxy S25 Edge. The next version of iOS could also get a makeover, in what Bloomberg has described as Apple's biggest software shakeup in years. You can check out our iPhone 17 rumor roundup for more on what might be coming in the fall.

Will waiting for the iPhone 17 be worth it?

Overall, iPhone upgrades over the last several years have been relatively modest. And it's likely, based on rumors, that the iPhone 17 lineup will generally follow that mold. But even with more moderate changes, now's not a good time to buy a new iPhone, if you can help it. We're just about four months away from the anticipated launch of Apple's next smartphone. So if you hold on a little longer, you can snag that flashy new device when it drops, likely in September, based on previous iPhone launches.

If you buy a new iPhone 16 now, you'll probably pay full price for something that in just a few months' time will technically be outdated (the harsh reality of the annual phone release cycle). And even if you get a good deal through your carrier now, if you stick it out just a bit longer, you could potentially get an even more lucrative deal once the iPhone 17 drops and carriers ramp up their promotions.

If anything, waiting to see what the iPhone 17 has in store could at least help you confirm whether going with the newest device or an older one like the iPhone 16 or 15 is worth it. After all, if the differences are minimal, you might as well save a couple hundred dollars by choosing a previous model. And chances are -- if the iPhone 17 is anything like the last several iPhones -- no one will even be able to tell.

But what about tariffs?

One big unknown is whether tariffs will affect the price of the iPhone 17, which could sway your purchasing decisions. While smartphones and computers were given an exemption from President Donald Trump's more extensive tariffs, he recently said Apple will still have to pay a 25% tariff on iPhones made outside the US. This would almost certainly lead to a price hike.

But even without tariffs, the iPhone is due for a markup, according to CNET's Patrick Holland. "The iPhone hasn't had a price hike in five years and is due for one," he writes. "Historically, that's the longest stretch of time the company has gone without an increase." (You can check out more of his thoughts here.)

So, is it still worth waiting for the iPhone 17? It depends. If you were already planning on purchasing a new iPhone and can't wait much longer, I can understand panic-buying now. But bear in mind you'll still likely pay full price for an iPhone 16 model that will be worth less the moment the iPhone 17 drops. You might be coughing up more for an iPhone 17, but at least you'll get more bang for your buck.

So, when's the best time to buy a new iPhone?

There's not necessarily a "best" time to buy a new iPhone, since prices are pretty consistent throughout the year, but the fall is an enticing option. That's when Apple introduces its latest slate of iPhones, and when carriers are eager to attract new customers and lure in business with abundant trade-in deals and promotions. And again, even if you don't want the latest and greatest iPhone, you can at least snag an older version at a discount right after the iPhone 17 drops.

In general, we recommend upgrading to a new phone if your existing one is more than two generations old. You can typically wring more life out of your device, but if you want to stay on top of the latest features like Apple Intelligence, leveling up is the way to go. And with just a few more months left before the anticipated drop of the iPhone 17, you might as well see what fresh capabilities Apple's got up its sleeve.


Coin Geek
2 days ago
- Business
- Coin Geek
Could foundation models make RAG obsolete?
This post is a guest contribution by George Siosi Samuels, managing director at Faiā. See how Faiā is committed to staying at the forefront of technological advancements here.

Even the smartest systems can become outdated if the paradigm shifts. Reid Hoffman recently argued that it's not the end of RAG—Retrieval Augmented Generation. But for those of us watching the evolution of large language models (LLMs) through a sharper lens, the writing might already be on the wall. Just as Yahoo's exhaustive web directory model was outpaced by Google's (NASDAQ: GOOGL) probabilistic search engine, RAG may soon find itself outdated in the face of increasingly powerful foundation models. It's not about whether RAG works. It's about whether it will matter.

From Yahoo to Google: A signal from the past

To understand the trajectory we're on, we need only look back. Yahoo believed in curating the Internet. Directories. Taxonomies. Human-reviewed indexes. But Google introduced a radically different idea: don't catalog everything—just rank relevance dynamically. Instead of organizing knowledge beforehand, Google inferred what mattered most through algorithms and backlinks. That wasn't just a technological improvement—it was a shift in philosophy. A move from structure to signal. From effortful storage to elegant retrieval.

RAG, in many ways, feels like Yahoo. It's a bolted-on system that tries to enhance LLMs by grafting in 'clean,' retrievable knowledge from databases and vector stores. The goal is noble: improve the factuality and trustworthiness of artificial intelligence (AI) responses by injecting them with curated context. But what if that need disappears?

Why RAG feels like a transitional technology

RAG solves a real problem: hallucination. LLMs, especially in their earlier versions, had a tendency to fabricate facts. By adding a retrieval layer—pulling in external documents to ground the generation—RAG helped bridge the gap between generative flexibility and factual precision. But in solving one problem, it introduces others:

- Latency and complexity: RAG pipelines require orchestration between multiple components—vector databases, embedding models, retrievers, and re-rankers.
- Data management burden: Enterprises must constantly update and maintain high-quality corpora, often requiring labor-intensive cleanup and formatting.
- Hard to generalize: RAG systems perform well in narrow domains but can break or return noise when facing edge cases or unfamiliar queries.

It feels like scaffolding. Useful during construction—but not part of the finished architecture.
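To make that orchestration burden concrete, here is a minimal sketch of the kind of pipeline described above, in plain Python. The toy term-frequency "embeddings," the pass-through re-ranker, and the generate() stub are illustrative stand-ins of my own, not any particular vendor's API; a production stack would swap in a real embedding model, a vector database, a cross-encoder, and an LLM client.

```python
# Minimal RAG pipeline sketch: embed -> retrieve -> re-rank -> generate.
# Toy term-frequency "embeddings" stand in for a real embedding model;
# generate() is a stub standing in for the LLM call.
import math
from collections import Counter

CORPUS = [
    "Yahoo curated the web with human-reviewed directories.",
    "Google ranked relevance dynamically using links and algorithms.",
    "RAG grounds LLM answers by retrieving documents at query time.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Retriever: rank the corpus by similarity to the query, keep top k."""
    q = embed(query)
    return sorted(CORPUS, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def rerank(query: str, docs: list[str]) -> list[str]:
    """Pass-through re-ranker; a real pipeline would use a cross-encoder here."""
    return docs

def generate(query: str, context: list[str]) -> str:
    """Stub LLM call: shows the grounded prompt a real model would receive."""
    return f"Answer '{query}' using only: {' | '.join(context)}"

query = "What does RAG do?"
print(generate(query, rerank(query, retrieve(query))))
```

Each stage in that chain (embedding model, vector store, retriever, re-ranker, generator) is a separately deployed and maintained component, which is exactly where the latency, complexity, and data-management burden come from.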
Inference is eating Search

Recent breakthroughs in LLM capabilities suggest that we're entering a new paradigm—one where inference can increasingly replace retrieval. With the emergence of models like GPT-4o, Claude 3 Opus, and even Google Gemini Pro 2.5, we're witnessing:

- Longer context windows: These models can now ingest and reason over hundreds of pages of content without needing external retrieval mechanisms.
- Better zero-shot performance: The models are learning to generalize across vast domains without needing hand-fed examples or fine-tuned prompts.
- Higher factual accuracy: As foundation models train on more comprehensive data, their inherent 'memory' becomes more useful than brittle plug-ins or patched-on sources.

In other words, the model itself is the database. This mirrors Google's dominance over Yahoo. When Google proved you didn't need to manually catalog the Internet to find useful content, the race was over. In the same way, when LLMs can consistently generate accurate answers without needing retrieval scaffolding, the RAG era ends.
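For contrast, here is the same toy corpus handled the way a long-context model invites: no retriever, no vector store, just the whole knowledge base carried in the prompt. This is a hedged sketch under the assumption that the corpus fits in the context window; ask_llm() is a hypothetical stub, not a real client.

```python
# Long-context alternative sketch: "retrieval" collapses into prompt
# construction, and the model, not a retriever, decides what is relevant.
CORPUS = [
    "Yahoo curated the web with human-reviewed directories.",
    "Google ranked relevance dynamically using links and algorithms.",
    "RAG grounds LLM answers by retrieving documents at query time.",
]

def ask_llm(prompt: str) -> str:
    """Hypothetical long-context LLM call; swap in a real client here."""
    return f"[model response to a {len(prompt)}-char prompt]"

def answer(query: str) -> str:
    # The entire corpus rides along in the prompt; no indexing, no
    # embeddings, no re-ranking, and nothing to keep in sync.
    prompt = "\n".join(CORPUS) + f"\n\nQuestion: {query}"
    return ask_llm(prompt)

print(answer("What does RAG do?"))
```

The trade-off shifts from pipeline maintenance to token cost and context limits, which is why this approach only becomes attractive as windows grow and per-token prices fall.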
Enterprise blockchain implications

So why does this matter to the blockchain and Web3 space? Because the architecture of how we store and surface data is changing. In the past, enterprise blockchain projects focused heavily on data provenance, auditability, and structured information flows. Naturally, RAG-like systems seemed appealing—pair a blockchain ledger (structured, secure) with a retriever that could pull trusted data into AI responses. But if inference can outpace retrieval—if models become so strong that they infer trustworthiness based on deep pretraining and internal reasoning—the value of these data layer bolt-ons will shift.

It could go three ways:

1. Legacy enterprise solutions double down on RAG-like hybrids, bolting AI onto databases and chains for compliance reasons.
2. Next-gen startups skip RAG entirely, trusting LLMs' inference power and layering blockchain only for verifiability, not retrieval.
3. A new form of 'self-attesting' data emerges, where models generate and verify their own responses using on-chain signals—but without traditional RAG scaffolding.

Blockchain, in this context, becomes a reference point, not a library. The foundation model becomes both the interface and the reasoner.

Is clean data still necessary?

One of the assumptions keeping RAG alive is this: clean data = better output. That's partially true. But it's also a bit of an old-world assumption. Think about Gmail, Google Drive, or even Google Photos. You don't have to organize these meticulously. You just type, and Google finds. The same is starting to happen with LLMs. You no longer need perfectly labeled, indexed corpora. You just need volume and diverse context—and the model figures it out.

Clean data helps, yes. However, the new AI paradigm values signal density more than signal purity. The cleaner your data, the less your model has to guess. But the better your model, the more it can intuit even from messy, unstructured information. That's a core shift—and one that should change how enterprises think about knowledge management and blockchain-based storage.

RAG's final role: A stepping stone, not a standard

So, where does this leave RAG? Likely as a valuable bridge—but not the destination. We'll probably still see RAG-like systems in regulated industries and legacy enterprise stacks for a while. But betting the future of AI on retrieval is like betting the future of maps on phonebooks. The terrain is changing. Foundation models won't need retrieval in the way we think about it today. Their training and inference engines will absorb and transmute information in ways that feel alien to traditional IT logic. Blockchain will still play a role—especially in authentication and timestamping—but less as a knowledge base, and more as a consensus layer that LLMs can reference like a cryptographic compass.

Conclusion: The search for simplicity

RAG helped patch early AI flaws. But patchwork can't match the architecture. The best technologies don't just solve problems—they disappear. Google didn't ask users to understand PageRank. It simply worked. In the same way, the most powerful LLMs won't require RAG—they'll simply respond with clarity and resonance. And that's the signal we should be tracking.

Watch: IEEE COINS Conference: Intersection of blockchain, AI, IoT & IPv6 technologies