
AI is coming to the NFL, and it could transform the game
In 1968, Stanley Kubrick released '2001: A Space Odyssey' and creeped out an entire country with the idea of a future controlled by artificial intelligence. In the summer of 2025, Zac Robinson is facing the idea of watching football and discussing strategy with a computer, and he's a little creeped out, too.
The 38-year-old Atlanta Falcons offensive coordinator worked as an analyst for Pro Football Focus before starting his coaching career in 2019, a stint that convinced him of the value and potential of advanced analytics. But there's a wide gulf between the math used to optimize fourth-down decisions and a voice-enabled AI agent telling you to look out for the weakside linebacker while you're sitting alone in your office on a Tuesday night.
'I don't know,' Robinson said, considering the scenario. 'I'm a little scared.'
He and other NFL coaches are going to have to get comfortable crossing that water soon. Instead of HAL 9000, think of it as the Bill Walsh 3000, which could be assigned to watch the rotations of the secondary while a human coach focuses on the front seven.
'I'd have to see what that looks like,' Robinson said. '(A computer) barking at me, I might get a little frustrated, but if it ends up being a cool tool, that'll be interesting.'
Ryan Paganetti got his job in part because of artificial intelligence. He was hired by Las Vegas Raiders head coach Pete Carroll in March as the team's 'Head Coach Research Specialist,' but the job may be better understood as AI coordinator.
'I don't think when I was hired the idea was, 'This is our AI guy,' but there is no doubt whatsoever that I am going to be using AI every single day,' he said. 'And probably in increasingly larger amounts every month that goes by.'
In a league in which teams are constantly looking for an edge, the next big one won't be coming through the draft or free agency, Paganetti believes, but from artificial intelligence tools that are on the verge of transforming how coaches think about the game and do their jobs … and maybe even which coaches still have those jobs in a decade.
'It almost might be a blockbuster moment where some coaches, their roles are replaced entirely,' Paganetti said. 'That's an issue in all sorts of industries where AI is just better and more accurate. I think that is going to happen with the football industry, to some degree.
'I feel pretty confident saying some team is going to win a Super Bowl in the next few years utilizing AI at a very high rate, significantly higher than it has ever been used before,' he said. 'It's really an opportunity to differentiate yourself from a team that might have a more talented roster or better coaches or whatnot. There is going to be more and more separation with teams that are bought in.'
Carroll is fully bought in. The NFL's oldest head coach is maybe its biggest believer in its youngest technology. 'Everything you can think of is possible right now,' the 73-year-old said. His early adopter status isn't surprising considering his history, which includes head coaching stints with the New York Jets, New England Patriots, Seattle Seahawks and at the University of Southern California, where last year he taught a class called 'The Game of Life.' As part of that class, Carroll spoke with author and new-age guru Deepak Chopra.
'Check this out,' Carroll said, 'he talked about AI giving him the opportunity to interview himself, talking to himself through AI so he was actually questioning his own person and being answered by his own person in return. Some of it does feel like science fiction, I get that, but AI is around the corner for us.'
Nearly three decades ago, IBM began developing the supercomputer Deep Blue to face off against world chess champion Garry Kasparov. Kasparov won his first match against the machine in 1996, but Deep Blue won the rematch the next year, and humans haven't provided a chess challenge to computers since. Computers have since mastered the ancient Chinese board game Go, which involves exponentially more possible moves than chess. Football presents a much tougher computer problem than chess or Go for myriad reasons, but many experts agree that some of the analytical functions done by human coaches could be done better, or at least more efficiently, by artificial intelligence, and the current rate of improvement in the industry suggests that moment might not be far away.
While the world ponders a future where computers can generate their own decisions, the technology still is almost entirely machine learning and brute computing power rather than human-like intelligence. 'Think of machine learning as a technique for achieving artificial intelligence,' said John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT.
The large language models that power most modern AI tools 'don't know how to watch football yet, but I think with some work, they can be taught to watch football,' said Udit Ranasaria, a senior researcher at SumerSports, one of a handful of companies developing artificial intelligence tools with the potential to reshape professional football. 'We can get to a place where we have something like ChatGPT that understands what's happening in the NFL.'
It probably won't take long, said Guttag, who leads MIT's Computer Science and Artificial Intelligence Laboratory Data Driven Inference Group and has co-authored several papers about the uses of machine learning in the NBA and Major League Baseball. In 2020, he was the thesis supervisor for a 55-page dissertation written by Udgam Goyal titled 'Leveraging Machine Learning to Predict Playcalling Tendencies in the NFL.'
'A big branch of artificial intelligence from almost the beginning has been computer vision, trying to get computers to see things and figure out what is in the image,' Guttag said. But football is a more complex problem for computer vision than basketball, baseball or soccer because of the proximity of players to the line of scrimmage and the variance in personnel.
'Fourth-and-1 with Mike Vick and Alge Crumpler looks a lot different than fourth-and-1 with Kirk Cousins and Kyle Pitts,' said Omar Ajmeri, the CEO and co-founder of Slants, which uses machine learning to pull scouting information from football film.
Current artificial intelligence is capable of 'watching' game film from two teams, formulating a game plan and printing out call sheets for offensive and defensive coordinators, said Vishakh Sandwar, one half of the winning team at this year's Big Data Bowl, which is sponsored by the NFL. 'It's just a matter of the quality at this point,' he said.
Sandwar and fellow NYU alum Smit Bajaj's winning project created an algorithm that identifies coverages based on the computer's 'visual' analysis of defenders. The model, built with technology developed by Sumer, achieved 89 percent accuracy based only on pre-snap alignments. It adjusts in real time as defensive players move and can flag which defenders are the worst offenders at giving away coverages before the snap. It also allows coaches to create custom looks by moving defenders on a digital whiteboard.
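The core idea behind that kind of coverage classifier can be sketched in a few lines. Everything below is invented for illustration — the feature vectors, the labels and the nearest-centroid approach are stand-ins; the actual Big Data Bowl model is trained on real tracking data with a far richer architecture.

```python
# Toy sketch: guessing a coverage shell from pre-snap defender alignments.
# Features and labels are invented for illustration; a real model would
# use full tracking data and a learned classifier, not raw centroids.
from math import dist

# Each sample: flattened (depth, width) pairs for three deep defenders,
# labeled with the coverage shell they ended up playing.
TRAIN = [
    ([12.0, 0.0, 10.0, -8.0, 10.0, 8.0], "cover-1"),   # single high safety
    ([11.5, 1.0, 9.0, -7.0, 9.5, 7.5], "cover-1"),
    ([13.0, -6.0, 13.0, 6.0, 5.0, 0.0], "cover-2"),    # two high safeties
    ([12.5, -5.5, 12.0, 6.5, 4.5, 0.5], "cover-2"),
]

def centroids(samples):
    """Average the feature vectors for each coverage label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(features, cents):
    """Nearest-centroid guess at the coverage shell."""
    return min(cents, key=lambda label: dist(features, cents[label]))

cents = centroids(TRAIN)
print(predict([12.2, 0.5, 9.8, -7.5, 10.1, 7.8], cents))  # prints "cover-1"
```

The real systems described here do much more — adjusting predictions as defenders move and scoring individual players for how much they leak — but the pattern is the same: turn alignments into numbers, then let the model find which numbers travel with which coverages.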
Artificial intelligence 'is very good at piecing together relationships in very, very high-dimensional spaces,' Bajaj said. 'With languages, it's able to piece together and understand that based on the entire history of the internet, this is the word that is likely to come next. It's increasingly being used, I would assume, in NFL buildings to piece together player-to-player relationships as well.'
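Bajaj's next-word analogy can be made concrete with a toy counting model. The 'corpus' below is invented, and real LLMs learn these statistics with neural networks over vastly more text, but the underlying question — what tends to follow what — is the same.

```python
# Minimal illustration of next-word prediction as counting: which word
# most often followed a given word? The corpus is invented play-call
# jargon; real LLMs model this with neural networks, not raw counts.
from collections import Counter, defaultdict

corpus = ("cover two shell cover one robber cover two invert "
          "cover two shell cover one blitz").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("cover"))  # "two" follows "cover" most often here
```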
'Over time, it will get better and better,' Guttag said. 'And what you'll do is say, 'Here are all the series that led to first downs. Here are all the series that didn't lead to first downs. What are the important differences?' — without hypothesizing before. You'll just let the AI machine learning look at all that data and say, 'Here are some interesting differences.' One of the great things about machine learning is it finds things you didn't know were there.'
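The workflow Guttag describes — split the series by outcome, then ask what differs — can be illustrated with a crude mean comparison. The features and numbers below are invented, and a real model would normalize scales and use proper feature-importance measures rather than raw gaps between group averages.

```python
# Toy sketch of the workflow Guttag describes: label drives by outcome,
# then rank which features differ most between the two groups. Feature
# names and values are invented for illustration.
FEATURES = ["play_action_rate", "avg_depth_of_target", "motion_rate"]

converted = [          # drives that produced a first down
    [0.40, 9.0, 0.60],
    [0.35, 8.0, 0.55],
]
stalled = [            # drives that did not
    [0.10, 5.0, 0.50],
    [0.15, 6.0, 0.45],
]

def mean(col):
    return sum(col) / len(col)

def biggest_differences(group_a, group_b, names):
    """Rank features by the gap between group means -- a crude stand-in
    for what a real model's feature-importance scores would surface."""
    gaps = []
    for i, name in enumerate(names):
        a = mean([row[i] for row in group_a])
        b = mean([row[i] for row in group_b])
        gaps.append((abs(a - b), name))
    return [name for _, name in sorted(gaps, reverse=True)]

print(biggest_differences(converted, stalled, FEATURES))
```

The appeal for coaches is the last step Guttag mentions: nobody had to hypothesize in advance which features mattered; the comparison surfaces them.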
Bajaj spoke to The Athletic for this article in March. By May, he had been hired by the Philadelphia Eagles (Sandwar was hired by Sumer this spring). Before joining the Eagles, Bajaj was interning in the Philadelphia Phillies' analytics department, which has more than 35 employees. In the NFL, only three teams have more than six employees in their departments, according to research by ESPN's Seth Walder. Fourteen have three or fewer, and none have more than the Cleveland Browns' 10.
'I do know there are opportunities, but it requires a real commitment,' Guttag said. 'If you're going to do this, it's going to take premier talent. We're not going to be able to take an ex-player and say, 'Go run this department.' You look at what Google pays their top machine-learning people. It's not NFL player salaries, but it's not NFL office salaries, either.'
After Ajmeri presented at MIT's Sloan Sports Analytics Conference in 2018, he was asked to meet with NFL teams in 'really far corners of the conference center,' even across the street at a Starbucks. The upcoming arms race in artificial intelligence hiring will stay in the shadows, predicted Paganetti, who declined to discuss any specifics about how the Raiders will use the upcoming advancements.
An artificial intelligence agent could assist in play calling during games, but NFL rules ban that sort of assistance from kickoff until the clock hits zero. During the week, everything in the AI realm is in bounds, although the league continues to monitor developments, at least the ones it knows about.
'There's still an extreme level of secrecy,' Paganetti said. 'Even people who work in analytics have very little idea what people working in analytics for other teams do sometimes because it's considered company secrets. We know what the scouts do on the other team: They scout. We know what the coaches do on the other teams: They coach. But when it comes to the actual contribution of the analytics department of another team, it's really open-ended.'
Atlanta passing game coordinator T.J. Yates, like coaches in many buildings in the NFL, already works with Telemetry Sports for computer-generated coaching aids. The son of an engineer and the Falcons' coaching staff's biggest trumpeter of technological possibilities, Yates knows other advancements are looming.
'If you're not using it, it's dumb, because it's there for us,' Yates said. 'The days of sitting there grinding until two or three o'clock in the morning, there are way too many available opportunities to cut that out and be efficient and go home and get some sleep and have a sharper mind and have good energy for your players the next day.'
SumerSports' technology isn't built to replace coaches, just to make their jobs easier, CEO Lorrissa Horton said: 'Our question is 'How can we help them be more efficient?'' Former Falcons and Patriots executive Thomas Dimitroff is the director of football operations at Sumer and has led the organization's presentations to coaches and executives around the league.
'Everyone is on the edge of their seats during those meetings,' Dimitroff said. 'They are salivating at the idea of 'How can I be able to do this?' Coaches would welcome nothing more than to be able to do these things faster and more effectively than they are doing them now.'
The key, he said, will be making sure the technology is easily accessible.
'There are a lot of very, very smart coaches,' he said, 'but oftentimes they don't have the time in their schedule to learn what Lorrissa's group can teach so they get a little antsy with it and say, 'Screw it, I'll get to that later.''
Tennessee Titans head coach Brian Callahan believes artificial intelligence acceptance around the league will vary.
'Anytime you are talking to a football coach who has done one thing for a long time, it takes time for that to take hold, but I do think there is a much more open mind to all of those things: data, analytics, new processes,' the 41-year-old said. 'Yeah, there will be some pushback in some spots, but there are a lot of other spots where guys will look at it as something that can really help.'
Guttag is less optimistic about buy-in, pointing out the resistance coaches showed to accepting the math behind fourth-down decision-making, maybe the most rudimentary form of machine learning introduced to the game.
'Anyone who knew any math at all knew they were behaving stupidly, and yet they continued to do it,' he said. 'It's kind of remarkable.'
The next wave of artificial intelligence will make the fourth-down bot look like an abacus. The NFL already is using an AI application called Digital Athlete to help teams predict injuries, but the upcoming coaching applications are where NFL fans are most likely to see results.
'With things like 'What play should you run against this look? What blitz should you run against this alignment?' — those are areas where AI can really move the needle or come up with ideas that you might otherwise never have thought of,' Paganetti said.
This season, the league will implement Sony's Hawk-Eye system to measure first downs with computer vision, which means six 8K cameras will be used in every stadium. If the footage from those cameras is someday fed into AI applications, it could further accelerate the pace of advancements.
Dimitroff estimates that 75 percent of NFL teams are using some sort of artificial intelligence in their weekly preparation but that most are using it only at the most basic level. Carroll, at least, plans to be on the cutting edge soon.
'It's just such a wide-open domain to kind of figure things out and do things new, take advantage and utilize everything you can think of,' Carroll said. 'That's something I like, man. If you're not curious, you're not growing. The last thing I'm going to do is ignore AI.'
(Illustration: Demetrius Robinson / The Athletic; Photo: Scott Winters / Icon Sportswire via Getty Images)