
SGNL Launches MCP Gateway to Enable Secure AI Adoption for Enterprise Workforces
The release puts identity-first security policies in the path of every AI interaction, automatically blocking unauthorized actions while maintaining business velocity.
The Model Context Protocol (MCP) is revolutionizing how AI agents interact with internal and external systems, enabling them to perform tasks, work with data, and trigger workflows across the enterprise. But without robust access controls, these agents can operate unchecked, risking over-permissioned access and unintended data exposure. As a result, many enterprises have been hesitant to approve AI tools for their workforce.
SGNL's MCP Gateway changes that. It brings centralized, dynamic authorization to every MCP server in the enterprise, governing access based not just on what an agent wants to do, but on whom it represents, where the request is coming from, and why it is being made.
"SGNL's MCP Gateway delivers more than just a technical breakthrough," said Stephen Ward, co-founder of Brightmind Partners, former Home Depot CISO, and ex-Secret Service cybersecurity leader. "It's a strategic game-changer that gives enterprises the levers to align AI automation with business policy in real time, bridging the critical gap between innovation and control."
Eliminating blind access in the age of autonomous IT
AI agents are entering enterprise workflows faster than security teams can respond. From summarizing sensitive data to triggering downstream actions, they don't inherently understand risk, yet they operate at machine speed across dynamic contexts where traditional boundaries no longer apply.
This creates a fundamental mismatch. Legacy role-based access control was designed for predictable human behavior, not autonomous systems making thousands of decisions per minute. Enterprises can't simply "IAM harder" with existing tooling because static RBAC becomes exponentially more dangerous when applied to agents that never sleep, never second-guess themselves, and correlate data in ways humans cannot.
The result is blind access at scale, where broadly privileged roles and brittle permission matrices compound risk with every agent interaction.
The SGNL MCP Gateway addresses this head-on with the following capabilities, sketched briefly after the list:
Real-time policy enforcement between MCP clients and servers
Continuous evaluation of identity, device compliance, and request context
Default-deny enforcement that grants access only to approved MCP servers, and only when the request is explicitly justified
Centralized, enterprise-wide MCP server registry with visibility into every AI agent interaction
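To make that flow concrete, here is a minimal sketch of how a default-deny, context-aware gateway check of this kind could look. It is illustrative only, not SGNL's implementation or API; every identifier in it (AccessRequest, APPROVED_SERVERS, evaluate_request, the server names) is hypothetical.

    # Illustrative sketch only -- not SGNL's implementation or API.
    from dataclasses import dataclass

    # Enterprise-wide registry of approved MCP servers. Default deny:
    # anything not listed here is rejected outright.
    APPROVED_SERVERS = {"salesforce-mcp", "jira-mcp"}

    @dataclass
    class AccessRequest:
        server: str             # target MCP server
        tool: str               # tool or action the agent wants to invoke
        on_behalf_of: str       # the human identity the agent represents
        device_compliant: bool  # posture signal from device management
        justification: str      # why the request is being made

    def evaluate_request(req: AccessRequest) -> bool:
        """Allow only when every contextual check passes (default deny)."""
        if req.server not in APPROVED_SERVERS:
            return False  # unregistered server: blocked
        if not req.device_compliant:
            return False  # non-compliant device: blocked
        if not req.justification.strip():
            return False  # no stated business justification: blocked
        # A real gateway would also evaluate the user's role, the data's
        # sensitivity, and session context against centrally managed policy.
        return True

A production gateway sits inline between MCP clients and servers and re-runs checks like these on every call, so a decision can change the moment identity, device posture, or request context changes.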
"The Gateway isn't just a feature—it's foundational," said Scott Kriz, CEO and co-founder of SGNL. "With it, we're giving customers the ability to harness AI's full potential without compromising on security and control. Our customers can now confidently adopt agent-based workflows knowing that access decisions are dynamic, contextual, and enforceable at every step."
A real-world example: stopping data loss before it happens
In a common use case, an account executive attempts to use an AI agent to summarize Salesforce data from a non-compliant laptop. Without SGNL, the agent would retrieve and expose potentially sensitive customer data. With SGNL's MCP Gateway in place, contextual policy enforcement blocks the request—ensuring that only secure, compliant actions are permitted.
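Expressed against the hypothetical sketch above (again, made-up identifiers, not SGNL's API), the scenario reduces to a single failed check:

    # The agent asks to summarize Salesforce data from a laptop that fails
    # the compliance check, so the gateway denies the call before any data
    # is retrieved.
    request = AccessRequest(
        server="salesforce-mcp",
        tool="summarize_opportunities",
        on_behalf_of="ae@example.com",
        device_compliant=False,   # the laptop is out of compliance
        justification="Prepare quarterly business review summary",
    )
    assert evaluate_request(request) is False  # blocked at the gateway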
This is just one of countless scenarios where real-time governance makes the difference between acceleration and exposure.
See SGNL's MCP Gateway in action
Request a demo at sgnl.ai/mcp to see how SGNL's MCP Gateway governs AI agent access for enterprise workforces.
About SGNL
SGNL's modern Privileged Identity Management is redefining identity-first security for the enterprise. By decoupling credentials from identity and enabling real-time, context-aware access decisions, SGNL empowers organizations to reduce risk, streamline operations, and scale securely. Whether it's humans or AI agents, SGNL keeps your critical systems and sensitive data secure.
That's why Fortune 500 companies are turning to SGNL to simplify their identity access programs and secure critical systems. Learn more at sgnl.ai.