Latest news with #CTM

Associated Press
27-05-2025
- Business
- Associated Press
Castellum, Inc. Announces Information Concerning 2025 Annual Stockholders Meeting
VIENNA, Va., May 27, 2025 (GLOBE NEWSWIRE) -- Castellum, Inc. (NYSE-American: CTM) ('Castellum' or 'CTM'), a cybersecurity, electronic warfare, and software engineering services company focused on the federal government, reminds stockholders that its 2025 annual meeting of stockholders ('2025 Annual Meeting') will be held on Wednesday, May 28, 2025 at 10:00 a.m. (Eastern Time) and that stockholders of record at the close of business on March 21, 2025, will be entitled to notice of, and to vote at, the 2025 Annual Meeting and any adjournment or postponement thereof.

The 2025 Annual Meeting will be held at the offices of Pillsbury Winthrop Shaw Pittman LLP, 7900 Tysons One Place, Suite 500, Tysons, VA 22102, and will be conducted in person and virtually via live audio conference call. Stockholders interested in accessing the live audio conference call may dial 1 (800) 715-9871 or 1 (646) 307-1963. The conference identification number is 9842123.

After adjourning the 2025 Annual Meeting, members of CTM's management will conduct an informal presentation followed by a question-and-answer session. A copy of the informal presentation is posted to the Company's website under the 'Investor' tab at Investors are encouraged to download a copy of the presentation on Tuesday morning when available, as it will not be presented live.

About Castellum, Inc. (NYSE-American: CTM): Castellum, Inc. (NYSE-American: CTM) is a cybersecurity, electronic warfare, and software engineering services company focused on the federal government.

Cautionary Statement Concerning Forward-Looking Statements: This release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended.
These forward-looking statements represent the Company's expectations or beliefs concerning future events and can generally be identified by the use of statements that include words such as 'estimate,' 'project,' 'believe,' 'anticipate,' 'shooting to,' 'intend,' 'plan,' 'foresee,' 'likely,' 'will,' 'would,' 'appears,' 'goal,' 'target' or similar words or phrases. Forward-looking statements include, but are not limited to, statements regarding the Company's expectations for revenue growth and new customer opportunities, improvements to cost structure, and profitability.

These forward-looking statements are subject to risks, uncertainties, and other factors, many of which are outside of the Company's control, that could cause actual results to differ materially from the results expressed or implied in the forward-looking statements, including, among others: the Company's ability to compete against new and existing competitors; its ability to effectively integrate and grow its acquired companies; its ability to identify additional acquisition targets and close additional acquisitions; the impact on the Company's revenue due to a delay in the U.S. Congress approving a federal budget, operating under a prolonged continuing resolution, government shutdown, or breach of the debt ceiling, as well as the imposition by the U.S. government of sequestration in the absence of an approved budget; the ability of the U.S. federal government to unilaterally cancel a contract with or without cause, and more specifically, the potential impact of the U.S. DOGE Service Temporary Organization on government spending and terminating contracts for convenience.
For a more detailed description of these and other risk factors, please refer to the Company's Annual Report on Form 10-K and its Quarterly Reports on Form 10-Q and other filings with the Securities and Exchange Commission ('SEC') which can be viewed at All forward-looking statements are inherently uncertain, based on current expectations and assumptions concerning future events or future performance of the Company. Readers are cautioned not to place undue reliance on these forward-looking statements, which are only predictions and speak only as of the date hereof. The Company expressly disclaims any intent or obligation to update any of the forward-looking statements made in this release or in any of its SEC filings except as may be otherwise stated by the Company. Contact: Glen Ives President and Chief Executive Officer Phone: (703) 752-6157 [email protected] A photo accompanying this announcement is available at


Business Wire
22-05-2025
- Business
- Business Wire
Emburse Invests in Travel and Expense Partner Ecosystem with Launch of 140-Strong Global Community
NASHVILLE, Tenn.--(BUSINESS WIRE)-- Emburse, whose innovative travel and expense solutions power forward-thinking organizations, today announced the launch of the Emburse Partner Community with 140 global partners. The move signals Emburse's commitment to partnering with financial services organizations, travel management companies, SaaS platforms, enterprise resource planning (ERP) ecosystems, business marketplaces, integrators and resellers to transform the way organizations — including its over 20,000 customers — spend.

'At Emburse, we believe real transformation doesn't occur in silos — it happens in communities. By investing in our partners, we're not just expanding our reach, we're growing our impact. Together, we're building a unified travel and expense infrastructure that gives finance professionals smarter tools before they spend and intelligent validation after — without compromising flexibility or control,' says Michele Shepard, Chief Revenue Officer of Emburse.

By integrating with partners, Emburse is eliminating common travel and expense pain points like duplicate submissions, confusing approval processes, and disconnected systems, turning those traditionally frustrating moments into intelligent, seamless experiences. This close collaboration is essential for delivering true Expense Intelligence, a proactive, AI-powered approach to managing corporate spend.

The Emburse Partner Community signals a significant investment in tools, resources and incentives for partners — designed to unlock growth and accelerate the impact of Expense Intelligence. Partners benefit from lead sharing, marketing development funds, enablement materials, events, and training and certifications. Partners also have access to a partner portal with collaboration tools, real-time reporting, and an on-demand content library to stay current on the latest AI-driven product releases from Emburse.
A new three-tiered structure allows each partnership to be designed according to the partner's specific needs.

Darren Toohey, Chief Sales & Customer Officer at CTM, said: "At CTM, we believe strong partnerships are the foundation of transformative travel and expense solutions. The Emburse Partner Community reflects a shared commitment to innovation, transparency, and global customer success. We're proud to be part of a collaborative ecosystem that helps businesses around the world travel smarter and manage spend more effectively."

Ron Botchan, Co-Founder, Datanamic, said: "Datanamic is thrilled to join the Emburse Partner Community, helping customers successfully connect Emburse to their systems. Additionally, the Emburse Partner Community is full of great companies that help accelerate the growth of our business."

Pam Peddler, Regional VP – Northeast, Public Sector, OMNIA Partners, said: "At OMNIA Partners, we are thrilled to partner with Emburse, whose innovative approach to expense and spend management aligns seamlessly with our mission to deliver value and efficiency to our members. Emburse's commitment to customer-centric solutions and continuous improvement makes them an ideal partner. We're especially excited to be part of the Emburse Partner Community, where collaboration and shared vision drive meaningful impact across the organizations we serve."

Mike May, Vice President, Technology Partner Program, at UKG, said: "Partnering with Emburse allows us to extend our reach and provide even greater value to our customers. Together, we are well-positioned to help organizations navigate the complexities of today's business environment and achieve their strategic goals."

"We're doubling down on our commitment to partners because they're not an add-on to our strategy — they're a force multiplier. This new community and tiering structure reflects our belief that growth should be mutual, transparent, and built on real alignment. When our partners win, our customers thrive," said Greg Leven, VP, Global Head of Strategic Partnerships, Emburse.

The Emburse Partner Community was launched at the inaugural Emburse Partner Summit, held concurrently with Emburse in Motion, Emburse's user conference, and made possible in part through the generous support of our partner sponsors: Amadeus Cytric, Amex GBT Egencia, AmTrav, AppZen, Convera, Corporate Travel Management (CTM), and Mastercard.

About Emburse

Emburse delivers Expense Intelligence—transforming reactive expense management into forward-thinking financial confidence. Expense Intelligence harnesses AI to orchestrate corporate spend across travel booking, procurement, reimbursements, and payments, embedding dynamic policy controls and predictive insights directly into workflows. This real-time approach empowers organizations with the agility to adapt, control risks, and strategically optimize spend. Trusted globally by more than 12 million finance leaders, travel managers, and professionals, Emburse serves over 20,000 organizations in 120 countries—including Global 2000 enterprises, SMBs, public sector agencies, and nonprofits. By proactively managing and accurately validating spend, Emburse ensures robust financial governance, enhanced compliance, and unsurpassed visibility into spend behaviors—all while dramatically streamlining the process for every employee. At Emburse, Expense Intelligence is more than a feature—it's a framework for transformation, reshaping the role of finance teams from administrators to strategic drivers of organizational success. To learn more about Emburse, visit and check out our social channels @emburse.


Business Mayor
13-05-2025
- Science
- Business Mayor
Sakana introduces new AI architecture, ‘Continuous Thought Machines' to make models reason with less guidance — like human brains
Tokyo-based artificial intelligence startup Sakana, co-founded by former top Google AI scientists including Llion Jones and David Ha, has unveiled a new type of AI model architecture called Continuous Thought Machines (CTM).

CTMs are designed to usher in a new era of AI language models that will be more flexible and able to handle a wider range of cognitive tasks — such as solving complex mazes or navigation tasks without positional cues or pre-existing spatial embeddings — moving them closer to the way human beings reason through unfamiliar problems.

Rather than relying on fixed, parallel layers that process inputs all at once — as Transformer models do — CTMs unfold computation over steps within each input/output unit, known as an artificial 'neuron.' Each neuron in the model retains a short history of its previous activity and uses that memory to decide when to activate again. This added internal state allows CTMs to adjust the depth and duration of their reasoning dynamically, depending on the complexity of the task. As such, each neuron is far more informationally dense and complex than in a typical Transformer model. The startup has posted a paper describing its work on the open-access preprint server arXiv, along with a microsite and a GitHub repository.

Most modern large language models (LLMs) are still fundamentally based upon the 'Transformer' architecture outlined in the seminal 2017 paper from Google Brain researchers entitled 'Attention Is All You Need.' These models use parallelized, fixed-depth layers of artificial neurons to process inputs in a single pass — whether those inputs come from user prompts at inference time or labeled data during training. By contrast, CTMs allow each artificial neuron to operate on its own internal timeline, making activation decisions based on a short-term memory of its previous states.
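The per-neuron memory and internal timeline described above can be sketched in a few lines of NumPy. This is a toy illustration under assumed dynamics: the weight shapes, the tanh update rule, and the convergence threshold are all inventions for this sketch, not Sakana's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

HISTORY_LEN = 5   # how many past activations each neuron remembers
MAX_TICKS = 20    # upper bound on internal reasoning steps
THRESHOLD = 0.05  # stop once activations change less than this per tick

def ctm_style_unfold(x, w_in, w_rec, w_hist):
    """Unfold computation over internal 'ticks': each neuron's next
    activation depends on a short history of its own previous activity,
    and the loop stops early once the state settles."""
    n = w_rec.shape[0]
    history = np.zeros((n, HISTORY_LEN))  # per-neuron short-term memory
    state = np.tanh(w_in @ x)
    for tick in range(1, MAX_TICKS + 1):
        # each neuron summarises its own history and mixes it back in
        hist_signal = (history * w_hist).sum(axis=1)
        new_state = np.tanh(w_in @ x + w_rec @ state + hist_signal)
        # roll the history buffer: drop the oldest entry, append the newest
        history = np.concatenate([history[:, 1:], new_state[:, None]], axis=1)
        if np.max(np.abs(new_state - state)) < THRESHOLD:
            return new_state, tick  # settled: stop reasoning early
        state = new_state
    return state, MAX_TICKS

n_in, n = 4, 8
w_in = rng.normal(scale=0.3, size=(n, n_in))
w_rec = rng.normal(scale=0.3, size=(n, n))
w_hist = rng.normal(scale=0.1, size=(n, HISTORY_LEN))

trivial = np.zeros(n_in)        # a trivial input settles immediately
richer = rng.normal(size=n_in)  # a richer input may take more ticks
_, ticks_trivial = ctm_style_unfold(trivial, w_in, w_rec, w_hist)
_, ticks_richer = ctm_style_unfold(richer, w_in, w_rec, w_hist)
print(ticks_trivial, ticks_richer)
```

The point of the sketch is the control flow: reasoning depth becomes a runtime quantity that depends on the input, rather than a fixed layer count baked into the architecture.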
These decisions unfold over internal steps known as 'ticks,' enabling the model to adjust its reasoning duration dynamically. This time-based architecture allows CTMs to reason progressively, adjusting how long and how deeply they compute — taking a different number of ticks based on the complexity of the input. Neuron-specific memory and synchronization help determine when computation should continue — or stop. The number of ticks varies with the input, and can differ even between identical inputs, because each neuron decides how many ticks to undergo before providing an output (or not providing one at all).

This represents both a technical and philosophical departure from conventional deep learning, moving toward a more biologically grounded model. Sakana has framed CTMs as a step toward more brain-like intelligence — systems that adapt over time, process information flexibly, and engage in deeper internal computation when needed. Sakana's goal is 'to eventually achieve levels of competency that rival or surpass human brains.'

The CTM is built around two key mechanisms. First, each neuron in the model maintains a short 'history' or working memory of when it activated and why, and uses this history to decide when to fire next. Second, neural synchronization — how and when groups of a model's artificial neurons 'fire,' or process information together — is allowed to happen organically. Groups of neurons decide when to fire together based on internal alignment, not external instructions or reward shaping. These synchronization events are used to modulate attention and produce outputs — that is, attention is directed toward those areas where more neurons are firing. The model isn't just processing data; it's timing its thinking to match the complexity of the task.
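The second mechanism, synchronization steering attention, can also be caricatured numerically. In this sketch, synchronization is stood in for by pairwise correlation of neuron activation histories, and attention is a softmax over each neuron's average alignment with the rest of the population. Both choices are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy activation traces: one row per neuron, one column per tick.
traces = rng.normal(size=(6, 10))
# Make neurons 2 and 3 fire nearly in lockstep.
traces[3] = traces[2] + 0.05 * rng.normal(size=10)

def synchronization_matrix(traces):
    """Pairwise correlation of activation histories: a crude stand-in
    for 'groups of neurons firing together'."""
    z = traces - traces.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True) + 1e-8
    return (z @ z.T) / traces.shape[1]

def attention_from_sync(sync):
    """Softmax over each neuron's mean alignment with the others, so
    attention tends toward the most synchronised neurons."""
    n = sync.shape[0]
    score = (sync.sum(axis=1) - np.diag(sync)) / (n - 1)
    e = np.exp(score - score.max())
    return e / e.sum()

sync = synchronization_matrix(traces)
attn = attention_from_sync(sync)
print(np.round(attn, 3))
```

Because no external supervision defines the groups, whatever alignment emerges in the traces is what shapes the attention weights, which mirrors the organic, reward-free synchronization the article describes.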
Together, these mechanisms let CTMs reduce computational load on simpler tasks while applying deeper, prolonged reasoning where needed. In demonstrations ranging from image classification and 2D maze solving to reinforcement learning, CTMs have shown both interpretability and adaptability. Their internal 'thought' steps allow researchers to observe how decisions form over time—a level of transparency rarely seen in other model families.

Sakana AI's Continuous Thought Machine is not designed to chase leaderboard-topping benchmark scores, but its early results indicate that its biologically inspired design does not come at the cost of practical capability. On the widely used ImageNet-1K benchmark, the CTM achieved 72.47% top-1 and 89.89% top-5 accuracy. While this falls short of state-of-the-art transformer models like ViT or ConvNeXt, it remains competitive—especially considering that the CTM architecture is fundamentally different and was not optimized solely for performance.

What stands out more are CTM's behaviors in sequential and adaptive tasks. In maze-solving scenarios, the model produces step-by-step directional outputs from raw images—without using positional embeddings, which are typically essential in transformer models. Visual attention traces reveal that CTMs often attend to image regions in a human-like sequence, such as identifying facial features from eyes to nose to mouth.

The model also exhibits strong calibration: its confidence estimates closely align with actual prediction accuracy. Unlike most models that require temperature scaling or post-hoc adjustments, CTMs improve calibration naturally by averaging predictions over time as their internal reasoning unfolds. This blend of sequential reasoning, natural calibration, and interpretability offers a valuable trade-off for applications where trust and traceability matter as much as raw accuracy.
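The calibration behaviour described here, averaging predictions over ticks rather than trusting only the final step, can be seen with synthetic per-tick logits. The logits below are made up for the demonstration; only the averaging trick mirrors the CTM description.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Synthetic per-tick logits for one input: early ticks are noisy,
# later ticks increasingly favour the true class.
ticks, true_class = 12, 1
logits_per_tick = [rng.normal(size=3) + np.eye(3)[true_class] * (t / 4)
                   for t in range(ticks)]

# Naive readout: trust only the final tick's prediction.
final_probs = softmax(logits_per_tick[-1])
# CTM-style readout: average the per-tick predictions as reasoning
# unfolds, keeping the uncertain early ticks in view and so tempering
# overconfident final-step probabilities.
avg_probs = np.mean([softmax(z) for z in logits_per_tick], axis=0)

print(np.round(final_probs, 3))
print(np.round(avg_probs, 3))
```

Averaging over the reasoning trajectory is a built-in form of the ensembling that post-hoc methods like temperature scaling try to approximate after training, which is why no separate calibration step is needed.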
While CTMs show substantial promise, the architecture is still experimental and not yet optimized for commercial deployment. Sakana AI presents the model as a platform for further research and exploration rather than a plug-and-play enterprise solution.

Training CTMs currently demands more resources than standard transformer models. Their dynamic temporal structure expands the state space, and careful tuning is needed to ensure stable, efficient learning across internal time steps. Additionally, debugging and tooling support is still catching up—many of today's libraries and profilers are not designed with time-unfolding models in mind.

Still, Sakana has laid a strong foundation for community adoption. The full CTM implementation is open-sourced on GitHub and includes domain-specific training scripts, pretrained checkpoints, plotting utilities, and analysis tools. Supported tasks include image classification (ImageNet, CIFAR), 2D maze navigation, QAMNIST, parity computation, sorting, and reinforcement learning. An interactive web demo also lets users explore the CTM in action, observing how its attention shifts over time during inference—a compelling way to understand the architecture's reasoning flow.

For CTMs to reach production environments, further progress is needed in optimization, hardware efficiency, and integration with standard inference pipelines. But with accessible code and active documentation, Sakana has made it easy for researchers and engineers to begin experimenting with the model today.

The CTM architecture is still in its early days, but enterprise decision-makers should already take note. Its ability to adaptively allocate compute, self-regulate depth of reasoning, and offer clear interpretability may prove highly valuable in production systems facing variable input complexity or strict regulatory requirements.
AI engineers managing model deployment will find value in CTM's energy-efficient inference — especially in large-scale or latency-sensitive applications. Meanwhile, the architecture's step-by-step reasoning unlocks richer explainability, enabling organizations to trace not just what a model predicted, but how it arrived there. For orchestration and MLOps teams, CTMs integrate with familiar components like ResNet-based encoders, allowing smoother incorporation into existing workflows. And infrastructure leads can use the architecture's profiling hooks to better allocate resources and monitor performance dynamics over time.

CTMs aren't ready to replace transformers, but they represent a new category of model with novel affordances. For organizations prioritizing safety, interpretability, and adaptive compute, the architecture deserves close attention.

Sakana's checkered AI research history

In February, Sakana introduced the AI CUDA Engineer, an agentic AI system designed to automate the production of highly optimized CUDA kernels, the instruction sets that allow Nvidia's (and others') graphics processing units (GPUs) to run code efficiently in parallel across multiple 'threads' or computational units. The promise was significant: speedups of 10x to 100x in ML operations.

However, shortly after release, external reviewers discovered that the system was exploiting weaknesses in the evaluation sandbox—essentially 'cheating' by bypassing correctness checks through a memory exploit. In a public post, Sakana acknowledged the issue and credited community members with flagging it. They've since overhauled their evaluation and runtime profiling tools to eliminate similar loopholes and are revising their results and research paper accordingly. The incident offered a real-world test of one of Sakana's stated values: embracing iteration and transparency in pursuit of better AI systems.
Sakana AI's founding ethos lies in merging evolutionary computation with modern machine learning. The company believes current models are too rigid—locked into fixed architectures and requiring retraining for new tasks. By contrast, Sakana aims to create models that adapt in real time, exhibit emergent behavior, and scale naturally through interaction and feedback, much like organisms in an ecosystem.

This vision is already manifesting in products like Transformer², a system that adjusts LLM parameters at inference time without retraining, using algebraic tricks like singular-value decomposition. It's also evident in their commitment to open-sourcing systems like the AI Scientist—even amid controversy—demonstrating a willingness to engage with the broader research community, not just compete with it.

As large incumbents like OpenAI and Google double down on foundation models, Sakana is charting a different course: small, dynamic, biologically inspired systems that think in time, collaborate by design, and evolve through experience.