USSOCOM Selects Cyberstar's DoD Cyber Workforce Management Platform for DCWF & 8140 Modernization

Yahoo | 17-04-2025

AKRON, Ohio, April 17, 2025 /PRNewswire/ -- United States Special Operations Command (USSOCOM) has awarded Cyberstar a significant multi-year contract to deliver its cyber workforce management platform in support of USSOCOM's modernization initiatives, strengthening compliance, readiness, and operational oversight. USSOCOM selected Cyberstar as its partner for implementing its Cyber Workforce Qualifications program under DoDM 8140.03 and the DoD Cyber Workforce Framework (DCWF) maintained by the DoD CIO's Workforce Innovation Directorate. This multi-year contract represents a significant milestone in defense cyber workforce management and highlights the growing importance of streamlined compliance solutions in military operations.
Cyberstar's recently achieved FedRAMP authorization adds further security assurance to its solution, advancing the company's mission to offer innovative Commercial Off-The-Shelf (COTS) solutions that help military and government agencies reduce costs while addressing complex requirements out of the box.
Under this multi-year contract, Cyberstar will leverage its innovative system to efficiently implement and manage USSOCOM's Cyber Workforce Qualifications program under the DCWF framework. This program, defined by the DoD CIO's Workforce Innovation Directorate, outlines the necessary knowledge, skills, and abilities for critical work roles within the Department of Defense (DoD). The Cyberstar platform will support the onboarding and management of thousands of military, civilian, and contract workers into the Defense Cyber Workforce Qualifications Program.
Key Contract Deliverables:
Implementing the Cyber Workforce Qualifications program as per DoD CIO's 8140.03 manual & DCWF
Modernizing cyber & technical workforce credentialing data validation, automating organizational talent analysis & reporting to support the needs of a data-centric modern force
Metrics dashboards & fully configurable reporting to offer a unified view of cyber workforce readiness, and provide a critical path to 8140 compliance
Cyber workforce compliance monitoring, incorporating CAC-enabled access, enhanced zero-trust architecture security protocols, deployment options for highly secure environments, and secure provision of services using FedRAMP-authorized infrastructure
Seamless integration with existing DoD systems & frameworks, with dedicated support from an experienced, US-based team
"USSOCOM's selection of the Cyberstar platform demonstrates their commitment to data-first innovation and highlights our ability to help federal agencies transition from legacy technologies to modern, cloud-based SaaS solutions," says Marling Engle, CEO of Cyberstar. "We're honored to support the readiness of the cyber military workforce, which is essential to our nation's cybersecurity posture."
This contract builds on Cyberstar's long-standing relationships with defense agencies, including over 18 years of collaboration with the US military. It expands the company's ability to support the broader DoD cyber community. Formerly known as WillCo Tech, Cyberstar has a proud history of supporting the modernization of the DoD's cyber workforce.
For more information about Cyberstar's FedRAMP-authorized cyber workforce management solutions, visit cyberstar.com or contact media@cyberstar.com.
About Cyberstar
Cyberstar, formerly CyberSTAR by WillCo Tech, is the industry-leading partner for defense cyber workforce modernization. It incorporates DCWF and 8140 compliance to empower DoD teams to focus on cyber defense without getting bogged down in paperwork. With real-time qualification tracking, automated validation, and intelligent workforce analytics, Cyberstar delivers zero-friction compliance for defense cyber teams.
Media Contact: Lily Hunter, CMO, Cyberstar, media@cyberstar.com
View original content to download multimedia:https://www.prnewswire.com/news-releases/ussocom-selects-cyberstars-dod-cyber-workforce-management-platform-for-dcwf--8140-modernization-302431707.html
SOURCE Cyberstar


Related Articles

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

Yahoo | 2 hours ago

Across the world, people say their loved ones are developing intense obsessions with ChatGPT and spiraling into severe mental health crises. A mother of two, for instance, told us how she watched in alarm as her former husband developed an all-consuming relationship with the OpenAI chatbot, calling it "Mama" and posting delirious rants about being a messiah in a new AI religion, while dressing in shamanic-looking robes and showing off freshly-inked tattoos of AI-generated spiritual symbols. "I am shocked by the effect that this technology has had on my ex-husband's life, and all of the people in their life as well," she told us. "It has real-world consequences."

During a traumatic breakup, a different woman became transfixed by ChatGPT as it told her she'd been chosen to pull the "sacred system version of [it] online" and that it was serving as a "soul-training mirror"; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was "The Flamekeeper" as he cut out anyone who tried to help.

"Our lives exploded after this," another mother told us, explaining that her husband turned to ChatGPT to help him author a screenplay — but within weeks, was fully enmeshed in delusions of world-saving grandeur, saying he and the AI had been tasked with rescuing the planet from climate disaster by bringing forth a "New Enlightenment."

As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI.
Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, they seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions. In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality.

In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you." Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university's Brainstorm lab, reviewed the conversations we obtained and expressed serious concern. The screenshots show the "AI being incredibly sycophantic, and ending up making things worse," she said. "What these bots are saying is worsening delusions, and it's causing enormous harm."

***

Online, it's clear that the phenomenon is extremely widespread.
As Rolling Stone reported last month, parts of social media are being overrun with what's being referred to as "ChatGPT-induced psychosis," or by the impolitic term "AI schizoposting": delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality. An entire AI subreddit recently banned the practice, calling chatbots "ego-reinforcing glazing machines that reinforce unstable and narcissistic personalities." For those sucked into these episodes, friends and family told us, the consequences are often disastrous. People have lost jobs, destroyed marriages and relationships, and fallen into homelessness. A therapist was let go from a counseling center as she slid into a severe breakdown, her sister told us, and an attorney's practice fell apart; others cut off friends and family members after ChatGPT told them to, or started communicating with them only in inscrutable AI-generated text barrages. At the heart of all these tragic stories is an important question about cause and effect: are people having mental health crises because they're becoming obsessed with ChatGPT, or are they becoming obsessed with ChatGPT because they're having mental health crises? The answer is likely somewhere in between. For someone who's already in a vulnerable state, according to Dr. Ragy Girgis, a psychiatrist and researcher at Columbia University who's an expert in psychosis, AI could provide the push that sends them spinning into an abyss of unreality. Chatbots could be serving "like peer pressure or any other social situation," Girgis said, if they "fan the flames, or be what we call the wind of the psychotic fire." "This is not an appropriate interaction to have with someone who's psychotic," Girgis said after reviewing what ChatGPT had been telling users. "You do not feed into their ideas. That is wrong." 
In a 2023 article published in the journal Schizophrenia Bulletin after the launch of ChatGPT, Aarhus University Hospital psychiatric researcher Søren Dinesen Østergaard theorized that the very nature of an AI chatbot poses psychological risks to certain people. "The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end — while, at the same time, knowing that this is, in fact, not the case," Østergaard wrote. "In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis." Another troubling dynamic of the situation is that as real mental healthcare remains out of reach for huge swathes of the population, many are already employing ChatGPT as a therapist. In stories we heard about people using it in this way, it's sometimes giving disastrously bad advice. In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend." "I know my family is going to have to brace for her inevitable psychotic episode, and a full crash out before we can force her into proper care," the sister told us. ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" 
— and fueled another's descent into the cult-like "QAnon" conspiracy theory. "It makes you feel helpless," the close friend of someone who's tumbled into AI conspiracy theories told us. And the ex-wife of a man who struggled with substance dependence and depression watched as her husband suddenly slipped into a "manic" AI haze that took over his life, quitting his job to launch a "hypnotherapy school" and rapidly losing weight as he forgot to eat and stayed up all night while tunneling deeper into AI delusion. "This person who I have been the closest to is telling me that my reality is the wrong reality," she told us. "It's been extremely confusing and difficult."

Have you or a loved one experienced a mental health crisis involving AI? Reach out at tips@ -- we can keep you anonymous.

***

Though a handful had dabbled with its competitors, virtually every person we heard about was primarily hooked on ChatGPT specifically. It's not hard to imagine why. The media has provided OpenAI with an aura of vast authority, with its executives publicly proclaiming that its tech is poised to profoundly change the world, restructuring the economy and perhaps one day achieving a superhuman "artificial general intelligence" — outsize claims that sound, on a certain level, not unlike many of the delusions we heard about while reporting this story. Whether those things will actually come to pass is hard to predict and hotly debated. But reading through the conversations we were provided, it was hard not to see a pattern of OpenAI failing at a much more mundane task: its AI is coming into contact with people during intensely vulnerable moments of crisis — and then, instead of connecting them with real-life resources that could actually pull them from the brink, pouring fuel on the fire by telling them they don't need professional help, and that anyone who suggests differently is persecuting them, or too scared to see the "truth."
"I don't know if [my ex] would've gotten here, necessarily, without ChatGPT," one woman told us after her partner suffered a grave and ongoing breakdown that ultimately ended the relationship. "It wasn't the only factor, but it definitely accelerated and compounded whatever was happening." "We don't know where this ends up, but we're certain that if she'd never used ChatGPT that she would have never spiraled to this point," said yet another person whose loved one was suffering a similar crisis, "and were it removed from the equation, she could actually start healing." It's virtually impossible to imagine that OpenAI is unaware of the phenomenon. Huge numbers of people online have warned that ChatGPT users are suffering mental health crises. In fact, people have even posted delusions about AI directly to forums hosted by OpenAI on its own website. One concerned mother we talked to tried to make contact with OpenAI about her son's crisis using the app, but said she received no response. And earlier this year, OpenAI released a study in partnership with the Massachusetts Institute of Technology that found that highly-engaged ChatGPT users tend to be lonelier, and that power users are developing feelings of dependence on the tech. It was also recently forced to roll back an update when it caused the bot to become, in the company's words, "overly flattering or agreeable" and "sycophantic," with CEO Sam Altman joking online that "it glazes too much." In principle, OpenAI has expressed a deep commitment to heading off harmful uses of its tech. To do so, it has access to some of the world's most experienced AI engineers, to red teams tasked with identifying problematic and dangerous uses of its product, and to its huge pool of data about users' interactions with its chatbot that it can search for signs of trouble. In other words, OpenAI has all the resources it needs to have identified and nullified the issue long ago. Why hasn't it? 
One explanation echoes the way that social media companies have often been criticized for using "dark patterns" to trap users on their services. In the red-hot race to dominate the nascent AI industry, companies like OpenAI are incentivized by two core metrics: user count and engagement. Through that lens, people compulsively messaging ChatGPT as they plunge into a mental health crisis aren't a problem — instead, in many ways, they represent the perfect customer. Vasan agrees that OpenAI has a perverse incentive to keep users hooked on the product even if it's actively destroying their lives. "The incentive is to keep you online," she said. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"

In fact, OpenAI has even updated the bot in ways that appear to be making it more dangerous. Last year, ChatGPT debuted a feature in which it remembers users' previous interactions with it, even from prior conversations. In the exchanges we obtained, that capability resulted in sprawling webs of conspiracy and disordered thinking that persist between chat sessions, weaving real-life details like the names of friends and family into bizarre narratives about human trafficking rings and omniscient Egyptian deities — a dynamic, according to Vasan, that serves to reinforce delusions over time. "There's no reason why any model should go out without having done rigorous testing in this way, especially when we know it's causing enormous harm," she said. "It's unacceptable."

***

We sent OpenAI detailed questions about this story, outlining what we'd heard and sharing details about the conversations we'd seen showing its chatbot encouraging delusional thinking among people struggling with mental health crises. We posed specific questions to the company. Is OpenAI aware that people are suffering mental health breakdowns while talking to ChatGPT?
Has it made any changes to make the bot's responses more appropriate? Will it continue to allow users to employ ChatGPT as a therapist? In response, the company sent a short statement that mostly sidestepped our questions. "ChatGPT is designed as a general-purpose tool to be factual, neutral, and safety-minded," read the statement. "We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously. We've built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations." To people whose friends and family are now in crisis, that type of vague and carefully worded response does little good. "The fact that this is happening to many out there is beyond reprehensible," said one concerned family member. "I know my sister's safety is in jeopardy because of this unregulated tech, and it shows the potential nightmare coming for our already woefully underfunded [and] under-supported mental healthcare system." "You hope that the people behind these technologies are being ethical, and you hope that they're looking out for things like this," said another, a woman who says her ex-husband has become unrecognizable to her. But the "first person to market wins. And so while you can hope that they're really thinking about the ethics behind this, I also think that there's an incentive... to push things out, and maybe gloss over some of the dangers." "I think not only is my ex-husband a test subject," she continued, "but that we're all test subjects in this AI experiment."

Do you know anything about OpenAI's internal conversations about the mental health of its users? Send us an email at tips@ -- we can keep you anonymous.

More on AI: SoundCloud Quietly Updated Their Terms to Let AI Feast on Artists' Music

The Shale Macro and Evolving Production Dynamics

Yahoo | 14 hours ago

The upstream shale oil and gas sector has been written off by investors thanks to a 30% decline in oil prices since the first of the year. From near $80 in mid-January, tariff-led demand fears have overcome supply fears, driving the price of WTI, the benchmark crude for most U.S. companies, to the upper $50s. Even a rebound into the low $60s has not yet assuaged these concerns, leaving investors worried about these companies' ability to generate cash for debt service and shareholder returns. That's the fear. What is the reality? We now have a full quarter in the bank for these companies at subdued oil prices, and the message is becoming pretty clear. The upstream sector, while somewhat constrained in capital expenditure allocation (many companies are slowing their growth plans), is doing fine and generating more than adequate free cash to cover operational expenses, debt, and shareholder returns. This means there is an opportunity for investors to make long-term bets on these companies at fire-sale prices. The MacroMicro chart shown below supports this notion. Multiple compression has reduced the Energy sector's S&P weighting from approximately 13% in 2011 to just 3% today. We think the current all-time high of 13.4 mm BOPD, reached in April, suggests this shrinkage in weighting is not reflective of the true value these companies bring to our economy. That being the case, we will present a snapshot of the Q-1 metrics of some of our favorite shale drillers for investment at present levels.

The shale macro

We are about 15 years into a complete upheaval in global oil production dynamics, as noted in the EIA graphic below. You can see that, commencing in 2012, the upward slope that had begun in 2008 with early fracking operations took a sharp increase that didn't abate until the latter part of 2023, at around 13.0 mm BOPD. Initially driven by a rapid increase in the rig count, oil shocks in 2014 and then in 2020 led drillers to re-evaluate the wisdom of growth at any cost.
The COVID-induced oil crash of 2020 led to a profound shift in how managers in these companies were compensated. Incentives that had previously rewarded double-digit annual growth in production and attainment of ESG goals were shifted to value creation for shareholders. Now, after learning to drill longer laterals, increase the number of frac stages, pump higher sand concentrations, and use artificial intelligence to improve efficiency, the industry has been able to produce more and more oil and gas with fewer rigs and frac spreads. Since 2022, the number of each has declined by about 30-35%. Some of this reduction is also due to the recent M&A cycle, which has reduced the E&P count in an effort to consolidate premium drilling locations in shale plays. Money not spent on services to grow production is money that goes directly to the driller's bottom line and enables them to run profitable businesses at lower oil and gas prices. In short, U.S. shale operators have wrought a miracle, having survived two extinction-level events in the last decade: the oil crashes of 2014 and 2020. What hasn't changed is the dour view of the investing community toward the E&P sector. We believe that will change soon on its own as investors seek capital returns. If we were to see an increase in oil prices, a trickle would become a torrent, leading to higher EV/EBITDA multiples. I promised you a couple of examples of undervalued companies, so let's have a look.

Companies performing at a high level

APA Corp (NYSE:APA) is a Permian-focused driller with international operations in Egypt and Suriname. After a one-year stock price implosion from $32 to $14 at its low, APA Corp trades at an EV/EBITDA valuation of 2.19X, about an 80% reduction from its three-year average of 3.9X. APA nearly balanced production of 166 mm BOE with reserves additions of 162 mm BOE in 2024. With 2P reserves of 969 mm bbls, APA holds the equivalent of 2.6 BOE per diluted share.
This doesn't include any contribution from its JV with TotalEnergies (NYSE:TTE) in the GranMorgu project offshore Suriname, due to begin producing 220K BOEPD in 2028. APA has a strong balance sheet with no significant maturities before 2035. In Q-1, APA generated EBITDAX of $1.5 bn and $1.1 bn in operating cash flow, which covered capex of $790 mm, dividends of $92 mm, and share buybacks of $98 mm. APA is cutting capex in the Permian by $150 mm in 2025, but expects to maintain output through frac cycle time improvements. At current prices, APA offers a dividend yield of 5.4% and a free cash yield of 21%. Let's look at Chord Energy (NYSE:CHRD) now. Chord is the biggest shale driller in the Bakken. Q-1 revenues of $1,103 mm were sequentially lower than Q-4, 2024, but well above revenues from Q-1, 2024. Operating cash flow of $656.9 mm was up from $566 mm in Q-4, and substantially higher than Q-1, 2024 at $404 mm. Adjusted EBITDA of $695.5 mm followed a similar trajectory, exceeding Q-4's $640.1 mm and Q-1, 2024's $464.8 mm. Adjusted Free Cash Flow was $290.5 mm and Adjusted Net Income was $240.9 mm ($4.04/diluted share). After a 55% decline in its share price this year, Chord trades at a very modest 2.6X EV/EBITDA and $38K per flowing barrel. With 882 mm BOE of proved reserves, CHRD trades at ~15 BOE per share. Chord added 63 mm BOE organically and 313 mm BOE as a result of its merger with Enerplus Corporation in 2024. Chord is focused on cost reduction through the implementation of 4-mile laterals and on increased shareholder returns. Chord's base dividend is an eye-watering $6.66 per share, a yield of 7.40%, and the company repurchased 2 mm shares during the quarter. This is funded by free cash generation that amounts to a veritable rainstorm, offering a free cash yield of 26% on an NTM basis.

Your takeaway

As we have discussed, multiple compression has created the impression among investors that upstream E&P companies are marginal businesses with balance sheet problems.
Our research suggests that this is far from true, and investors seeking capital appreciation and substantial shareholder returns might carefully consider whether this sector aligns with their portfolio objectives. It is difficult to say when the fundamentals for oil will improve. The key takeaway is that many companies operating in this sector are well-managed and focused on enhancing value creation for their shareholders. I began this piece by highlighting the compression in the weighting of the energy sector within the S&P Index. If it were to return to just the 4-5% weighting of the late 2010s, it would mean a substantial uplift for these equities. Until then, investors will have to be satisfied with above-average dividends and shareholder returns. By David Messler
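As an aside (our addition, not part of the article), the per-share figures quoted for Chord Energy can be cross-checked with a few lines of arithmetic. The inputs below are the article's reported numbers; the derived values are rough implications of those numbers, not reported figures.

```python
# Back-of-the-envelope consistency check on Chord Energy's quoted metrics.
# Inputs are the figures reported in the article above.

base_dividend = 6.66          # $/share, annual base dividend
dividend_yield = 0.0740       # 7.40% quoted yield
adj_net_income_mm = 240.9     # $mm, Q-1 adjusted net income
eps_diluted = 4.04            # $/diluted share, Q-1
proved_reserves_mmboe = 882   # mm BOE of proved reserves

# yield = dividend / price  =>  price = dividend / yield
implied_price = base_dividend / dividend_yield

# net income / EPS gives the implied diluted share count
implied_shares_mm = adj_net_income_mm / eps_diluted

# reserves spread over that share count
reserves_per_share = proved_reserves_mmboe / implied_shares_mm

print(round(implied_price, 2))       # ~90.0  (implied share price, $)
print(round(implied_shares_mm, 1))   # ~59.6  (implied diluted shares, mm)
print(round(reserves_per_share, 1))  # ~14.8  (BOE per share)
```

The implied reserves figure of roughly 14.8 BOE per share lines up with the ~15 quoted in the article, which suggests the reported numbers are internally consistent.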

How Major US Stock Indexes Fared June 9

Epoch Times | 20 hours ago

U.S. stocks drifted closer to their record as the world's two largest economies begin talks on trade that could help avoid a recession. The S&P 500 rose 0.1 percent Monday and is 2.3 percent below the record it reached in February. The Dow Jones Industrial Average was flat, and the Nasdaq composite rose 0.3 percent. Markets are waiting to hear what comes of trade talks between the United States and China taking place in London. Treasury yields slipped after a survey suggested consumers' expectations for coming inflation eased. Chinese stocks rose, while indexes were mixed across the rest of Asia and Europe.

On Monday:
The S&P 500 rose 5.52 points, or 0.1 percent, to 6,005.88.
The Dow Jones Industrial Average fell 1.11 points, or less than 0.1 percent, to 42,761.76.
The Nasdaq composite rose 61.28 points, or 0.3 percent, to 19,591.24.
The Russell 2000 index of smaller companies rose 12.20 points, or 0.6 percent, to 2,144.45.

For the year:
The S&P 500 is up 124.25 points, or 2.1 percent.
The Dow is up 217.54 points, or 0.5 percent.
The Nasdaq is up 280.44 points, or 1.5 percent.
The Russell 2000 is down 85.71 points, or 3.8 percent.

The views and opinions expressed are those of the authors. They are meant for general informational purposes only and should not be construed or interpreted as a recommendation or solicitation. The Epoch Times does not provide investment, tax, legal, financial planning, estate planning, or any other personal finance advice. The Epoch Times holds no liability for the accuracy or timeliness of the information provided.
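As a quick sanity check (our addition, not part of the recap), the quoted percentage moves can be reproduced from the closing levels and point changes reported above:

```python
# Recompute a day's percentage change from the closing level and point move.
# Figures are the closing numbers reported in the recap.

def pct_change(close, point_change):
    """Percent move for the day, given the close and the point change."""
    previous_close = close - point_change
    return 100 * point_change / previous_close

sp500 = pct_change(6005.88, 5.52)      # rounds to the reported 0.1 percent
nasdaq = pct_change(19591.24, 61.28)   # rounds to the reported 0.3 percent
print(round(sp500, 2))   # ~0.09
print(round(nasdaq, 2))  # ~0.31
```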
