
A New Assembly Line at Everett? Boeing Stock (NYSE:BA) Slips as Expansion Considered
Boeing wants to get its delivery numbers back up, particularly as the Chinese market comes back online and offers a potential new gold mine of orders. In fact, Boeing CEO Kelly Ortberg recently noted that the Everett plant may get an entire production line for the 737 Max 10 jet. The interesting part is that the jet in question has not yet been certified by the FAA, despite Boeing's efforts to get it to that point.
Boeing has had plans to ramp up new Max production capacity in the works since 2023, reports noted, but those plans have not yet been made concrete. A new Everett line would represent a huge step forward, and a strong signal that the Max 10's certification may be close at hand. After all, why set up a production line for a plane you are not legally allowed to build?
Another DEI Shutdown
Meanwhile, Boeing also pared back a diversity, equity and inclusion (DEI) program, a move it framed as working toward a goal of '…ensur(ing) compliance with the law.' Boeing shut down both events and funding for two kinds of programs: diversity councils and business resource groups. Boeing has nine such groups within its operations, serving as 'affinity groups,' or groups that worked to '…connect and develop individuals with a common interest or identity, including race and gender.'
It is worth noting that these groups are apparently 'open to all workers,' though one wonders how workers not aligned with the affinity group in question are treated. Boeing had already shuttered its entire DEI department as far back as November, and its head ultimately left the company altogether.
Is Boeing a Good Stock to Buy Right Now?
Turning to Wall Street, analysts have a Moderate Buy consensus rating on BA stock based on 15 Buys, four Holds and one Sell assigned in the past three months, as indicated by the graphic below. After a 17.21% rally in its share price over the past year, the average BA price target of $213.37 per share implies 3.34% upside potential.
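For readers who want to check the math, the target and upside figures above pin down the share price the upside is measured against. A minimal sketch; the implied price is back-solved here, not stated in the article:

```python
# The article reports an average price target of $213.37 and 3.34% upside.
avg_target = 213.37
upside = 0.0334

# Upside is measured against the current price:
# target = price * (1 + upside)  =>  price = target / (1 + upside)
implied_price = avg_target / (1 + upside)
print(round(implied_price, 2))  # approximately 206.47
```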

Related Articles


CNBC
18 minutes ago
Nvidia says its AI chips do not have a 'kill switch' after Chinese accusation
Nvidia on Tuesday rejected Chinese accusations that its data center GPUs for artificial intelligence include a hardware function that could remotely deactivate the chips, commonly called a "kill switch." "NVIDIA GPUs do not and should not have kill switches and backdoors," wrote Nvidia Chief Security Officer David Reber in a blog post on Tuesday.

The blog post comes after the Cyberspace Administration of China said last week that it needed Nvidia to provide documents about what it called security vulnerabilities in the H20, Nvidia's data center AI chip intended for the Chinese market. The regulator specifically mentioned "backdoor" security risks, according to the New York Times.

The statement is an example of how Nvidia is navigating geopolitical conflict as its AI chips remain in high demand by countries and companies around the world. U.S. lawmakers have proposed legislation that would require AI chips under export regulations to be equipped with location-tracking systems. The U.S. has placed export controls on some Nvidia chips to China for national security reasons, saying the country could use the chips to gain an advantage in AI or for military purposes. Nvidia CEO Jensen Huang has argued that it is better for the U.S. if Nvidia's chips become the global standard for AI computing, especially among Chinese developers.

The H20 generates billions of dollars in sales per quarter for Nvidia, although the company does not typically break out revenue for the chip specifically. The chip was briefly banned from export to China in April, and the company said its guidance would have been about $8 billion higher if not for sales lost to that export restriction. The Trump administration said in July that it would grant a waiver for the chips to resume sales.
Silicon Valley technologists and security experts generally believe that backdoors — hidden functions that would allow a government or attacker to secretly take data from a computer or otherwise control it — are untenable in products. Apple, in particular, has publicly fought off government requests in the past for what it calls "backdoors." Nvidia declined to comment further on its blog post.

Reber argued in the blog post that secret backdoors are dangerous vulnerabilities that could be exploited by hackers, not just officials, and that they "violate the fundamental principles of cybersecurity." He also said that putting a kill switch or backdoor into products like Nvidia GPUs would harm U.S. national security interests. "Hardwiring a kill switch into a chip is something entirely different: a permanent flaw beyond user control, and an open invitation for disaster," Reber wrote. "It's like buying a car where the dealership keeps a remote control for the parking brake — just in case they decide you shouldn't be driving."
Yahoo
an hour ago
OpenAI launches two ‘open' AI reasoning models
OpenAI announced Tuesday the launch of two open-weight AI reasoning models with similar capabilities to its o-series. Both are freely available to download from the online developer platform Hugging Face, the company said, describing the models as 'state-of-the-art' when measured across several benchmarks for comparing open models.

The models come in two sizes: a larger and more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory. The launch marks OpenAI's first 'open' language model since GPT-2, which was released more than five years ago.

In a briefing, OpenAI said its open models will be capable of sending complex queries to AI models in the cloud, as TechCrunch previously reported. That means if OpenAI's open model is not capable of a certain task, such as processing an image, developers can connect it to one of the company's more capable closed models.

While OpenAI open-sourced AI models in its early days, the company has generally favored a proprietary, closed-source development approach. The latter strategy has helped OpenAI build a large business selling access to its AI models via an API to enterprises and developers. However, CEO Sam Altman said in January he believes OpenAI has been 'on the wrong side of history' when it comes to open sourcing its technologies. The company today faces growing pressure from Chinese AI labs — including DeepSeek, Alibaba's Qwen, and Moonshot AI — which have developed several of the world's most capable and popular open models. (While Meta previously dominated the open AI space, the company's Llama models have fallen behind in the last year.) In July, the Trump Administration also urged U.S. AI developers to open source more technology to promote global adoption of AI aligned with American values.
With the release of gpt-oss, OpenAI hopes to curry favor with developers and the Trump Administration alike, both of which have watched the Chinese AI labs rise to prominence in the open source space. 'Going back to when we started in 2015, OpenAI's mission is to ensure AGI that benefits all of humanity,' said OpenAI CEO Sam Altman in a statement shared with TechCrunch. 'To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit.'

How the models performed

OpenAI aimed to make its open models leaders among other open-weight AI models, and the company claims to have done just that. On Codeforces (with tools), a competitive coding test, gpt-oss-120b and gpt-oss-20b score 2622 and 2516, respectively, outperforming DeepSeek's R1 while underperforming o3 and o4-mini. On Humanity's Last Exam, a challenging test of crowd-sourced questions across a variety of subjects (with tools), gpt-oss-120b and gpt-oss-20b score 19% and 17.3%, respectively. Similarly, this underperforms o3 but outperforms leading open models from DeepSeek and Qwen.

Notably, OpenAI's open models hallucinate significantly more than its latest AI reasoning models, o3 and o4-mini. Hallucinations have been getting more severe in OpenAI's latest AI reasoning models, and the company previously said it doesn't quite understand why. In a white paper, OpenAI says this is 'expected, as smaller models have less world knowledge than larger frontier models and tend to hallucinate more.' OpenAI found that gpt-oss-120b and gpt-oss-20b hallucinated in response to 49% and 53% of questions on PersonQA, the company's in-house benchmark for measuring the accuracy of a model's knowledge about people. That's more than triple the hallucination rate of OpenAI's o1 model, which scored 16%, and higher than its o4-mini model, which scored 36%.
Training the new models

OpenAI says its open models were trained with processes similar to those used for its proprietary models. Each open model uses a mixture-of-experts (MoE) architecture to tap fewer parameters for any given question, making it run more efficiently. For gpt-oss-120b, which has 117 billion total parameters, OpenAI says the model activates only 5.1 billion parameters per token.

The company also says its open models were trained using high-compute reinforcement learning (RL) — a post-training process that teaches AI models right from wrong in simulated environments using large clusters of Nvidia GPUs. The same approach was used to train OpenAI's o-series models, and the open models have a similar chain-of-thought process in which they take additional time and computational resources to work through their answers.

As a result of the post-training process, OpenAI says its open models excel at powering AI agents and are capable of calling tools such as web search or Python code execution as part of their chain-of-thought process. However, the open models are text-only, meaning they will not be able to process or generate images and audio like the company's other models.

OpenAI is releasing gpt-oss-120b and gpt-oss-20b under the Apache 2.0 license, generally considered one of the most permissive licenses. It allows enterprises to monetize OpenAI's open models without having to pay or obtain permission from the company. However, unlike fully open source offerings from AI labs like AI2, OpenAI will not be releasing the training data used to create its open models. That decision is not surprising given that several active lawsuits against AI model providers, including OpenAI, allege that these companies inappropriately trained their AI models on copyrighted works. OpenAI delayed the release of its open models several times in recent months, partially to address safety concerns.
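The mixture-of-experts idea described above can be illustrated with a toy routing function. This is a sketch in plain NumPy, not OpenAI's implementation; the expert count, hidden size, and top-k value are made-up illustration values. The point is the same one the article makes about gpt-oss-120b: each token only touches a small fraction of the model's total parameters (5.1 billion of 117 billion, in the real model).

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8   # hypothetical number of experts in this toy model
top_k = 2       # experts activated per token
d_model = 16    # toy hidden size

# One tiny feed-forward "expert" per slot: a single weight matrix each.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                 # router score for each expert
    chosen = np.argsort(logits)[-top_k:]  # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()              # softmax over the chosen experts only
    # Only the chosen experts' parameters are used for this token.
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))
    return out, chosen

token = rng.standard_normal(d_model)
out, used = moe_forward(token)

# Only top_k of n_experts expert matrices were touched for this token.
active_params = top_k * d_model * d_model
total_params = n_experts * d_model * d_model
print(active_params / total_params)  # 0.25 in this toy setup
```

In this toy setup, each token activates a quarter of the expert parameters; gpt-oss routes far more aggressively, activating roughly 4% of its total parameters per token.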
Beyond the company's typical safety policies, OpenAI says in a white paper that it also investigated whether bad actors could fine-tune its gpt-oss models to be more helpful in cyberattacks or the creation of biological or chemical weapons. After testing by OpenAI and third-party evaluators, the company says gpt-oss may marginally increase biological capabilities. However, it did not find evidence that these open models could reach its 'high capability' threshold for danger in these domains, even after fine-tuning.

While OpenAI's models appear to be state-of-the-art among open models, developers are eagerly awaiting the release of DeepSeek R2, the Chinese lab's next AI reasoning model, as well as a new open model from Meta's superintelligence lab.


CNBC
an hour ago
OpenAI releases lower-cost models to rival Meta, Mistral and DeepSeek
OpenAI on Tuesday released two open-weight language models for the first time since it rolled out GPT-2 in 2019. The text-only models are called gpt-oss-120b and gpt-oss-20b, and are designed to serve as lower-cost options that developers, researchers and companies can easily run and customize, OpenAI said.

An artificial intelligence model is considered open weight if its parameters — the elements that improve its outputs and predictions during training — are publicly available. Open-weight models can offer transparency and control, but they are different from open-source models, whose full source code becomes available for people to use and modify. Several other tech companies, including Meta, Microsoft-backed Mistral AI and the Chinese startup DeepSeek, have also released open-weight models in recent years.

"It's been exciting to see an ecosystem develop, and we are excited to contribute to that and really push the frontier and then see what happens from there," OpenAI President Greg Brockman told reporters during a briefing. The company collaborated with Nvidia, Advanced Micro Devices, Cerebras and Groq to ensure the models will work well on a variety of chips. "OpenAI showed the world what could be built on Nvidia AI — and now they're advancing innovation in open-source software," Nvidia CEO Jensen Huang said in a statement.

The release of OpenAI's open-weight models has been highly anticipated, in part because the company repeatedly delayed the launch. In a post on X in July, OpenAI CEO Sam Altman said the company needed more time to "run additional safety tests and review high-risk areas." That came weeks after a separate post in which Altman said the models would not be released in June. OpenAI said Tuesday that it carried out extensive safety training and testing on its open-weight models.
It filtered out harmful chemical, biological, radiological and nuclear data during pre-training, and it mimicked how bad actors could try to fine-tune the models for malicious purposes. Through this testing, OpenAI said it determined that maliciously fine-tuned models were not able to reach the "high capability" threshold in its Preparedness Framework, its method for measuring and protecting against harm. The company also worked with three independent expert groups who provided feedback on its malicious fine-tuning evaluation, OpenAI said.

OpenAI said people can download the weights for gpt-oss-120b and gpt-oss-20b on platforms like Hugging Face and GitHub under an Apache 2.0 license. The models will be available to run on PCs through programs such as LM Studio and Ollama. Cloud providers Amazon, Baseten and Microsoft are also making the models available.

Both models can handle advanced reasoning, tool use and chain‑of‑thought processing, and are designed to run anywhere — from consumer hardware to the cloud to on-device applications. Users can run gpt-oss-20b on a laptop, for instance, and use it as a personal assistant that can search through files and write, OpenAI said. "We're excited to make this model, the result of billions of dollars of research, available to the world to get AI into the hands of the most people possible," Altman said in a statement Tuesday.