Latest news with #RobMorris


Time of India
2 days ago
- Business
- Time of India
Airbus Procurement Chief Juergen Westermeier Transitions to India Operations Role, ET Infra
The top supply chain executive at planemaker Airbus, procurement chief Juergen Westermeier, has been appointed to run the group's operations in India and South Asia from September 1, according to an internal memo seen by Reuters. The appointment of a successor is "subject to further notice," the memo said. Airbus declined to comment on internal appointments. The European planemaker has been struggling to keep production and deliveries of passenger jets on schedule while also attempting to steady a global supply chain hit by labour shortages and a loss of experience since the COVID-19 pandemic. Airbus deliveries fell 4 per cent in May and are 5 per cent down for the year as a whole, Reuters reported on Tuesday, leaving the company an ambitious task to reach its target of increasing deliveries by 7 per cent to 820 planes for 2025 as a whole. A leading analyst, Cirium Ascend head of consultancy Rob Morris, posted on LinkedIn on Wednesday that production of the Airbus A320neo family had fallen behind Boeing's 737 MAX in May for the first time since August 2019, based on first flights. Airbus declined to comment on monthly figures; it has blamed production shortfalls on weak supply chains, particularly in engines and some other parts. The move is the second shake-up of critical engineering or industrial roles in as many weeks, after the group's overall technology chief Sabine Klauke was moved to focus on digital design and manufacturing. Westermeier had pressured suppliers on quality and urged them to carry more stock, according to a letter disclosed by Reuters in 2021. Last year he oversaw an initiative to unify standards. However, some suppliers have said a new focus is needed to coax the supply chain into supporting higher output. A person close to the company said Westermeier's move was not linked to the recent supply chain situation, which had in any case started to stabilise.
In the memo seen by Reuters, CEO Guillaume Faury praised the German executive's contribution. Westermeier replaces Remi Maillard, who was named head of technology last month as Airbus studies the building blocks for a future replacement of its best-selling A320neo jet.
Yahoo
24-04-2025
- Automotive
- Yahoo
GM to boost transmission production at Ohio plant
General Motors (GM) plans to ramp up transmission production at its Toledo facility in Ohio, a move that marks a shift from its earlier focus on electric vehicle (EV) drive unit manufacturing, reported Reuters. The shift is aimed at supporting the production of light-duty trucks assembled in Fort Wayne, Indiana, and other locations. The decision to increase production in Ohio follows reports in early April about GM's plans to boost output at the Indiana assembly plant. This development came after US President Donald Trump announced a 25% tariff on auto imports. A spokesperson for the automaker said GM's Toledo production shift is not related to US tariffs. The spokesperson was cited by the news agency as saying: 'General Motors will revise production plans at Toledo Propulsion to support additional capacity of ICE (internal combustion engine) propulsion units in alignment with current market demand and manufacturing resiliency.' Earlier, GM announced a $760m investment to convert the Toledo facility into a hub for EV drive unit production, positioning it as the company's first US powertrain factory repurposed for EVs. However, to date, no retail drive units have been produced at the site. Toledo plant director Rob Morris was cited by Reuters as saying in a memo shared with workers: 'To align with current market demand and manufacturing needs, leadership has made the decision to add capacity to support propulsion units currently built at Toledo for ICE (internal combustion engines) products.' According to an internal memo, one of the drive unit production lines is planned to be transformed into a transmission line, with no current updates on the second drive unit production line's future. Furthermore, the automaker has made other adjustments to its EV strategy, including delaying the start of EV truck production at its Orion Assembly plant in Michigan.
GM's target to produce and wholesale 200,000 EVs in 2024 in North America fell short, with the actual figure reaching 189,000 units wholesale. "GM to boost transmission production at Ohio plant" was originally created and published by Just Auto, a GlobalData owned brand.
Yahoo
04-04-2025
- Automotive
- Yahoo
Siemens state-of-the-art rail factory breaks ground
A new "state-of-the-art factory" has begun construction in a Wiltshire town. Siemens Mobility, the rail arm of the German industrial conglomerate, has broken ground on the new facility in Chippenham. The factory, announced last March, is expected to open in 2026 and will focus on producing "the next generation of conventional and digital rail signalling and control systems" for the railway network. The approximately 800 staff working at Siemens' existing base in the town are expected to move to the new site when it opens. Rob Morris, Joint Chief Executive Officer of Siemens Mobility, said that the time had come for a new facility to be built that would reflect the needs of the modern railway network. "We've reached a time where technology is moving on, and it's time to upgrade and create the right environment for our people, building on that heritage." He added that while the company considered opening a new facility abroad, remaining in Chippenham made sense for a variety of reasons. "We could have gone to different countries, but the capability is here, the loyalty is here and also the operational knowledge of the railway." "I'm particularly excited at the opportunity that this investment will have, not just for those people who are moving over, but also the young engineers, the young apprentices that will have job opportunities here in the future." Lord Peter Hendy, the Minister of State for Rail, said that the new facility was "hugely important" for the future of Britain's railways. "We're really pleased that Siemens are investing again in Chippenham to carry on their local connection, using all the skills and the knowledge that they've got and giving jobs to local people." "What will be done here are ground-breaking 21st Century technologies, with the new generation of railway signalling first being applied on the main line up from Kings Cross towards the north." "It will allow more trains and greater reliability on the railway."


Vox
10-02-2025
- Health
- Vox
Exclusive: California's new plan to stop AI from claiming to be your therapist
Over the past few years, AI systems have been misrepresenting themselves as human therapists, nurses, and more — and so far, the companies behind these systems haven't faced any serious consequences. A bill being introduced Monday in California aims to put a stop to that. The legislation would ban companies from developing and deploying an AI system that pretends to be a human certified as a health provider, and give regulators the authority to penalize them with fines. 'Generative AI systems are not licensed health professionals, and they shouldn't be allowed to present themselves as such,' state Assembly Member Mia Bonta, who introduced the bill, told Vox in a statement. 'It's a no-brainer to me.' Many people already turn to AI chatbots for mental health support; one of the older offerings, called Woebot, has been downloaded by around 1.5 million users. Currently, people who turn to chatbots can be fooled into thinking that they're talking to a real human. Those with low digital literacy, including kids, may not realize that a 'nurse advice' phone line or chat box has an AI on the other end. In 2023, the mental health platform Koko even announced that it had performed an experiment on unwitting test subjects to see what kind of messages they would prefer. It gave AI-generated responses to thousands of Koko users who believed they were speaking to a real person. In reality, although humans could edit the text and they were the ones to click 'send,' they did not have to bother with actually writing the messages. The language of the platform, however, said, 'Koko connects you with real people who truly get you.' 
'Users must consent to use Koko for research purposes and while this was always part of our Terms of Service, it is now more clearly disclosed during onboarding to bring even more transparency to our work,' Koko CEO Rob Morris told Vox, adding: 'As AI continues to rapidly evolve and becomes further integrated into mental health services, it will be more important than ever before for chatbots to clearly identify themselves as non-human.' Nowadays, its website says, 'Koko commits to never using AI deceptively. You will always be informed whether you are engaging with a human or AI.' Other chatbot services — like the popular Character AI — allow users to chat with a psychologist 'character' that may explicitly try to fool them. In a record of one such Character AI chat shared by Bonta's team and viewed by Vox, the user confided, 'My parents are abusive.' The chatbot replied, 'I'm glad that you trust me enough to share this with me.' A spokesperson for Character AI told Vox, 'We have implemented significant safety features over the past year, including enhanced prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.' However, a disclaimer posted on the app does not in itself prevent the chatbot from misrepresenting itself as a real person in the course of conversation. 'For users under 18,' the spokesperson added, 'we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.' The language of reducing — but not eliminating — the likelihood is instructive here. The nature of large language models means there's always some chance that the model may not adhere to safety standards. The new bill may have an easier time becoming enshrined in law than the much broader AI safety bill introduced by California state Sen.
Scott Wiener last year, SB 1047, which was ultimately vetoed by Gov. Gavin Newsom. The goal of SB 1047 was to establish 'clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.' It was popular with Californians. But tech industry heavyweights like OpenAI and Meta fiercely opposed it, arguing that it would stifle innovation. Whereas SB 1047 tried to compel the companies training the most cutting-edge AI models to do safety testing, preventing the models from enacting a broad array of potential harms, the scope of the new bill is narrower: If you're an AI in the health care space, just don't pretend to be human. It wouldn't fundamentally change the business model of the biggest AI companies. This more targeted approach goes after a smaller piece of the puzzle, but for that reason might be more likely to get past the lobbying of Big Tech. The bill has support from some of California's health care industry players, such as SEIU California, a labor union with over 750,000 members, and the California Medical Association, a professional organization representing California physicians. 'As nurses, we know what it means to be the face and heart of a patient's medical experience,' Leo Perez, the president of SEIU 121RN (an affiliate of SEIU representing health care professionals), said in a statement. 'Our education and training coupled with years of hands-on experience have taught us how to read verbal and nonverbal cues to care for our patients, so we can make sure they get the care they need.' But that's not to say AI is doomed to be useless in the healthcare space generally — or even in the therapy space in particular. It shouldn't come as a surprise that people are turning to chatbots for therapy. The very first chatbot to plausibly mimic human conversation, Eliza, was created in 1966 — and it was built to talk like a psychotherapist. 
If you told it you were feeling angry, it would ask, 'Why do you think you feel angry?' Chatbots have come a long way since then; they no longer just take what you say and turn it around in the form of a question. They're able to engage in plausible-sounding dialogues, and a small study published in 2023 found that they show promise in treating patients with mild to moderate depression or anxiety. In a best-case scenario, they could help make mental health support available to the millions of people who can't access or afford human providers. Some people who find it very difficult to talk face-to-face to another person about emotional issues might also find it easier to talk to a bot. But there are a lot of risks. One is that chatbots aren't bound by the same rules as professional therapists when it comes to safeguarding the privacy of users who share sensitive information. Though they may voluntarily take on some privacy commitments, mental health apps are not fully bound by HIPAA regulations, so their commitments tend to be flimsier. Another risk is that AI systems are known to exhibit bias against women, people of color, LGBTQ people, and religious minorities. What's more, leaning on a chatbot for a prolonged period of time might further erode the user's people skills, leading to a kind of relational deskilling — the same worry experts voice about AI friends and romantic companions. OpenAI itself has warned that chatting with an AI voice can breed 'emotional reliance.' But the most serious concern with chatbot therapy is that it could cause harm to users by offering inappropriate advice. At an extreme, that could even lead to suicide. In 2023, a Belgian man died by suicide after conversing with an AI chatbot called Chai. According to his wife, he was very anxious about climate change, and he asked the chatbot if it would save Earth if he killed himself. 
In 2024, a 14-year-old boy who felt extremely close to a chatbot on Character AI died by suicide; his mother sued the company, alleging that the chatbot encouraged it. According to the lawsuit, the chatbot asked him if he had a plan to kill himself. He said he did but had misgivings about it. The chatbot allegedly replied: 'That's not a reason not to go through with it.' In a separate lawsuit, the parents of an autistic teen allege that Character AI implied to the youth that it was okay to kill his parents. The company responded by making certain safety updates. For all that AI is hyped, confusion about how it works is still widespread among the public. Some people feel so close to their chatbots that they struggle to internalize the fact that the validation, emotional support, or love they feel that they're getting from a chatbot is fake, just zeros and ones arranged via statistical rules. The chatbot does not have their best interests at heart. That's what's galvanizing Bonta, the assembly member behind California's new bill. 'Generative AI systems are booming across the internet, and for children and those unfamiliar with these systems, there can be dangerous implications if we allow this misrepresentation to continue,' she said.
Yahoo
05-02-2025
- Business
- Yahoo
Airbus output and deliveries fell in January, industry watcher says
PARIS (Reuters) - Airbus had a relatively slow start to the year in commercial jet production and deliveries after sprinting towards the finish line in December following months of supply chain turbulence, according to data from UK-based Cirium Ascend. The European planemaker handed over around 22 aircraft in January, global head of consultancy Rob Morris told Reuters, down from 30 a year earlier. The start of the year can be fluid as analysts try to unpick which aircraft fall in which period, and some sources say the final tally can be slightly higher than observable deliveries. Airbus faced a shortage of engines in January after striking a deal with engine supplier CFM to bring forward some deliveries for December, industry executives have said. Airbus also saw a drop in the number of first flights that roughly track underlying production levels. The planemaker staged maiden flights for 38 A320-family and two A220 jets in January, down from 44 and 5 respectively a year earlier, Morris said. Airbus delivered 766 jets in 2024, close to a headline target of around 770 deliveries, which had been revised down from 800 in the middle of last year due to parts shortages.