Cirrus G7 Plus Test Flight: I Landed a Plane With One Finger


CNET · 06-05-2025
Cirrus Aircraft has introduced Safe Return, a self-landing system that takes control and lands the plane if the pilot becomes incapacitated. CNET got an early hands-on look.
Video: Aviation Breakthrough: A Consumer Plane That Lands Itself (06:45)
The system, which is part of the new SR22 G7 Plus aircraft, is enabled with the press of a button. It lets the plane talk to air traffic control, find the nearest safe airport, land itself and shut down -- all on its own.
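Cirrus hasn't published how Safe Return sequences these steps, but conceptually it resembles a small state machine: announce the emergency, pick a suitable airport, fly to it, land, shut down. Here's a minimal Python sketch of that idea; the phases, airport-selection criteria, distances, and names are my own illustration, not Cirrus's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Phase(Enum):
    DIVERT = auto()      # turn toward the chosen airport
    APPROACH = auto()    # fly the approach, managing throttle and flaps
    LANDED = auto()      # touchdown and rollout
    SHUTDOWN = auto()    # engine shutdown on the runway

@dataclass
class Airport:
    ident: str
    runway_length_ft: int
    distance_nm: float

def safe_return(nearby: list[Airport], min_runway_ft: int = 2500) -> list[str]:
    """Illustrative autoland sequence: announce the emergency, pick the
    nearest airport with a long-enough runway, then step through the phases."""
    log = ["Squawk emergency and broadcast intentions to air traffic control"]
    suitable = [a for a in nearby if a.runway_length_ft >= min_runway_ft]
    target = min(suitable, key=lambda a: a.distance_nm)
    log.append(f"Diverting to {target.ident}, {target.distance_nm:.0f} nm away")
    log.extend(f"Phase: {p.name}" for p in Phase)
    return log

# Hypothetical example with the two airports from our flight
print("\n".join(safe_return([
    Airport("HPN", 6549, 12.0),   # Westchester County (distance invented)
    Airport("SWF", 11818, 38.0),  # Stewart International (distance invented)
])))
```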
While Safe Return has been part of the Vision Jet lineup since 2019, Cirrus says it had to completely rework the system for the SR22 G7 Plus. In this version, Safe Return doesn't just fly the plane; it also controls the throttle. I got to see this in action: watching the throttle lever move by itself felt like something straight out of Knight Rider, like KITT taking over the wheel.
Cirrus says the G7 Plus continues the company's legacy of safety, which includes its whole-airframe parachute system, standard across all models, which has saved more than 270 lives.
On the tech front, the G7 Plus brings features such as:
Electronic Stability Protection
Emergency Descent Mode
Over-the-air navigation updates
Integrated touchscreen flight deck
Automatic notification of emergency responders
Voice and visual emergency cues for passengers
Why this matters
The SR is a well-sized, beginner-friendly aircraft, ideal for learning to fly. Like any new skill, learning to fly comes with ups and downs, but with the G7 Plus, those "lows" are far less likely to end in disaster. One standout feature is its ability to detect abnormal flight patterns and autonomously take control to land the plane, even without manual activation of the Safe Return button. This is especially valuable when a pilot is flying solo and becomes incapacitated. During our flight, we used a demo version of the software that successfully landed the plane but didn't notify emergency responders or communicate with air traffic control. We departed from Westchester County Airport and touched down at Stewart International Airport.
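Cirrus hasn't detailed how the system decides that a flight pattern is abnormal, but conceptually such a trigger can be pictured as an inactivity-plus-envelope check: the pilot has stopped making inputs and the aircraft has left a normal attitude envelope. A purely hypothetical sketch, with invented thresholds:

```python
def should_auto_engage(seconds_since_input: float, bank_deg: float,
                       pitch_deg: float, *, idle_limit_s: float = 60.0,
                       max_bank_deg: float = 45.0,
                       max_pitch_deg: float = 20.0) -> bool:
    """Hypothetical trigger: engage Safe Return when the pilot has made no
    control inputs for a while AND the aircraft is outside a normal attitude
    envelope. Real certification criteria would be far more sophisticated."""
    unresponsive = seconds_since_input > idle_limit_s
    out_of_envelope = abs(bank_deg) > max_bank_deg or abs(pitch_deg) > max_pitch_deg
    return unresponsive and out_of_envelope

# e.g., 90 seconds of silence in a 50-degree bank trips the hypothetical check
assert should_auto_engage(90.0, bank_deg=50.0, pitch_deg=5.0)
```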
With nearly 20,000 public and private airports across the US, getting around in an SR22 could be safer and faster than driving.
The Cirrus SR22 G7 Plus -- the same model we flew -- is priced at $1,236,500. It is equipped with oxygen, Global Connect, air conditioning and a composite propeller.
In short, the Cirrus G7 Plus could be a glimpse into the future of personal aviation by making cutting-edge safety tech more accessible.

Related Articles

Ex-OpenAI Researcher Raises Fund to Back Emerging Talent

Bloomberg · 2 minutes ago

Leonis Capital has secured $25 million for a new fund to help uncover the next OpenAI. The fund is backed by institutional investors and a mix of researchers and executives from companies like Nvidia Corp, OpenAI and Anthropic, the San Francisco-based venture capital firm said in a statement. Leonis, founded in 2021, has fully deployed its first $10 million fund across AI-native startups including MaintainX and Motion.

Therapists Leverage AI For Mental Health By Delving Into Virtual Digital Twins Of Their Clients' And Patients' Minds

Forbes · 4 minutes ago

In today's column, I examine an advanced use of generative AI and large language models (LLMs) that entails therapists and other mental health professionals making use of so-called digital twins that are reflective of their respective clients and patients. The deal is this. Via the use of personas in generative AI, a feature that nearly all LLMs inherently include, it is presumably conceivable that you could devise a persona that somewhat matches and reflects a client or patient who is undergoing therapy. This is considered a digital twin, or more specifically, a medical digital twin. Yes, perhaps unnervingly, it seems possible to construct an AI-based simulated version of a client or patient that a therapist could then use to gauge potential responses and reactions to a planned line of psychological analyses and therapeutics. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.

Therapists And AI Usage

Many therapists and mental health professionals are opting to integrate AI into their practices and overtly use the AI as a therapeutic adjunct for their clients and patients (see my coverage at the link here). Even those therapists and mental health professionals who don't go down the route of incorporating AI are bound to encounter clients and patients who are doing so. Those clients and patients will often walk in the door with preconceived beliefs about how their therapy should go or is going, spurred and prodded by what AI has told them. In this sense, one way or another, therapists and mental health professionals are going to ultimately be impacted by the growing use of generative AI and LLMs.

Right now, there are already around 700 million weekly active users of ChatGPT. You might find it of notable interest that the top-ranked use by the public of contemporary generative AI and LLMs is to consult with the AI on mental health matters, see my coverage at the link here. If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. Many people cannot otherwise afford or gain access to human therapists, but access to generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might undercut mental health, doing so on a massive population-level scale, see my discussion at the link here.

Personas Are Coming To The Fore

Let's shift gears and focus on the use of AI-based personas.
I've repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least-known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so. In the context of mental health, I showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals, see the link here.

As a mental health professional, you ought to give serious consideration to making use of personas for your own self-training and personal refinement. For example, you might craft a persona that will pretend to be a person with deep depression. You could then use this persona to hone your therapeutic prowess regarding depression in patients and clients. It can be quite useful. Plus, there is no danger since it is just AI. You can try out various avenues to gauge what works and doesn't work. No harm, no foul. For my suggestions on how to write prompts that suitably create or cast personas, see the link here.
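As a rough sketch of what casting such a self-training persona can look like in practice, here is a minimal example using the OpenAI Python client. The model name, the prompt wording, and the fictional "Alex" character are illustrative placeholders of my own, not a vetted clinical prompt.

```python
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona for self-training -- a fictional character,
# deliberately NOT modeled on any real client or patient.
PERSONA = (
    "Role-play as 'Alex', a fictional 34-year-old experiencing persistent "
    "depressive symptoms: low energy, guilt, and social withdrawal. Stay in "
    "character and answer as Alex would in a therapy session."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "How have you been sleeping this week?"},
    ],
)
print(response.choices[0].message.content)
```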
Digital Twins And Humans

There is specialized parlance in the tech field that has been around for many years and refers to the concept and practice of using computers to simulate a real object or entity. The parlance is that you are crafting and making use of a digital twin. This became popular when machinery used on factory floors could be modeled digitally.

Why would a digital model or simulation of a factory assembly machine be useful? Easy-peasy, there are lots of crucial benefits. One is that before you even construct the machine, you can try it out digitally. You can make sure that the machine will hopefully work suitably once it is constructed and put into operation. Another advantage is that you can readily make lengthy runs of the digital twin and predict when the real version might break down. This gives a heads-up to the maintenance crew working on the factory operations. They get estimates of the likely time at which the machine will potentially start to degrade.

Recently, there has been a realization that digital twins can be used in other, more creative ways, such as modeling or simulating human beings. This is often referred to as a medical digital twin (note that other names and phrases are sometimes used too).

Medical Digital Twins

In a research article entitled 'Toward Mechanistic Medical Digital Twins' by Reinhard Laubenbacher, Fred Adler, Gary An, Filippo Castiglione, Stephen Eubank, Luis L. Fonseca, James Glazier, Tomas Helikar, Marti Jett-Tilton, Denise Kirschner, Paul Macklin, Borna Mehrad, Beth Moore, Virginia Pasour, Ilya Shmulevich, Amber Smith, Isabel Voigt, Thomas E. Yankeelov, and Tjalf Ziemssen, Frontiers in Digital Health, March 7, 2024, these salient points were made (excerpts):

Please note that as emphasized above, the advent of medical digital twins is still early on. There is plenty of controversy associated with the topic. One major qualm is that with a factory floor machine, you can pretty much model every physical and mechanical aspect, but the same can't be said about modeling human beings. At least not yet. Lucky or not, we seem to be more complex than everyday machines. Score a point for humankind.

Personas As Digital Twins

When you think about devising a medical digital twin, there are customarily two major elements involved: modeling the person's body (their physiology) and modeling the person's mind (their psychology). Some would insist that you cannot adequately model the mind without also modeling the body. It's that classic mind-body debate; see my analysis at the link here. If you dogmatically believe that a mind is unable to be sufficiently modeled without equally modeling the body, I guess that the rest of this discussion is going to give you heartburn. Sorry about that.

We are going to make a brash assumption that you can use generative AI to aid in crafting a kind of model or simulation of a person's mind, at least to the extent that the AI will seek to exhibit similar personality characteristics and overall psychological characteristics of the person. So, in that sense, we are going to pursue a medical digital twin that only focuses on the second of the two major elements. Does that mean that the AI-based digital twin is missing a duality ingredient that wholly undercuts the effort? I'm going to say that it doesn't, but you are welcome to take the posture that it does. We can amicably agree to disagree.

On a related facet, there are advocates of medical digital twins who would insist that a medical digital twin must encompass the bodily aspects, else it isn't a medical digital twin at all. In that case, I guess we might need to drop the word 'medical' from this type of digital twin. Just wanted to give you a heads-up on these controversies.

Personas Of Your Clients Or Patients

Moving on, let's further consider the avenue of creating a digital twin of your client or patient so that you can utilize the AI to ascertain your line of therapy and treatment. The first step involves collecting data about the person. The odds are that a therapist will already have obtained an extensive history associated with a client or patient. Those notes and other documents could be used to feed the AI. The idea is that you will provide that data to the generative AI, and it will pattern-match and craft a persona accordingly. You might also include transcripts of your sessions. Feeding this data into AI is often done via a technique known as retrieval-augmented generation (RAG), see my explanation at the link here.
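In rough terms, a RAG setup embeds the notes, retrieves the passages most relevant to a question, and supplies them to the model alongside the persona prompt. Here is a minimal sketch, assuming the OpenAI Python client and numpy; the note snippets are invented and the embedding model name is a placeholder.

```python
import numpy as np
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed texts via an OpenAI-style embeddings endpoint (model is a placeholder)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Invented, ALREADY-ANONYMIZED note snippets -- never feed raw client data.
notes = [
    "Session 3: reported improved sleep after behavioral activation.",
    "Session 5: avoiding social events; expressed guilt about family.",
    "Session 7: responded poorly to direct challenges to core beliefs.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k notes most similar to the query, by cosine similarity."""
    doc_vecs = embed(notes)
    q_vec = embed([query])[0]
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    return [notes[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved context would then be appended to the persona prompt above.
context = "\n".join(retrieve("How does the client react to being challenged?"))
```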
Please be very cautious in taking this type of action. Really, really, really cautious. Many therapists are already willy-nilly entering data about clients and patients into off-the-shelf, publicly available LLMs. The problem is that there is almost no guarantee of data privacy with these AIs, and you could readily be violating confidentiality and HIPAA provisions. You might also need to obtain consent from the client or patient, depending on various factors at play. For more, see my discussion at the link here and the link here. Make sure to consult with your attorney on these serious matters.

One approach is to stridently anonymize the data so that the client or patient is unrecognizable via the data you have entered. It would be as though you are simply creating a generic persona from scratch. Whether that will pass a legal test is something your legal counsel can advise you on. Another approach is to set up a secure private version of an LLM, but that, too, can have legal wrinkles.

More On Personas As Digital Twins

Yet another approach is to merely and shallowly describe the persona based on your overall semblance of the person. This is somewhat similar to my earlier point that you can use personas by simply entering a prompt that the devised persona is supposed to represent a person with depression. That's a vague indication and would seem untethered to a specific person. The downside, of course, is that the surface-level persona might not be of much help to you.

What are you going to do with whatever persona you craft? You could try to figure out the emotional triggers of the person, as represented via the persona. What kind of coping style do they have? How does their coping mechanism react to the therapy you have in mind? All sorts of therapy-oriented strategies and tactics can be explored and assessed.

In essence, you are trying out different interventions on the persona, i.e., the digital twin. Maybe you are mulling over variations of CBT techniques and want to land on a particular approach. Perhaps you often use exposure therapy and are unsure of how that will go over with the client or patient. This provides a no-risk means of determining your therapy in a simulated environment and prepares you for sessions with the actual person.
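To make that concrete, a simulated trial of candidate interventions might look like the following sketch. It reuses the hypothetical PERSONA prompt and retrieved context from the earlier examples, and the intervention wordings are invented for illustration.

```python
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-ins for the persona and retrieved context from the earlier sketches.
PERSONA = "Role-play as 'Alex', a fictional client with depressive symptoms."
context = "Session 7: responded poorly to direct challenges to core beliefs."

# Invented intervention openers to trial against the digital twin.
interventions = [
    "Let's examine the evidence for the thought that you're a burden.",  # CBT-style
    "This week, could you plan one brief social outing?",  # behavioral activation
    "Tell me more about the guilt you mentioned around your family.",  # exploratory
]

for opener in interventions:
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": f"{PERSONA}\nBackground notes:\n{context}"},
            {"role": "user", "content": opener},
        ],
    )
    print(f"\n>>> {opener}\n{reply.choices[0].message.content}")
```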
Don't Fall For The Persona

I trust and hope that any therapist or mental health professional going the route of using a persona as a digital twin is going to keep their wits about them. Ordinary users of AI who use personas can readily go off the deep end and believe that the persona is real. Do not let that same fate befall you. The persona is merely the persona. Period, end of story.

You cannot assume that the persona is giving you an accurate reading of the person. The AI could be completely afield in terms of how the person will actually respond and react. Expect that the AI will almost certainly overrepresent some traits, underrepresent other traits, and be convincing as it does so. Convincingness is the trick involved. Contemporary generative AI is so seemingly fluent that you are drawn into a mental trap of believability. Inside your head, you might hear this internal voice: 'It must be showing me the true inner psyche of my client or patient! The AI is working miracles at modeling the person. Wow, AI is utterly amazing.'

You must resist the urge to become over-reliant on the digital twin. Over-reliance is a likely possibility. Here's how. You use the persona. After doing so, you later meet with the client or patient. Everything the AI indicated as to responses and reactions appears to mirror what the person says and does during the session. Awesome. You decide to keep using the persona. Over and over, you use the persona. Voila, you are hooked.

The persona has led you down a primrose path. The seemingly uncanny portrayal has been spot-on. The problem is that when the client or patient diverges from the persona, your thinking gets turned around: the person must be wrong, because the persona was always right. In other words, the person is supposed to be acting as the persona does. The world has gone topsy-turvy. But the fault is yours, because you have forsaken your therapist mindset and allowed AI to capture and defeat your real-world acuity. That's bad news. Do not let that happen.

Additional Twists And Turns

There is a lot more to consider when using AI as a digital twin in a mental health context. I'll be covering more in a series of postings. Be on the watch.

One quick point to get your mental juices flowing is this. Suppose that you have gotten written consent from the client or patient, and they know that you are using AI to depict a persona of them. The person comes to one of your later sessions and starts to suspect that you are proceeding based on what the AI told you. They worry that the AI is portraying them in some unpleasant fashion. Furthermore, they now insist that you let them access the persona. They want to see how it represents them.

Mull that over and think about how you would contend with that potential nightmare scenario. It's a doozy. It could arise.

A final thought for now. Albert Einstein famously made this remark: 'My mind is my laboratory.' Yes, that's abundantly true. In the case of mental health therapy, besides your mind being your laboratory, it turns out that AI can be your laboratory too. Proceed with aplomb.

Former Intel CEO lays out radical plan to rescue the company -- suggests ousting Lip-Bu Tan and asks for $40 billion investment from Nvidia, Apple, and others

Yahoo · 7 minutes ago

Former Intel CEO and chairman Craig Barrett has laid out a blunt blueprint to rescue both Intel and, by extension, America's ability to produce state-of-the-art chips. He has warned that without urgent funding, Intel risks ceding critical ground to overseas rivals, a dire prospect given that the company is central to America's contemporary semiconductor ambitions. He stressed that leadership in chipmaking requires heavy investment years in advance of demand.

Writing in Fortune, Barrett argued that Intel remains the sole US company capable of matching Taiwan's TSMC at the leading edge, but lacks the capital to scale and modernize its production. With government CHIPS Act funding insufficient to close the gap, he insists the only realistic source of cash is Intel's own customers. He proposed that Intel's eight largest customers, including Apple, Google, and Nvidia, should each contribute $5 billion in return for guaranteed domestic supply and pricing leverage against Asian competitors. Barrett said neither TSMC nor Samsung intends to bring their most advanced manufacturing to US soil, posing long-term risks for American technology companies dependent on imported chips. 'The only place the cash can come from is the customers,' he wrote, adding that leadership in manufacturing requires proactive investment years ahead of market demand.

The proposal comes as Intel faces significant headwinds. Under CEO Lip-Bu Tan, appointed in March 2025, the company has been cutting tens of thousands of jobs and cancelling major projects amid steep losses: $18.8 billion in 2024 and another $2.9 billion in Q2 2025. Intel has also reportedly struggled with severe yield issues on its 18A manufacturing process, delaying key products and forcing a pivot to its future 14A node. Barrett criticized the current leadership's reluctance to invest in 14A without pre-existing customer contracts, calling the approach 'a joke' and warning that it risks falling permanently behind.

Barrett's plan hinges on two pillars: immediate investment in critical technologies like High-NA EUV and backside power delivery, and potential US tariffs on imported advanced chips to stimulate domestic demand. He rejected calls to split Intel into separate design and manufacturing entities, arguing that the core problem is capital, not structure.

His comments arrive as Intel navigates political turbulence. President Donald Trump has publicly called for Tan's resignation over alleged ties to China, with the two meeting at the White House today. While not directly addressing the leadership dispute, Barrett framed his proposal as essential for both national security and supply chain stability, warning that the US cannot afford to let Intel's manufacturing leadership slip away.
