
Related Articles


CBS News
9 minutes ago
Re-Coding the Future
This content was provided by Acumen Media for Ascendion. This advertiser content was paid for and created by Acumen. Neither CBS News nor CBS News Brand Studio, the brand marketing arm of CBS News, was involved in the creation of this content.

Throughout history, new forms of energy have shifted the gears of human progress, profoundly changing how people live and work. Ancient societies started with muscle, moved to animals, then harnessed water, steam and, eventually, electricity. Today, AI is emerging as our next energy source, like fire, electricity or steam in past revolutions, to power productivity in the knowledge economy. Horses, steam engines and power plants all still matter, of course, but the world is increasingly running on code and software.

What some are calling Economy 4.0 is the AI-powered transformation of the knowledge economy, with software reordering how value is created, delivered and scaled. AI is the new power for business, but software is the power grid. It's the axle around which our knowledge economy is starting to turn. Software now shapes how we communicate, manage money, access healthcare, shop and learn, along with nearly every other aspect of our modern lives.

"Software is one of the key elements of making WellSky successful," said Joel Dolisy, Chief Technology Officer. "There is a lot of people interaction that is all enabled through software. [It] is a big, big important part of how we make that happen."

And yet, much of the code that shapes the economy is old enough to collect Social Security. Nearly $3 trillion worth of global business operations still run on COBOL daily. Despite the increasing criticality of code, the world still relies on software that may have been developed by our grandparents. The impact: a chilling effect on disruptive innovation.

Vimala Suresh, Vice President, CareSource, said, "If that file doesn't get processed in time, the member can't get the surgery they're eligible for. This could be life-threatening. That's how important software is in healthcare."

Software development methods, processes, tools and mindsets from the past are crumbling under the pressure of AI-powered systems and potential. This presents leaders with a once-in-a-career opportunity. That's where Ascendion comes in. The company says it's unleashing intelligent agents that enhance humans as it builds software faster and better. Now, engineers using AAVA, a platform of agents aligned to software development and management, are using AI to deliver better code at half the cost and twice the speed.

To unlock the potential value, Ascendion says, leaders across IT, operations, lines of business, procurement, finance and the C-suite must lean into new ways of designing, building and managing software. This isn't theory or buzzwords. It's happening now using what Ascendion calls Engineering to the Power of AI (EngineeringAI). This means companies can do more with less, allowing more time for talented humans to create, innovate, imagine and lead.

David Park, President, Commercial Bank, Axos Bank, said, "With the speed of development and the reduction of cost through AI, we are able to personalize the banking experience. Hyper-personalization is what's most exciting about financial services and the software evolution we are going through."

Marc Andreessen was right when he said, "Software is eating the world." Now, thanks to AI, software engineering is the first course. The data is adding up. Gartner predicts AI software will reach $297 billion by 2027. Researchers are uncovering a double-digit decrease in jobs related to writing and coding. A growing number of early-adopter CIOs are already reporting that AI can yield efficiency gains of 20% to 40%. Innovation, growth, savings and transparency aren't new, but this is: AI is moving from the computer science lab to every business.

The new power revolution from AI is already starting. The first major disruption where real value is being delivered today comes from how software is being built. AI is changing not just what is built, but how it's built.

Every new power revolution, from fire, water wheels and steam to electricity and now AI, comes with risks, disruptions and unintended consequences. AI's impact on software engineering will be profound, and it needs to be managed so we advance safely and civilly. In the coming years and decades, AI agents will disrupt countless business processes, consumer experiences and jobs. But today, software engineering is being re-coded by the bot. Business leaders, policymakers and employees can embrace the potential of AI and tame the new machines so that value generation tips the scales away from risk, fear and failed projects.
Yahoo
37 minutes ago
AT&T to boost customer experience with new Office@Hand
AT&T has augmented the existing team collaboration features in AT&T Office@Hand (Office@Hand) with contact centre capabilities that enable employees inside and outside a contact centre to influence the customer experience. They include Office@Hand Contact Center, targeted at contact centre agents, and Office@Hand RingSense, aimed at sales and support teams. With the additions, AT&T aligns itself with an emerging trend of coupling team collaboration and contact centre features in a single offer. Although the additions decidedly enhance Office@Hand, they arrive late in the game compared to rivals, and the revitalised offer remains AT&T's single piece in the team collaboration and contact centre spaces.

AT&T drastically streamlined its team collaboration and contact centre portfolio

Exactly three years ago, in 2022, AT&T drastically streamlined its team collaboration and contact centre portfolio by terminating the resale of licenses for partner platforms including Zoom, Cisco, Five9, Avaya, and Nice. The sole exception was a limited, hosted, white-label version of Office@Hand, RingCentral's EX communication and team collaboration platform. While AT&T narrowed its portfolio in part because of the thin margins associated with reselling licenses, in doing so it severely curtailed revenue from two areas that continue to grow significantly and diluted its appeal to enterprise customers. AT&T also placed itself at a disadvantage to other service providers such as Verizon, Lumen, and Spectrum, which to this day provide services from a more expansive variety of vendors, including the likes of Cisco, Microsoft, and Zoom.

Tough competition

In entering the contact centre space via the new capabilities, AT&T also has the misfortune of placing itself in competition with those and other vendors, each of which has a well-established contact centre estate. In addition, those vendors have benefitted from a balance of power that has shifted in their favour, with team collaboration platforms moving from service-provider-hosted solutions such as the former Cisco Hosted Collaboration Solution (HCS) and the former Microsoft Skype for Business to contemporary vendor-hosted platforms such as Cisco Webex and Microsoft Teams.

The timing is right for AT&T as contact centres change

Despite the rough patches associated with the announcement, AT&T is equipping itself with contact centre capabilities at a most opportune time. They align with a vibrant transformation that contact centres have been undergoing, with the concept of a 'contact centre' yielding to the broader concept of 'customer experience.' Contact centres are converting from having only human agents to including AI agents, from reactive to proactive, from transaction-oriented to relationship-oriented, and from generic to deeply personalised. AT&T's new capabilities will help organisations make the transition.

Like the contact centre space, AT&T is undergoing a profound transformation. The AT&T Business unit is centring its identity on providing converged connectivity through 5G and fibre. Across AT&T corporate, a culture is being instilled that places a premium on personal contribution to the organisation as well as alignment with corporate values, and that uses data to measure conformance to both attributes. It will be interesting to see what role the refashioned AT&T Office@Hand plays in this altered environment.

"AT&T to boost customer experience with new Office@Hand" was originally created and published by Verdict, a GlobalData owned brand.


Forbes
2 hours ago
Analysis Of Whether Generic Generative AI Falls Within The Purview Of Providing Therapy And Psychotherapeutic Advice
In today's column, I examine a seemingly straightforward question: can contemporary generic generative AI and large language models (LLMs) be said to be providing therapy and psychotherapeutic advice?

The deal is this. When you use ChatGPT, Claude, Llama, Gemini, Grok, and other such popular generative AI systems, you can readily engage the AI in conversations about mental health. This can be of a general nature. It can also be a very personal dialogue. Many people are using AI as their de facto therapist, doing so with nary a thought of reaching out to a human therapist or mental health professional.

Does the use of those LLMs in this manner signify that the AI is proffering services constituting therapy and psychotherapy? You might declare that yes, of course, that is precisely what the AI is doing. It is blatantly obvious. But the AI makers who build and maintain these systems are undoubtedly reluctant to agree with that plain-stated assessment or ad hoc opinion. You see, new laws are starting to be enacted that bear down on generic AI that provides unfettered services within the scope of therapy and psychotherapy. AI makers are likely to desperately contend that their generic AI falls outside that regulatory scope. The question arises whether they will be successful in making that kind of tortuous argument. Some would say they don't have a ghost of a chance. Others believe they can dance their way around the legally troubling matter and come out scot-free.

Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come with these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.

State Law Ups The Ante

I recently analyzed a newly enacted law on AI for mental health that was signed into law in Illinois on August 1, 2025, see my coverage at the link here. This new law is quite a doozy. The reason it is a doozy is that it lays out violations and penalties for AI that provides unfettered therapy and psychotherapy services. The implication is that any generic generative AI, such as the popular ones I noted earlier, is now subject to potential legal troubles. Admittedly, the legal troubles right now would seemingly be confined to conduct within Illinois, since this is a state law and not a broader federal law. Nonetheless, in theory, the use of generic generative AI by users in Illinois that, by happenstance, provides therapy or psychotherapeutic advice is presumably within the scope of getting dinged by the new law.
You can bet your bottom dollar that similar new laws are going to be popping up in many other states. The clock is ticking. And the odds are that this type of legislation will also spur action in the U.S. Congress and potentially lead to federal laws of a like nature. It all could have a tremendous impact on AI makers, along with major impacts on how generative AI is devised and made available to the public.

All in all, few realize the significance of this otherwise innocuous and under-the-radar concern. My view is that this is the first tiny snowball that is starting to roll down a snowy hill and soon will be a gigantic avalanche that everybody will be talking about. Time will tell.

Background On AI For Mental Health

I'd like to set the stage before we get into the particulars of this heady topic. You might be vaguely aware that the top-ranked public use of generative AI and LLMs is to consult with the AI on mental health considerations, see my coverage at the link here. This makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis. Compared to using a human therapist, the AI usage is a breeze and readily undertaken.

AI makers already find themselves in a bit of a pickle over this usage of their AI. The deal is this. By allowing their AI to be used for mental health purposes, they are opening the door to legal liability if their AI gets caught dispensing inappropriate guidance and someone suffers harm accordingly. So far, AI makers have been relatively lucky and have not yet gotten severely stung by their AI serving in a therapist role.

You might wonder why the AI makers don't just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor for people to use generative AI. Shutting it off would mean giving up the cash cow, akin to capping an oil well that is gushing liquid gold.

One step the AI makers have already taken is to emphasize in their online licensing agreements that users aren't supposed to use the AI for mental health advice, see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI in this supposedly off-limits capacity. Some would insist this is a wink-wink effort to play both sides of the gambit at the same time, see my discussion at the link here.

The Services Question

My commentary on these sobering matters is merely a layman's viewpoint. Make sure to consult with your attorney to understand any legal ramifications pertaining to your situation and any potential legal entanglements regarding AI and mental health.

Let's take a look at the Illinois law that was recently passed. According to the Wellness and Oversight for Psychological Resources Act, known as HB1806, these two elements are a core consideration (excerpts):

Regarding the use of unregulated AI in this realm, a crucial stipulation about AI usage for mental health purposes is worded this way in the Act (excerpt):

There are varying ways to interpret this wording.
One interpretation is that if an AI maker has a generic generative AI that also happens to provide mental health advice, and if this takes place without the supervision of a licensed professional, and it occurs in Illinois, the AI maker is seemingly in violation of this law. The AI maker might not even be advertising that their AI can be used that way, but all it takes is for the AI to act in such a manner (since it thereby provides or offers the service).

Generic AI Versus Purpose-Built AI

Closely observe that the new law stipulates that the scope involves 'therapy or psychotherapy services'. This brings us back to my opening question.

Before we unpack the thorny issue, I'd like to clarify something about the topic of AI for mental health. You might have noticed that I referred to generic generative AI. What does the word 'generic' mean in this context? Let me explain.

First, there are customized generative AI systems and AI-based apps that are devised specifically to carry out mental health activities. Those are specially built for that purpose. It is the obvious and clear-cut intent of the AI developer that they want their AI to be used that way, including that they are likely to advertise and promote the AI for said usage. See my coverage on such purpose-built AI for mental health at the link here and the link here.

In contrast, there is generic generative AI that just so happens to have a capability that encompasses providing mental health advisement. Generic generative AI is intended to answer all kinds of questions and delve into just about any topic under the sun. The AI wasn't especially tuned or customized to support mental health guidance. It just happens to be able to do so.

I am focusing here on the generic generative AI aspects. The custom-built AI entails somewhat similar concerns but has its own distinct considerations. I'll be going into those facets in an upcoming posting, so be on the watch.

Definitions And Meaning Are Crucial

An AI maker might claim that they aren't offering therapy or psychotherapy services and that their generic generative AI has nothing to do with therapy or psychotherapy services. It is merely AI that interacts with people on a wide variety of topics. Period, end of story.

The likely retort is that if your AI is giving out mental health advice, it falls within the rubric of therapy and psychotherapy services (attorneys will have a field day on this). Thus, trying to dodge the law by being sneaky about wording isn't going to get you off the hook. If it walks like a duck and quacks like a duck, by gosh, it surely is a duck.

One angle on this disparity or dispute would be to nail down what the meaning and scope of therapy and psychotherapy encompass. Before we look at what the Illinois law says, it is useful to consider definitions from a variety of informed sources.

Definitions At Hand

According to the online dictionary of the American Psychological Association (APA), therapy and psychotherapy are defined this way:

The Mayo Clinic provides this online definition:

The National Institutes of Health (NIH) provides this online definition:

And the popular website and publication Psychology Today has this online definition:

Interpreting The Meanings

Those somewhat informal definitions seem to suggest that the nature of therapy and psychotherapy includes these notable elements: (1) addressing mental health problems, (2) using 'talk' or interactive chatting as a mode of communication, and (3) being undertaken by a mental health professional.
Let's see what the Illinois law says about therapy and psychotherapy (excerpts per the Act):

It is interesting and notable that some carve-outs were made. The scope appears to exclude peer support, along with excluding religious counseling.

Contemplating The Matter

It might be worthwhile to noodle on how an AI maker might seek to avoid repercussions from their generic generative AI getting caught up in this messy milieu.

First, if therapy and psychotherapy were defined as requiring that a mental health professional be involved, this would provide an angle of escape. Why so? Oddly enough, an AI maker could simply point out that their AI doesn't employ or otherwise make use of a mental health professional. Therefore, the AI cannot be providing these said services, since it fails to incorporate a supposed requirement. Notably, the Illinois law seems not to fall into that trap, since it simply indicates that there are services and does not name a mental health professional as part and parcel of the definition. Some of the other definitions that I listed would potentially be in a murkier condition due to explicitly mentioning a required role for a trained professional or other similar verbiage.

Second, an AI maker might try to claim that their generic generative AI is more akin to peer support. The beauty there is that since peer support is a carve-out, perhaps their AI is no longer within scope. It would be a tough row to hoe. Peer support stipulates that individuals are involved. At this juncture, we do not genuinely recognize AI as having legal personhood, see my discussion at the link here, and therefore trying to assert that AI is an 'individual' would be an extraordinary stretch.

Third, an AI maker might go the route of claiming that their generic generative AI is a form of religious counseling. The advantage would be that religious counseling is a carve-out. In that case, if the AI were said to be doing religious counseling when providing mental health advice, the AI maker would apparently be free of the constraint. This appears to be a failing strategy for several reasons, including that the AI is presumably not a clergy member, pastoral counselor, or other religious leader (maybe a desperate attempt could be made to anoint the AI in that fashion, but this would seem readily overturned).

Caught In A Web

Other potential dodges or efforts to skirt the coming set of laws will indubitably be a keen topic for legal beagles and legal scholars. If an AI maker doesn't find a viable workaround, they are going to be subject to various fines and penalties. Those could add up.

For example, Illinois has a population of approximately twelve million people. Of those, suppose that half are using generic generative AI (that's a wild guess), and that half of those use the AI for mental health aspects from time to time (another wild guess). That would be three million people, and each time they use the AI for that purpose might be construed as a violation. If each person does so once per week, that's twelve million violations in a month. The Illinois law sets a maximum fine of $10,000 per violation. We'll imagine that instead of the maximum, an AI maker gets fined a modest $1,000 per violation. In one month, based on this spitball conjecture, that could be $12 billion in fines. Even the richest tech firms are going to pay attention to that kind of fine.
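For readers who want to check or vary the spitball math, here is a minimal Python sketch of that back-of-envelope estimate. The function name and structure are mine for illustration; the inputs (twelve million residents, the two half-and-half adoption guesses, once-per-week usage, and the modest $1,000 fine versus the $10,000 statutory maximum) are the column's own illustrative assumptions, not measured data.

```python
# Back-of-envelope estimate of hypothetical monthly fines under the Illinois law.
# All inputs are the column's illustrative guesses, not measured data.

def estimated_monthly_fines(
    population: int = 12_000_000,        # approximate Illinois population
    genai_adoption: float = 0.5,         # wild guess: half use generative AI
    mental_health_share: float = 0.5,    # wild guess: half of those use it for mental health
    uses_per_week: float = 1.0,          # assumed usage frequency per person
    weeks_per_month: float = 4.0,
    fine_per_violation: float = 1_000.0, # modest figure; the statutory max is $10,000
) -> float:
    """Return the hypothetical total monthly fines in dollars."""
    users = population * genai_adoption * mental_health_share          # 3 million people
    violations_per_month = users * uses_per_week * weeks_per_month     # 12 million violations
    return violations_per_month * fine_per_violation

print(f"${estimated_monthly_fines():,.0f}")  # -> $12,000,000,000, the column's $12 billion
```

Swapping in the $10,000 statutory maximum, or adding more states, scales the total linearly, which is the point of the next paragraph.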
Plus, once other states go the same route, you can multiply those figures by however many additional states follow suit and by however harshly they opt to penalize AI that goes over the line.

Crucial Juncture At Hand

An ongoing and vociferously heated debate concerns whether the use of generic generative AI for mental health advisement on a population-level basis is going to be a positive outcome or a negative outcome for society. If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to generic generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might be the worst destroyer of mental health in the history of humanity. See my analysis of the potential widespread impacts at the link here.

So far, AI makers have generally had free rein with their generic generative AI. It seems that the proverbial chickens have finally come home to roost. Gradually, new laws are going to be enacted that seek to prohibit generic generative AI from dispensing mental health advice absent a human therapist performing the counseling. Get yourself primed and ready for quite a royal battle that might determine the future mental status of us all.