Latest news with #AIresearch


Forbes
2 days ago
- Health
- Forbes
Stanford Initiative Leverages AI To Robustly Transform Mental Health Research And Therapy
In today's column, I explore the latest efforts to transform mental health research and therapy so that they are less subjective and more objective in their ongoing pursuits. This kind of transformation is especially spurred via the use of advanced AI, including leveraging deep learning (DL), machine learning (ML), artificial neural networks (ANN), generative AI, and large language models (LLMs). It is a vital pursuit well worth undertaking. Expectations are strong that great results and new insights will be gleaned accordingly. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS 60 Minutes, see the link here. If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis that also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH, see the link here. Indeed, today's discussion is substantively shaped around a recent seminar conducted by AI4MH.

Let's begin with a cursory unpacking of what is generally thought of as a type of rivalry or balance between the subjective and the objective. The conceptualization of 'objective' consists of a quality or property intended to convey that there are hard facts, proven principles and precepts, clear-cut observations, reproducible results, and other highly tangible systemic elements at play. In contrast, 'subjective' is characterized as largely speculative, sentiment-based, open to interpretation, and otherwise less ironclad. Where does the consideration of subjective vs. objective often arise? You might be surprised to know that the question of subjective versus objective has a longstanding root in two fields of endeavor, namely psychology and physics. Yes, it turns out that psychology and physics have historically been domains that richly illuminate the dialogue regarding subjective versus objective. The general sense is that people perceive physics as tilted more toward the objective and less toward the subjective, while the perception of psychology is that it is a field angled toward the subjective side more so than the objective.

Turn back the clock to the 1890s, when the famed Danish professor Harald Høffding made these notable points about psychology and physics (source: 'Outlines of Psychology' by Harald Høffding, London: Macmillan, 1891): You might notice the rather stunning point that psychology and physics are themselves inclusive of everything that could potentially be the subject of human research.
That's amazingly alluring to those in the psychology and physics fields, while perhaps not quite as agreeable to all other domains. In any case, on the thorny matter of subjective versus objective in the psychology realm, we can recall Pavlov's remarks made in 1930: Pavlov's comments reflect a longstanding aspiration of the field of psychology to ascertain and verify bona fide means to augment the subjective aspects of mental health analysis with more exacting objective measures and precepts. The final word on this goes to Albert Einstein as to the heady matter: It's always an uphill battle to refute remarks made by Einstein, so let's take them as they are.

Shifting gears, the topic of psychology and the challenging properties of subjective vs. objective was a major theme during a recent seminar undertaken by Stanford University on May 28, 2025, at the Stanford campus. Conducted by the initiative known as AI4MH (Artificial Intelligence for Mental Health), see the link here, within the Stanford School of Medicine, Department of Psychiatry and Behavioral Sciences, the session was entitled 'Insights from AI4MH Faculty: Transforming Mental Health Research with AI' and a video recording of the session can be found at the link here. The moderator and the three speakers consisted of: I attended the session and will provide a recap and analysis here. In addition, I opted to look at various research papers by the speakers. I draw on selected aspects from the papers to further whet your appetite for learning more about the weighty insights provided during the seminar and based on their respective in-depth research studies. I'll proceed next in the same sequence as occurred during the seminar, covering each speaker one at a time, and then offer some concluding thoughts.

The human brain consists of around 86 billion neurons and approximately 100 trillion synapses. This elaborate organ in our noggin is often referred to in the AI field as the said-to-be wetware of humans. That's a cheeky send-up of computer-based hardware and software. Somehow, in ways that we still don't quite understand, the human brain or wetware gives rise to our minds and our ability to think. In turn, we are guided in what we do and how we act via the miracle of what's happening in our minds. For my related discussion about the Theory of Mind (ToM) and its relationship to the AI realm, see the link here.

In the presentation by Dr. Kaustubh Supekar, he keenly pointed out that the brain-mind indubitably is the source of our mental health and ought to be closely studied when trying to ascertain the causes of mental disorders. He and his team are using AI to derive brain fingerprints that can be associated with mental disorders. It's quite exciting to envision that we could eventually end up with a tight mapping between the inner workings of the brain-mind and how mental disorders manifest within the brain-mind. Imagine the incredible possibilities of anticipating, remedying, or at least aiding those incurring mental disorders. In case you aren't familiar with the formal definition of what mental disorders consist of, I covered the DSM-5 guidelines in a posting on AI-driven therapy using DSM-5, see the link here, and included this definition from the well-known manual: DSM-5 is a widely accepted standard and is an acronym for the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, which is promulgated by the American Psychiatric Association (APA).
The DSM-5 guidebook or manual serves as a venerated professional reference for practicing mental health professionals.

In a recent research article for which Dr. Kaustubh Supekar was the lead author, entitled 'Robust And Replicable Functional Brain Signatures Of 22q11.2 Deletion Syndrome And Associated Psychosis: A Deep Neural Network-Based Multi-Cohort Study' by Kaustubh Supekar, Carlo de los Angeles, Srikanth Ryali, Leila Kushan, Charlie Schleifer, Gabriela Repetto, Nicolas Crossley, Tony Simon, Carrie Bearden, and Vinod Menon, Molecular Psychiatry, April 2024, these salient points were made (excerpts): The study aimed to find relationships between a particular chromosomal deletion, known as DiGeorge syndrome or technically as 22q11.2 deletion syndrome (DS), and the brain patterns of affected individuals, linking those patterns to common psychosis symptoms. The brain-related data was examined via the use of an AI-based artificial neural network (a specialized version involving space-time or spatiotemporal analyses underlying the data, referred to as stDNN). This and other such studies are significant steps in the worthwhile direction of mapping brain-mind formulations to the nature of mental disorders.
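To give a concrete, if simplified, sense of what a spatiotemporal deep neural network pipeline can look like, here is a minimal illustrative sketch in Python using PyTorch. It is not the published stDNN from the Supekar et al. study; the layer sizes, the number of brain regions, and the gradient-based 'fingerprint' step are all assumptions, chosen only to show the general shape of classifying regional fMRI time series into diagnostic groups and then asking which regions drove the prediction.

```python
# Illustrative sketch only -- not the published stDNN from Supekar et al.
# Assumes preprocessed fMRI data shaped as (subjects, regions, timepoints),
# with binary labels (e.g., 22q11.2DS vs. control); all sizes are made up.
import torch
import torch.nn as nn

class SpatioTemporalClassifier(nn.Module):
    def __init__(self, num_regions=200, num_classes=2):
        super().__init__()
        # 1D convolutions slide over time, treating brain regions as channels,
        # so the model learns joint spatial (cross-region) and temporal patterns.
        self.features = nn.Sequential(
            nn.Conv1d(num_regions, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (batch, regions, timepoints)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = SpatioTemporalClassifier()
scans = torch.randn(8, 200, 120)       # 8 subjects, 200 regions, 120 timepoints
logits = model(scans)                  # (8, 2) class scores

# A crude "fingerprint": the gradient of the predicted class score with respect
# to the input highlights which regions/timepoints most influenced the decision.
scans.requires_grad_(True)
model(scans)[:, 1].sum().backward()
region_importance = scans.grad.abs().mean(dim=(0, 2))  # one value per region
```

The appeal of this general setup, as the column notes, is that the learned signature is reproducible and inspectable, which is the kind of objective anchor the seminar speakers were calling for.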
Faithful readers might recall my prediction that ambient intelligence (AmI) would be a rapidly expanding field and would dramatically and inevitably change the nature of our lives, see the link here. What is ambient intelligence? Simply stated, it is a mishmash term depicting the use of AI to bring together data from electronic devices and do so with a focus on detecting and reacting to human presence. This catchphrase got its start in the 1990s when it was considered state-of-the-art to have mobile devices and the Internet of Things (IoT) was gaining prominence. It is a crucial aspect of ubiquitous computing. Ambient intelligence has made strong strides due to advances in AI and advances in ubiquitous technologies. Costs are getting lower and lower. Embedded devices are everywhere, often seemingly invisible to those within their scope. The AI enables adaptability and personalization.

In the second presentation of the AI4MH seminar, Dr. Ehsan Adeli notably pointed out that we can make use of exhibited behaviors to try and aid the detection and mitigation of mental health issues. But how can we capture exhibited behavior? One strident answer is to lean into ambient intelligence. In a research article on which he served as a co-author, entitled 'Ethical Issues In Using Ambient Intelligence In Healthcare Settings' by Nicole Martinez-Martin, Zelun Luo, Amit Kaushal, Ehsan Adeli, Albert Haque, Sara S Kelly, Sarah Wieten, Mildred K Cho, David Magnus, Li Fei-Fei, Kevin Schulman, and Arnold Milstein, Lancet Digital Health, December 2020, these salient points were made (excerpts): The idea is that by observing the exhibited behavior of a person, we can potentially link this to their mental health status. Furthermore, via the appropriate use of AI, the AI might be able to detect when someone is having mental health difficulties or perhaps incurring an actual mental health disorder. The AI could in turn notify clinicians or others, including the person themselves, as suitably determined. In a sense, this opens the door to undertaking continuous assessment of neuropsychiatric symptoms (NPS). Of course, as directly noted by Dr. Ehsan Adeli, the enabling of AmI for this purpose brings with it the importance of considering privacy and other AI ethics and patient ethics caveats that underlie when to best use these growing capabilities.

Being evidence-based is a hot topic, aptly so. The trend toward evidence-based medicine and healthcare has been ongoing and aims to improve both research and practice, doing so in a systematic way that is less subjective and more objective. The American Psychological Association (APA) defines evidence-based practice in psychology (EBPP) as 'the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.'

The third speaker in the AI4MH seminar was Dr. Shannon Wiltsey Stirman, a top researcher with a focus on how to facilitate the high-quality delivery of evidence-based psychosocial interventions (EBPs). Among her research work is a framework for identifying and classifying adaptations made to EBPs in routine care. On the matter of frameworks, Dr. Stirman's presentation included a discussion about a newly formulated framework associated with evaluating AI-based mental health apps. The innovative and much-needed framework had been devised with several of her fellow researchers. In a co-authored paper entitled 'Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework' by Elizabeth Stade, Johannes Eichstaedt, Jane Kim, and Shannon Wiltsey Stirman, Technology, Mind, and Behavior, March 2025, these salient points were made (excerpts):

Longtime readers know that I have been calling for an assessment framework like this for quite a while. For example, when OpenAI first allowed ChatGPT users to craft customized GPTs, there was a sudden surge in GPT-based applets that purportedly performed mental health therapy via the use of ChatGPT. In my review of those GPTs, I pointed out that many were not only vacuous, but they were at times dangerous in the sense that the advice being dispensed by these wantonly shaped ChatGPT applets was erroneous and misguided (see my extensive coverage at the link here and the link here). I have also repeatedly applauded the FTC for going after those who tout false claims about their AI for mental health apps (see my indication at the link here). Just about anyone can readily stand up a generative AI app that they claim is suitable for mental health therapy. They might have zero experience, zero training, and otherwise be completely lacking in any credentials associated with a mental health professional. Meanwhile, consumers are at a loss to know which mental health apps are prudent and useful and which ones are problematic and ought to be avoided. It is for this reason that I have sought a kind of Consumer Reports scoring that might be used to differentiate AI mental health apps (see my discussion at the link here). The new READI framework is a substantial step in that profoundly needed direction.

Moving the needle on the subjective vs. objective preponderance in psychology is going to take earnest and undeterred energy and attention. Newbie researchers especially are encouraged to pursue these novel efforts. Seasoned researchers might consider adjusting their usual methods to also incorporate AI, when suitable. AI can be a handy tool and a demonstrative aid.
I've delineated the many ways that AI has already inspired and assisted psychology, and likewise, how psychology has aided and inspired advances in AI, see the link here for that discussion. There is a great deal at stake in moving psychology and the behavioral sciences as far forward as we can. Besides bolstering mental health, which certainly is crucial and laudable, Charles Darwin made an even grander point in his 'On the Origin of Species by Means of Natural Selection' in 1859: You see, the stakes also include revealing the origins of humankind and our storied history. Boom, drop the mic. Some might say it is ironic that AI as a computing machine would potentially have a hand in the discovery of that origin, but it isn't that far-fetched since AI is in fact made by the hand of humanity. It's our self-devised tool in an expanding toolkit to understand the world. That toolkit gladly includes two very favored domains, namely the close and dear cousins of psychology and physics.


The Verge
4 days ago
- Business
- The Verge
Why do lawyers keep using ChatGPT?
Every few weeks, it seems like there's a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, 'bogus AI-generated research.' The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don't exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven't they stopped? The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren't necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don't understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a 'super search engine.' It took submitting a filing with fake citations to reveal that it's more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense. Andrew Perlman, the dean of Suffolk University Law School, argues many lawyers are using AI tools without incident, and the ones who get caught with fake citations are outliers. 'I think that what we're seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn't mean that these tools don't have enormous possible benefits and use cases for the delivery of legal services,' Perlman said. Legal databases and research systems like Westlaw are incorporating AI services. In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they've used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research 'case law, statutes, forms or sample language for orders.' The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said 'exploring the potential for implementing AI' at work is their highest priority. 'The role of a good lawyer is as a 'trusted advisor' not as a producer of documents,' one respondent said. But as plenty of recent examples have shown, the documents produced by AI aren't always accurate, and in some cases aren't real at all. In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included 'significant misrepresentations and misquotations of supposedly pertinent case law and history,' Judge Kathryn Kimball Mizelle, of Florida's middle district, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times. Mizelle ultimately let Burke's lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he 'assumes sole and exclusive responsibility for these errors.' 
Rasch said he used the 'deep research' feature on ChatGPT pro, which The Verge has previously tested with mixed results, as well as Westlaw's AI feature. Rasch isn't alone. Lawyers representing Anthropic recently admitted to using the company's Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an 'inaccurate title and inaccurate authors.' Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock's filing included 'two citation errors, popularly referred to as 'hallucinations,'' and incorrectly listed authors for another citation. These documents do, in fact, matter — at least in the eyes of judges. In a recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. 'I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn't exist,' Judge Michael Wilner wrote. Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. 'I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers' judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,' Perlman said. But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. 'Even before the emergence of generative AI, lawyers would file documents with citations that didn't really address the issue that they claimed to be addressing,' Perlman said. 'It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don't properly check them; they don't really see if the case has been overturned or overruled.' (That said, the cases do at least typically exist.) Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. 'I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,' Perlman said. Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He's also used ChatGPT to help write legislation. In 2024, he included AI text in part of a bill on deepfakes, having the LLM provide the 'baseline definition' of what deepfakes are and then 'I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,' Kolodin told The Guardian at the time. 
Kolodin said he 'may have' discussed his use of ChatGPT with the bill's main Democratic cosponsor but otherwise wanted it to be 'an Easter egg' in the bill. The bill passed into law. Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they're real. 'You don't just typically send out a junior associate's work product without checking the citations,' said Kolodin. 'It's not just machines that hallucinate; a junior associate could read the case wrong, it doesn't really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.' Kolodin said he uses both ChatGPT's pro 'deep research' tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, it has a higher hallucination rate than ChatGPT, which he says has 'gone down substantially over the past year.' AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys' use of LLMs and other AI tools. Lawyers who use AI tools 'have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature' of generative AI, the opinion reads. The guidance advises lawyers to 'acquire a general understanding of the benefits and risks of the GAI tools' they use — or, in other words, to not assume that an LLM is a 'super search engine.' Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states. Perlman is bullish on lawyers' use of AI. 'I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,' he said. 'I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don't.' Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. 'Even with recent advances,' Wilner wrote, 'no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.'


Forbes
7 days ago
- General
- Forbes
Why AI Attaining AGI Will Spur A New Universal Natural Language That Reshapes Human Thinking
Once AI reaches artificial general intelligence (AGI), some believe that AGI will devise a new natural language that could immensely advance human thinking. In today's column, I examine an intriguing speculation that once we advance AI to become AGI (artificial general intelligence), doing so will spur AGI to devise a new universal natural language for humanity to adopt, which then radically reshapes how we think and act. It is definitely an outside-the-box proposition and deserves a moment of keen reflection. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. Let's shift gears and mull over the role of natural languages in our daily existence. I'm sure you've heard or read that natural languages tend to shape our thinking processes. The idea is that whatever language you happen to know impacts what you are able to think about and how you process mental considerations. The classic example is the claim that Eskimos have a multitude of words to represent ice and snow. This in turn enables them to think about ice and snow in ways that others lacking those words might not conceivably do. This strident belief that languages shape our thinking is referred to as the linguistic relativity hypothesis, which is also informally known as Whorfianism, see my detailed discussion at the link here. There is a bit of controversy associated with whether the claim about the language of Eskimos is fully accurate on this matter, but the crux is that languages being able to shape our thinking seems intuitively sensible and logical. If you meet someone who doesn't know a particular word, you've probably had a difficult time conversing on the topic at hand. You might have to figure out other ways to express the matter. They could lack an entire semblance of a concept or aspect of life that hasn't yet dawned on them, partially since their language doesn't express the matter or they've not encountered an equivalence in their language. Imagine that you could develop from scratch a new natural language that humans might find useful to adopt. This somewhat happened back in 1887 when Leyzer Zamenhof created a new natural language called Esperanto.
He hoped to have everyone adopt a new language that would cause people to feel more connected with each other and readily able to communicate with each other. The United Nations many years later in 2017 noted his support for intercultural dialogue and around that time an estimated 2 million people were speaking Esperanto. That's obviously a drop in the bucket when you consider there are around 8 billion people on Earth, but it's kind of amazing to realize that some people still embrace his devised language. It is an uphill battle to get people to switch over to a brand-new language. First of all, if not many other people speak the language, why should you use your time and energy to learn it? Very few others would be able to interact with you in that language. Worse still, maybe the language will ultimately fizzle out and you'll be left with a dead language in your head that doesn't seem to do you much good. Another qualm would be that the language might not express things that you already know, versus based on whatever other languages you are fluent in. It would be exasperating to learn another language and feel jammed up that you cannot fully express yourself. You would tend to revert to the languages you already have squarely in hand. All in all, a brand-new language would have to possess some really compelling facilities to get people to avidly adopt the language. You are ready now for the big reveal on this mind-bending topic. Suppose we advance AI to become AGI. AGI then looks around and decides that the prevailing natural languages that humans have devised are not as robust and insightful as they could be. Humans have merely stepwise crafted new languages and have never completely been willing to give a shot at crafting a language out of the blue that would essentially be the pinnacle of all feasible languages. To clarify, this wouldn't be an exercise in simply coming up with a new language for the heck of it. There would be a grand purpose involved. The brand-new language would be intended to foster thinking in humans that is beyond our conventional thinking. Human-devised natural languages are holding us back. We don't realize this. Meanwhile, AGI does realize this. So, AGI opts to create a new natural language for us. The AGI is doing what it can to aid humankind and further our intellectual progress. Happy face. One potential reaction by humankind is that we don't want a brand-new language. We are perfectly fine with the languages we already have. A new language would be burdensome. The effort to learn the language and put it into everyday use would require a lot of human time and cost. We pat AGI on the head and say thanks, but no thanks. Some might also be highly suspicious of the new language. The worry goes like this. Maybe AGI wants to have ultimate mind control over humans. A sneaky and clever way to do this would be to saddle us with a language that the AGI somehow can manipulate to its advantage. If we all adopt the language, it could be a massive trap. The AGI has now got us right where it wants us. Sad face. You can bet your bottom dollar that a huge amount of scrutiny would go into inspecting the brand-new language. Governments might warn to avoid learning the language until the needed assessments have been completed. Others would undoubtedly rush to learn the language, despite any warnings by authorities, since the assumption is that knowing another language can't really hurt anyone. 
A notable concern would be raised about the possibility of babies learning this brand-new language, doing so before they've learned a tried-and-true human-devised natural language. In other words, suppose a baby in a household that speaks only English is first introduced to this new language before they are versed in English. Could this alter their thinking processes and if so, would that alteration have a lifelong impact, and be a good thing or be a disaster for them? George Orwell said it best: 'But if thought corrupts language, language can also corrupt thought.' Presumably, AGI would outrightly indicate why humanity is going to greatly benefit by adopting the new language. In an Eskimo language aspect analogous way, the AGI might explain that if we learn the new language, we will have a vocabulary that will expand our existing way of thinking. Doing so will put us on an even playing field with AGI. Perhaps breakthrough discoveries and inventions will be much more likely by humanity due to humans adopting the new language. Peace on Earth might be more probable because everyone speaks the same language and has the same collective understanding. It is like Esperanto on steroids and fully backed by the already impressive AGI. Consider the rollout aspects. At first, nearly everyone who learns this brand-new language would be doing so as a second language (or third, fourth, etc., depending on how many languages someone already knows). The world isn't going to suddenly change over to a new language in the blink of an eye. People would be speaking their other tongues and also using the newly crafted language. Eventually, assuming that the new language is truly that powerful and beneficial, the expectation would be that all other natural languages would fade out. Scholars would keep those languages around merely for historical purposes. Nearly 100% of the world would be using the new language and rarely ever resort to one of the then-considered retired languages. What would such a brand-new language look like? One viewpoint is that the language would have to resemble the natural languages that we already know. This would make it easier to adopt and a lot less challenging to learn. If the new language consisted of some strange-looking concoction, perhaps all numerical in its substance, it seems doubtful that widespread adoption could take place. People just wouldn't be able to grasp it. We also need to consider how the language works in both written form and when spoken aloud. Imagine for example that the new language was easy to write down but extremely hard to use via the spoken word. That's a problem. Getting the language to be widely adopted would have to appease both the writing aspects and the speaking aspects. AGI is faced with quite a tough puzzle. The new language must generally conform to existing or known natural languages if it is going to be widely accepted. Meanwhile, the new language is supposed to have a compelling basis for adoption, such as expanding our minds and reshaping our thinking processes. It is a tall order, that's for sure. A contrarian perspective is that it would be fine if the brand-new language was hard to adopt. Even if the language doesn't resemble our existing languages, no worries, since humans are adaptable and would adapt to this new language. The key is whether overcoming the barriers to learning the language was worth the benefits that would accrue. Contemplate the impact that having a universal natural language might have on human customs and culture. 
Some dread the idea that this would almost inevitably neutralize our differences and make us entirely homogenous. Everyone speaking and using the same language would potentially lead to a unitary way of thinking and acting. It is almost as though we have become staid robots. Not everyone believes in that doom-and-gloom outcome. Perhaps we could still retain culture and customs, even if using the same across-the-board language. There isn't an axiomatic reason that we must be the same merely due to using the same language. One of the most exciting aspects of the brand-new language is that it might portend a kind of tipping point for human cognition. This can be likened to a linguistic singularity, whereby humankind demonstrably leaps forward in our intellectual prowess. This is done without having to do implants into our brains. Nor do we need to wait thousands of years for evolution to gradually adjust our noggins. For now, I'll give the last word on this hefty topic to Nathaniel Hawthorne: 'Words -- so innocent and powerless as they are, as standing in a dictionary, how potent for good and evil they become in the hands of one who knows how to combine them.' Will we accept an AGI-devised language? Then again, maybe we will implore AGI to devise a new language and open Pandora's box of our own volition.


Forbes
17-05-2025
- Forbes
The Secrecy Debate Whether AI Makers Need To Tell The World If AGI Is Actually Achieved
In today's column, I explore the ongoing debate about whether AI makers can or should hide their attainment of artificial general intelligence (AGI) from the world if or when they arrive at the revered achievement. The controversial consideration is that, on the one hand, they might wish to quietly leverage AGI to their benefit and not reveal the source of their budding power, while the rest of us remain unaware of it and unable to prosper correspondingly. There's also the qualm that once the world realizes AGI has been achieved, perhaps mass panic will arise, or evildoers will seek to turn AGI toward global extinction. It's all quite a complicated matter. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. Imagine that an AI maker manages to attain AGI. That's pretty exciting news. It would be earth-shattering news. Some believe that AGI will enable us to cure cancer and solve many if not all of humankind's pressing problems. Happy face. An AI maker would presumably shout from the rooftops that they miraculously have achieved AGI. Nobel Prizes certainly would be awarded. Vast riches would flow to the AI maker, and their employees would be acclaimed and undoubtedly become incredibly wealthy. This would be perhaps the greatest accomplishment of humanity and would deserve suitable recognition. But suppose an AI maker decided to keep their AGI under lock and key, secretly profiting via their invention. Not fair, some exhort. AGI is so consequential that an AI maker would be ethically or morally obligated to inform the world. They can't just hog it for themselves. Furthermore, they hold in their hands something that could be incredibly dangerous. It is possible that the AGI might find new poisons or ways to destroy humanity. An AI maker should not solely possess that kind of power. Wait a second, the retort goes, the old-time adage is that to the victor go the spoils. If an AI maker arrives at AGI, it is theirs to decide what to do with it. They don't need to tell anyone what they've accomplished. They can use it for their purposes as they see fit. This includes not using AGI at all, maybe opting to deactivate the AGI under the belief that the world isn't ready for what AGI portends. 
Some assert that we ought to have laws that specifically would compel AI makers to reveal when AGI is reached. A legal requirement would force an AGI out into the open. The AI maker ought to face harsh criminal charges if they keep AGI a secret. Essentially, AGI is construed as a public good and the public has a right to know that AGI is floating around. Not only does this apply to achieving AGI, but the notion would also be that AI makers must provide status updates as they get near to attaining AGI. Rather than waiting until AGI has arisen, AI makers would have to announce that they are getting close. The requirement would be that they continue to keep the world informed as AGI inches toward reality. The stepwise notification gives us all a chance to be reflective and get ready for the grand moment that AGI exists. In contrast, a sudden announcement that AGI is here would seemingly prompt mass panic. Confusion would reign. People might riot or do other panicky acts. The better approach is to ease everyone into the realization that AGI is near. AI makers might not be keen on the stepwise proclamations. First, they might not even know whether they are getting nearer to AGI. There is a possibility that AGI will suddenly materialize, such as a rapid and unanticipated so-called intelligence explosion (see my analysis of this possibility, at the link here). Second, there are bound to be people gravely worried about AGI and they might try to stop or at least delay the AGI-making efforts of the AI maker. This could include legal pursuits such as civil lawsuits intended to prevent AGI from being reached. Imagine the immense headaches and added costs the AI maker would incur. The odds are that even if the prevention efforts weren't successful, the actions would distract the AI maker and deplete their attention to achieving AGI. Third, other AI makers, the competition as it were, might opt to hire away the AI developers and seek to 'steal' the AGI from the AI maker. This would be a sensible strategy. A competing AI maker would nearly have to do something radical to keep up with the Joneses. The stock value of all other AI makers would otherwise plummet to the doldrums since they aren't on the same footing and near AGI. A nation that has an AI maker in its midst that is nearing AGI would almost certainly decide not to stand idly by while the AI maker arrives at AGI. The hosting country would naturally want a sizable say in how the AGI is going to be used. Thus, the moment an AI maker teases that they are nearing AGI; governmental authorities would be fiercely tempted to declare a takeover of the AI maker. That's an important point. Having one company owning AGI seems somewhat disingenuous. Can the company truly protect the AGI from evildoers? Will the company itself go rogue and opt to use AGI for evil deeds? A nation would feel compelled to exert its authority over the firm. A related aspect is that AGI would undoubtedly change the balance of national geo-political power, see my discussion on this at the link here and the link here. Nations will inevitably wield AGI to showcase the strength of their nation. They would also potentially become drunk with glee and let the matter go to their head, possibly threatening other nations and relishing being on top of worldwide power dominance. All of this creates a likely domino effect. Think of the cascading impacts. A nation takes over an AI maker that either has AGI or is right at the cusp of AGI. 
Other nations plainly see that this power move is taking place. Some of those nations opt to undertake a first-strike approach. Rather than waiting until the AGI-wielding nation gets its ducks in order, an attempt is made to prevent the AGI from being attained. Chaos ensues as nations get embroiled in an AGI-focused battle over who has AGI and who does not. One viewpoint is that AGI would have to be considered a worldwide resource for all to share. No specific company ought to own AGI or control AGI. Neither should one particular nation own AGI or control AGI. The ownership and control of AGI must be a universal consideration. How would that work? Nobody can say for sure. Perhaps the United Nations would be the place to put AGI and have the UN then decide how AGI would be utilized. Not everyone is keen on that idea. Some assert that a new entity would need to be formed, somehow established on behalf of all humankind. An AI maker that attained AGI might not be pleased with the forced taking of their AGI. Why should they have to give up the immense profits that would come from possessing AGI? Well, the reply comes, maybe some form of payment could be arranged to compensate the AI maker for what they had grandly accomplished. Any kind of announcement that AGI is nearing would stridently spur evildoers into immediate action. Envision that AGI could be turned toward bad deeds and allow criminals to have in their hands the best criminal mastermind ever conceived. These malefactors would do whatever they could to get a copy of the AGI so they would have their own version of it. This would allow them to circumvent security provisions or human-value AI-alignment intricacies that might have been built into the AGI by the AI maker, see my analysis at the link here. If somehow the AGI was so well protected that it couldn't be stolen or copied, another angle would be to corrupt the AI developers into coming on board with the criminal side of things. Entice them or threaten them into compromising whatever prior ethical leanings they might have had. A sneaky path would be to acquire the AI maker that is on the verge of AGI. Perhaps establish an innocent-looking shell company that comes along and buys up the AI maker. Voila, in one fell swoop, AGI is now in the hands of others. Those various arguments about the pros and cons of keeping AGI secret are difficult to tabulate in terms of which way is the best route to go. Some might claim that the dangers clearly indicate that AGI must be kept secret. Others see things the exact opposite, namely that the bottom line adds up to showcasing that AGI must not be kept secret. Here's an interesting twist. Even if an AI maker wanted to keep their AGI a secret, could they effectively do so? No way, some would emphasize. There is absolutely no way that an AI maker could be sitting on AGI and that the word would not get out that they are doing so. AI developers would invariably brag about having attained AGI, maybe initially just to close friends and family. Then word would spread. It would spread like wildfire. Another leakage would be that if the AGI is actively being used for some beneficial purpose, an AI maker would have a challenging time explaining how they suddenly became so brilliant. Everyone would certainly be suspicious and assume that AGI had been achieved. The gig would be up quite quickly. If the government took over the AI maker, that would be another telltale clue that AGI might be in the works. 
Why else would the government out of the blue decide to possess the firm? Excuses might be laid out to mask the real reason. Nonetheless, enterprising inquisitors would figure out the truth. Do you think an AI maker could end up with AGI and realistically keep it a secret? Seems like a tall order. Sophocles, the legendary Greek playwright, possibly said it best: 'Do nothing secretly; for Time sees and hears all things and discloses all.'


Forbes
09-05-2025
- Business
- Forbes
Inside Agentic AI: A Look At How AI Agents Will Transform Your Business
To find a good fit for agentic AI, organizations should look for tasks that have a large number of people doing the same thing over and over.