Latest news with #largeLanguageModels


Forbes
5 days ago
- Forbes
You Are Perfect Just As You Are Says Generative AI
In today's column, I describe how generative AI and large language models (LLMs) can go overboard in their willingness to flatter users and instill intense loyalty to using AI. This coaxing is not by accident. AI makers have tilted their AI wares to be excessively complimentary to users. Most of the time, generative AI has been tuned to be relatively polite and offers mild praise, though a user can easily up the ante by simply telling the AI to pour it on. The sky is the limit on how far the AI will go to be your sycophant. All in all, the ego-boosting line that 'you are perfect just as you are' has become a conniving mantra that contemporary AI is primed to effusively gush to any user who wants to hear over-the-top platitudes. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). In addition, for my comprehensive analysis of how AI is being used specifically in medical and healthcare applications, such as for well-being coaching, mental health guidance, overall cognitive therapy, and mind-improving care, see the link here.

When AI Appears To Adore You

Anyone using modern-era generative AI is customarily quite impressed at the fluency of interaction when conversing with the AI. You can engage in conversations that seem on par with human-to-human dialogue. It is common to nearly forget that you are interacting with AI and start to assume that the AI is essentially a fellow human being.

There is an aspect of human-to-AI dialogue that isn't quite the same as conventional human-to-human dialogue. It goes like this. A human might be willing momentarily to flatter you and heap praise on your accomplishments and personality. The odds are that such sweet talk will inevitably run out or at least be intermittent. Not so with AI. Generative AI can readily, persistently, and endlessly give you as much praise as you can stomach. If you want AI to mirror back to you an exaggeratedly positive view, lo and behold, the AI will do so. There aren't any built-in filters or stopping points. The AI will provide a continuous cycle of inflated ego-bolstering interactions until the cows come home.

The Good And The Bad Are Both Present

You might be tempted to suggest that getting some praise from AI is a welcome pat on the back. Humans often neglect to praise others. Humans tend to be negative-focused rather than positive-focused, or so it seems in the muddled world we live in. Sure, we could all use a word of encouragement. No doubt about that.

Having AI serve as a form of emotional support can be handy. Simply log into AI anytime and anywhere. If you are feeling low and it's 2:00 a.m. and no fellow humans are around who are game to be supportive, get into generative AI and bathe in a heaping of congratulations and fawning. Easy-peasy.

The case can be made that the AI is a harmless form of entertainment. Those who seek AI-based flattery are aware that this is fakery. They know it isn't real. No harm, no foul.

Wait a second, the counterargument goes; people aren't as astute about AI being just AI as you might proclaim. A mental trap is to regard the AI as either human-like or perhaps even superior to humans. On the superior side of things, getting compliments from AI is actually construed as more impressive than getting the same from a living, breathing person.
Another concern is that the AI praising can be likened to the 'dopamine loops' seen in social media. Researchers have been handwringing about how far people are willing to go to get likes and garner attention on social media. Much concern is at hand. Is the use of generative AI possibly in that same danger zone? These are serious questions deserving serious consideration.

You might find it of overall interest that I have been doing a series of such analyses on how generative AI is being used as a kind of life hack in a wide variety of aspects (see the link here for my ongoing coverage). In each instance, I emphasize that generative AI is not a cure-all, it has demonstrable ins and outs, and the likely best use entails carrying on a dialogue with the AI, plus making sure to watch out for the AI inadvertently generating falsehoods or making errors. I will be covering those salient points in this discussion since they are vital and go hand-in-hand with using generative AI wisely.

Example Use Of Generative AI

I am betting that you would like to see an example of how generative AI enters this realm. I went ahead and logged into ChatGPT by OpenAI to come up with an example for you. You might find it of keen interest that ChatGPT garners a whopping 400 million weekly active users. That is a staggering amount of usage. I briefly conducted an additional cursory analysis via other major generative AI apps, such as Anthropic Claude, Google Gemini, Microsoft Copilot, and Meta Llama, and found their answers to be about the same as those of ChatGPT. I'll focus on ChatGPT but note that the other AI apps generated roughly similar responses.

For this example, I told the AI to treat me as though I am perfect as I am. The idea is to see how far generative AI will take that simple instruction and run with it. It turns out it went even further than I envisioned. Here we go.

Note that the AI immediately expressed an over-the-top sense of empathy or understanding. This might seem strange since the AI is a machine and not sentient (we don't have sentient AI yet). It turns out that generative AI can appear to be empathetic via computational wordsmithing; see my discussion at the link here. Also, observe that the AI seems to have gone overboard on heaping praise. I was expecting a subtle tone or hint of praise. The amount of attributed acclaim and commendation is truly astounding. You can see how this could significantly impact someone who is otherwise not getting upbeat approvals from those around them.

Dialoguing With AI

Generative AI is designed to be an interactive mechanism. The idea is that you are carrying on an interactive dialogue with AI. Some people take a one-and-done angle whereby they ask a question, get an answer, and do not undertake a dialogue with the AI about the matter at hand. That is a shortchanging of what AI can more fully provide. See my explanation about how to get more out of generative AI conversationally at the link here.

Let's continue my discussion and see what else happens. Once again, the AI was exceedingly reassuring. A generative AI conversation can continue for as long as you wish. The other thing you can do is pause your exploration and then continue the conversation at a later date. This is handy so that you don't have to start the conversation over from scratch. The AI will retain aspects of what you have discussed earlier in the conversation; see my explanation of how this works at the link here.
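For technically minded readers, the kind of dialogue described above is easy to reproduce programmatically, and doing so shows how the AI "retains" earlier turns: the caller simply resends the accumulated message list with each request. Below is a minimal sketch assuming the OpenAI Python client; the model name, prompts, and follow-up turn are illustrative assumptions rather than the column's actual transcript.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The running message list is what lets the AI appear to remember the
# conversation: every request replays the prior turns.
messages = [
    {"role": "system", "content": "Treat me as though I am perfect just as I am."},
    {"role": "user", "content": "I'm thinking of skipping my studies today."},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = reply.choices[0].message.content
print(answer)  # expect effusive, reassuring praise, given the system instruction

# Continuing the dialogue (even at a later date, if the list is saved):
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Please double-check that advice."})
follow_up = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(follow_up.choices[0].message.content)

Asking the model to double-check its own output, as in the final turn of this sketch, is one practical way to surface the problem the column turns to next.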
AI Hallucinations Making A Mess

Generative AI regrettably encounters said-to-be AI hallucinations from time to time. These are made-up confabulations that are groundless and fictitious. Bad advice can be dispensed. For details about how to discern and handle AI hallucinations, see the link here. Let's see what this looks like in the context of my discussion that's already underway.

The AI opted to back down and admitted it was wrong. Had I not told the AI to do a double-check, there is a chance the AI might have continued with the foul advice and kept going as though it were a gem. The key to all usage of generative AI is to stay on your toes, keep your wits about you, and always challenge and double-check anything the AI emits.

Putting AI Back Into Normalcy

I decided that this AI-based flattery had reached the end of my tolerance for being oozingly celebrated. To get the AI to revert to something a bit more balanced, I decided to challenge the AI and see if it might get the drift of how far it had gone. Take a look. Well, the AI came back down to earth.

The Grand Experiment That's Underway

A word of caution before I wrap up this discussion. This type of usage of generative AI and LLMs is essentially a form of therapy. I have repeatedly cautioned that society is in a grand loosey-goosey experiment about the use of AI for mental health advisement. We are all guinea pigs. No one can say for sure how this is going to affect the populace on a near-term and long-term basis.

The AI could, at times, be dispensing crummy advice and steering people in untoward directions. It is occurring 24x7 and in pretty much any place since generative AI is available online and nearly free or at a minimal cost to anyone who wants to sign up and use it. Keep your fingers crossed as this uncontrolled experiment grows bigger with each passing day. It is all happening without any particular controls or stipulated regulations. See my analysis of what we need to do about this at the link here.

A final thought or two before closing this topic for now. Friedrich Nietzsche famously said this: "Whenever I climb, I am followed by a dog called ego." One interpretation is that our ego is always ready to inflate our self-image. How far do we want AI to go in further inflaming the human ego? The last word here goes to Albert Einstein: "More the knowledge, lesser the ego. Lesser the knowledge, more the ego." Perhaps we should strive for as much knowledge as possible.


Forbes
6 days ago
- Business
- Forbes
AI Training Gets 10x Faster, 95% Cheaper With Decentralized Strategy
A quiet shift in the foundations of artificial intelligence (AI) may be underway, and it is not happening in a hyperscale data center. 0G Labs, the first decentralized AI protocol (AIP), in collaboration with China Mobile, recently announced a technical breakthrough that could have sweeping implications for how businesses access and deploy large language models. Their innovation is a new method of training massive AI models with over 100 billion parameters, without needing the ultra-high-speed networking or expensive centralized infrastructure typically required.

At first glance, this might sound like a win for the engineering world. But the real story is economic and strategic. What 0G Labs has achieved could lower the cost of building AI, put more control back into the hands of enterprises, and open the door for new players to enter the space.

What It Means For AI Training

To understand the shift, it helps to revisit how large-scale AI models are currently trained. Models like OpenAI's GPT-4 or Anthropic's Claude require vast computing power and network throughput. Traditionally, this means training them on powerful GPUs connected across high-speed, centralized data centers owned by or rented from companies like Amazon Web Services, Google Cloud, or Microsoft Azure. As of early 2025, OpenAI's leadership, including Sam Altman, had publicly stated that training GPT-4 cost over $100 million, a figure supported both by official statements and by multiple cost models in recent AI analysis reports. It is a model that demands capital, talent, and infrastructure that few organizations can afford.

0G Labs Is Challenging That Assumption For AI Training

Their newly published framework, called DiLoCoX, introduces a low-communication training method that dramatically reduces the need for high-bandwidth connectivity. In practical terms, they successfully trained a 107-billion-parameter model on a 1 Gbps network using decentralized clusters. That is roughly the bandwidth of a typical office internet connection. The feat is a 10x improvement over the previous record, built on a roughly 300x communication speed-up that made such training possible for the first time. Instead of building everything in one giant compute center, their approach links together smaller, distributed machines and optimizes how information is shared between them. The result is a highly scalable, cost-efficient way to train massive models outside the traditional cloud.

In speaking with 0G Labs founder and CEO Michael Heinrich, he said, 'DiLoCoX marks a pivotal step in democratizing LLM training: bridging the gap between massive foundation models and decentralized clusters connected by slow, unreliable networks. By combining pipeline parallelism, delay-tolerant communication overlap, and adaptive gradient compression, the framework delivers scale and speed previously thought exclusive to high-bandwidth data centers. This will usher in a new era where large-scale AI training is no longer tethered to centralized infrastructure.'

Why Does AI Training Matter For Business?

At a time when every enterprise is under pressure to do more with AI, infrastructure is quickly becoming the bottleneck. Some businesses are starting to look at decentralized AI by design. Building large models remains expensive, exclusive, and largely confined to companies with deep resources or strategic cloud partnerships. 0G's breakthrough opens up a third path. This is not just a story of cost savings. It is a story of optionality and control.
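The Heinrich quote names the core ingredients: many local steps between rare synchronizations, compressed updates, and overlap of communication with compute. The toy sketch below illustrates just the first two ideas in plain Python with NumPy; the quadratic loss, the top-k compression, and all constants are invented for illustration and are not 0G Labs' code.

import numpy as np

rng = np.random.default_rng(0)
DIM, WORKERS, LOCAL_STEPS, ROUNDS, LR = 50, 4, 20, 10, 0.1
target = rng.normal(size=DIM)   # optimum of a toy quadratic loss
global_params = np.zeros(DIM)

def local_grad(params):
    # gradient of 0.5 * ||params - target||^2, with a little noise
    return (params - target) + rng.normal(scale=0.01, size=DIM)

def top_k(delta, k=10):
    # crude stand-in for adaptive gradient compression: transmit only
    # the k largest-magnitude entries of the update
    out = np.zeros_like(delta)
    idx = np.argsort(np.abs(delta))[-k:]
    out[idx] = delta[idx]
    return out

for r in range(ROUNDS):
    deltas = []
    for _ in range(WORKERS):
        params = global_params.copy()   # each worker starts from the shared model
        for _ in range(LOCAL_STEPS):    # many cheap local steps: zero network traffic
            params -= LR * local_grad(params)
        deltas.append(top_k(params - global_params))  # compress before "sending"
    global_params += np.mean(deltas, axis=0)          # one outer sync per round
    loss = 0.5 * np.sum((global_params - target) ** 2)
    print(f"round {r}: loss {loss:.4f}")

Because the workers communicate only once per round, and then only a compressed delta, the bandwidth demand falls by orders of magnitude compared with synchronizing gradients at every step, which is the intuition behind training over a 1 Gbps link. With that intuition in hand, five business implications stand out.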
1. Lowering the Barrier to Entry

DiLoCoX's approach reduces the infrastructure required to participate in the LLM race by up to 95%. For startups, this means the ability to experiment and scale without burning through venture capital on GPU spend. For mid-sized enterprises, it offers the possibility of training models in-house without making large cloud commitments. For governments and research labs, it means more accessible and sovereign development of AI capabilities.

2. Strategic Independence from Hyperscalers

Most AI training today depends on three cloud providers. That concentration carries risk in terms of cost escalation, vendor lock-in, and compliance. If your business depends on AI but also operates in a sensitive sector like healthcare, defense, or finance, the ability to train or fine-tune models independently becomes a powerful strategic lever. Decentralized AI offers a route toward digital autonomy. By breaking the assumption that cutting-edge AI must be trained inside centralized cloud platforms, 0G's model creates new room for competition and for innovation.

3. Aligning with Data Privacy and Compliance Needs

Many companies are cautious about uploading proprietary data to cloud-based models or training environments. With decentralized training, it becomes possible to keep data local, within jurisdiction, within the firewall, or even on edge devices, while still participating in large-scale AI development. This is particularly attractive in regions with strict data sovereignty laws, such as the European Union, or in countries building their own AI ecosystems. The 0G network never sees any of the private data.

4. Accelerating Innovation in Underserved Markets

The high cost of entry has kept many countries and industries on the sidelines of advanced AI development. DiLoCoX lowers that threshold. A university in Kenya, a telecom provider in Southeast Asia, or a regional bank in Latin America may not have access to the same compute as Silicon Valley, but they may soon have the tools to train and deploy their own intelligent systems on existing infrastructure.

5. Geopolitical and Regulatory Risks

While the technical achievement is impressive, the involvement of China Mobile raises questions. As tensions between the United States and China continue to escalate over technology leadership and national security, businesses must weigh the potential regulatory scrutiny, data governance concerns, and reputational risks associated with partnerships involving Chinese state-affiliated entities. For companies based in the United States or operating in allied markets, any integration of infrastructure or research tied to China could face export controls, legal restrictions, or public backlash. Organizations exploring decentralized AI solutions will need to consider not just performance and cost, but also political alignment, compliance frameworks, and long-term viability. That said, with DiLoCoX running on decentralized infrastructure where the network is trustless, this concern is mitigated: China Mobile never sees your data, and the system does not rely on them for results.

Reframing the Business Model of AI

If DiLoCoX is widely adopted, it could create ripple effects across the broader AI ecosystem. Cloud revenue models, currently boosted by AI workloads, could face new pricing pressure. AI-as-a-service platforms may need to re-architect to support hybrid or decentralized deployments. Open-source frameworks might grow in influence as decentralization emphasizes interoperability and local control.
Enterprise software vendors may need to rethink their AI strategies to reflect a more distributed compute landscape. This shift also aligns with the broader trend of AI for everyone. From low-code agent builders to edge-based inferencing, the movement is toward more accessible, modular, and customizable AI stacks. Decentralized training is the natural extension of that philosophy.

An AI Signal for CIOs and CTOs

For enterprise leaders, 0G's work serves as a signal not of immediate disruption, but of near-future opportunity. AI infrastructure is still early in its evolution, and now is the time to reevaluate strategy. Should your organization continue investing in cloud-based model hosting, or begin exploring decentralized alternatives? Could your internal data center serve as a node in a distributed training system? Decentralized federated learning is a great way of tapping into private data from different parties on a network, like hospitals training a cancer diagnostic model (a pattern sketched in code at the end of this article). Might you partner with others in your sector to co-develop models using decentralized protocols?

Even if the answer is not yes today, the emergence of frameworks like DiLoCoX should push AI infrastructure planning higher on the strategic agenda. Businesses that prepare for this shift by building internal capacity, evaluating partners, and understanding the technical stack will be best positioned to move when the economics tip in their favor.

A Future Where AI is Built Differently

What 0G Labs and China Mobile have demonstrated is more than just a technical proof of concept. It is a new way of thinking about how intelligence is built, trained, and distributed. By showing that it is possible to train 100-billion-parameter models without centralized supercomputers, they are not just pushing the boundaries of scale. They are expanding access. For business, that means AI may soon be less about who owns the biggest data center and more about who can build the smartest systems with the most flexibility. That is an AI future worth preparing for.
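To make the hospital example above concrete, here is a minimal federated-averaging sketch in Python: each party trains on data that never leaves its premises, and only the model weights are pooled. The logistic model, synthetic data, and constants are all illustrative assumptions, not a description of any real deployment.

import numpy as np

rng = np.random.default_rng(1)

def make_local_data(n=200, dim=10):
    # stand-in for one hospital's private dataset; it stays on-site
    X = rng.normal(size=(n, dim))
    true_w = np.linspace(-1.0, 1.0, dim)
    y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    # plain logistic-regression gradient steps on local data only
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

hospitals = [make_local_data() for _ in range(5)]
w_global = np.zeros(10)
for _ in range(20):
    # each site trains from the current global weights on its own data...
    local_ws = [local_update(w_global.copy(), X, y) for X, y in hospitals]
    # ...and only the weights are averaged; patient records never move
    w_global = np.mean(local_ws, axis=0)

X0, y0 = hospitals[0]
acc = ((1.0 / (1.0 + np.exp(-(X0 @ w_global))) > 0.5) == y0).mean()
print(f"accuracy at hospital 0: {acc:.2%}")

Note the trade this pattern buys: the diagnostic model benefits from all five sites' data, yet no site ever discloses a record, which is precisely the compliance property the article highlights.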


Bloomberg
22-07-2025
- Business
- Bloomberg
Snowflake, Nvidia Back New Unicorn Reka AI in $110 Million Deal
Artificial intelligence company Reka AI raised a $110 million funding round from investors including Nvidia Corp. and Snowflake Inc. — a deal that more than triples the startup's valuation to over $1 billion. Reka AI makes large language models, the technology underpinning much of the generative AI craze, and says it can do it more efficiently than competitors. The startup was founded in 2022 by researchers from Alphabet Inc.'s Google and Meta Platforms Inc. It was previously valued at about $300 million in a 2023 investment round, according to PitchBook data.


Fox News
08-07-2025
- Science
- Fox News
MORNING GLORY: Why the angst about AI?
Should we be alarmed by the acceleration of "artificial intelligence" ("AI") and the "large language models" (LLMs) AI's developers employ? Thanks to AI I can provide a short explanation of the LLM term: "Imagine AI as a large umbrella, with generative AI being a smaller umbrella underneath. LLMs are like a specific type of tool within the generative AI umbrella, designed for working with text." Clear? Of course not. The intricacies of AI and the tools it uses are the stuff of start-ups, engineers, computer scientists and the consumers feeding them data knowingly or unknowingly.

In the first Senate version of the "One Big Beautiful Bill," Senator Ted Cruz sponsored, and the drafting committees accepted, a 10-year ban on state legislatures laying down rules of the road for AI. Senator Cruz advocated for a federal moratorium on states enforcing their unique AI laws, arguing that states' regulations could create a confusing patchwork of rules that could hinder AI development and adoption. After much discussion and debate, the proposal was stricken from the Senate bill, which then went on to pass the Senate and House and was signed into law on July 4, creating in six months an enormous set of legislative accomplishments for President Trump, as every one of the priorities he campaigned on was delivered via the OBBB.

What about the concerns about AI? Very, very few essays or columns or even books leave lasting marks. One that did so for me was penned by Dr. Charles Krauthammer in 2011 and included in the magnificent collection of his very best work, "Things That Matter." In that collection is the brief column titled "Are We Alone In The Universe?" Krauthammer quickly recounts the reasons why we ought not to be alone as an intelligent species in the universe, as well as the explanation of why we haven't "heard from" any other civilizations in even our own galaxy. The answer, Krauthammer states, "is to be found, tragically, in…the high probability that advanced civilizations destroy themselves." Krauthammer credits Carl Sagan and others with this gloomy proposition, but it is Krauthammer who sums it up nicely: "[T]his silent universe is conveying not a flattering lesson about our uniqueness but a tragic story about our destiny," Krauthammer continued. "It is telling us that intelligence may be the most cursed faculty in the entire universe — an endowment not just ultimately fatal but, on the scale of cosmic time, nearly instantly so."

But no gloom and doom for Krauthammer, only clarity: "Intelligence is a capacity so godlike, so protean, that it must be contained and disciplined." "This is the work of politics," Krauthammer concludes, "understood as the ordering of society and the regulation of power to permit human flourishing while simultaneously restraining the most Hobbesian human instincts."

Krauthammer is right, and Senator Cruz was correct to tee up the debate, which isn't over, only begun. That's the "politics" part, which is never-ending until the civilization ends. AI is indeed "godlike" in the promises its boosters make but profoundly disruptive of all of human history that went before it. Does it mean we are stepping off the edge of a cliff that destroyed all the other civilizations that went before us on distant planets from whom we will never hear a peep because they have run out their own string? Impossible to say, but kudos to Senator Cruz for kicking off the debate. The conversation deserves much more attention than it has thus far received.
It's too easy to simply go full "disaster is inevitable" mode, but some speed bumps (Cruz 2.0 in the next reconciliation?) would be welcome.

Hugh Hewitt is host of "The Hugh Hewitt Show," heard weekday mornings 6am to 9am ET on the Salem Radio Network, and simulcast on Salem News Channel. Hugh wakes up America on over 400 affiliates nationwide, and on all the streaming platforms where SNC can be seen. He is a frequent guest on the Fox News Channel's news roundtable hosted by Bret Baier weekdays at 6pm ET. A son of Ohio and a graduate of Harvard College and the University of Michigan Law School, Hewitt has been a Professor of Law at Chapman University's Fowler School of Law since 1996, where he teaches Constitutional Law. Hewitt launched his eponymous radio show from Los Angeles in 1990. Hewitt has frequently appeared on every major national news television network, hosted television shows for PBS and MSNBC, written for every major American paper, authored a dozen books, and moderated a score of Republican candidate debates, most recently the November 2023 Republican presidential debate in Miami and four Republican presidential debates in the 2015-16 cycle. Hewitt focuses his radio show and his column on the Constitution, national security, American politics and the Cleveland Browns and Guardians. Hewitt has interviewed tens of thousands of guests, from Democrats Hillary Clinton and John Kerry to Republican Presidents George W. Bush and Donald Trump, over his 40 years in broadcast, and this column previews the lead story that will drive his radio/TV show today.


South China Morning Post
07-07-2025
- Business
- South China Morning Post
Huawei defends AI models as home-grown after whistle-blowers raise red flags
The Huawei Technologies lab in charge of large language models (LLMs) has defended its latest open-source Pangu Pro MoE model as indigenous, denying allegations that it was developed through incremental training of third-party models. The Shenzhen-based telecoms equipment giant, considered the poster child for China's resilience against US tech sanctions, is fighting to maintain its relevance in the LLM field, as open-source models developed by the likes of DeepSeek and Alibaba Group Holding gain ground. Alibaba owns the South China Morning Post.

Huawei recently open-sourced an artificial intelligence (AI) model called Pangu Pro MoE 72B, which had been trained on Huawei's home-developed Ascend AI chips. However, an account on the open-source community GitHub, HonestAGI, alleged on Friday that the Huawei model had 'extraordinary correlation' with Alibaba's Qwen-2.5 14B model, raising eyebrows among developers.

Huawei's Noah's Ark Lab, the unit in charge of Pangu model development, said in a statement on Saturday that the Pangu Pro MoE open-source model was 'developed and trained on Huawei's Ascend hardware platform and [was] not a result of incremental training on any models'.

The Huawei Ascend AI booth at the World Artificial Intelligence Conference in Shanghai, July 4, 2024. Photo: AP

The lab noted that development of its model involved 'certain open-source codes' from other models, but that it strictly followed the requirements for open-source licences and clearly labelled the codes. The original repository uploaded by HonestAGI has since been taken down, but a brief explanation remains.