Latest news with #AIcommunity


TechCrunch
4 hours ago
- Business
- TechCrunch
Democratizing AI: Google Cloud's vision for accessible agent development
Google Cloud's Iliana Quinonez takes a deep dive into why the company views democratizing AI agent development as critical to organizational advancement. The presentation, held at TechCrunch Sessions: AI, is aimed at newcomers and seasoned practitioners alike, exploring how AI agents can enhance collaborative workflows and how to build sophisticated intelligent systems.


Forbes
2 days ago
- General
- Forbes
Future Forecasting The Yearly Path That Will Advance AI To Reach AGI By 2040
Future forecasting the yearly path of advancing today's AI to AGI by 2040. In today's column, I am continuing my special series on the likely pathways that will get us from conventional AI to the avidly sought attainment of AGI (artificial general intelligence). AGI would be a type of AI that is fully on par with human intellect in all respects. I've previously outlined seven major paths that seem to be the most probable routes of advancing AI to reach AGI (see the link here). Here, I undertake an analytically speculative deep dive into one of those paths, namely I explore the year-by-year aspects of the considered most-expected route, the linear path. Other upcoming postings will cover each of the other remaining paths. The linear path consists of AI being advanced incrementally, one step at a time until we arrive at AGI. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. Right now, efforts to forecast when AGI is going to be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized brazen predictions. Their headiness makes outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom of the crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040. Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused. The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one.
For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date. Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here. As mentioned, in a previous posting I identified seven major pathways that AI is going to advance to become AGI (see the link here). The most often presumed path is the incremental progression trail. The AI industry tends to refer to this as the linear path. It is essentially slow and steady. Each of the other remaining major routes involves various twists and turns. Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI: You can apply those seven possible pathways to whatever AGI timeline that you want to come up with. Let's undertake a handy divide-and-conquer approach to identify what must presumably happen on a year-by-year basis to get from current AI to AGI. Here's how that goes. We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially 15 years of elapsed time. In the particular case of the linear path, the key assumption is that AI is advancing in a stepwise fashion each year. There aren't any sudden breakthroughs or miracles that perchance arise. It is steady work and requires earnestly keeping our nose to the grind and getting the job done in those fifteen years ahead. The idea is to map out the next fifteen years and speculate what will happen with AI in each respective year. This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting. Is this kind of a forecast of the future ironclad? Nope. If anyone could precisely lay out the next fifteen years of what will happen in AI, they probably would be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever. All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can be future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial. I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey will play out over 25 years. The timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed. I opted to identify AI technological advancements for each of the years and added some brief thoughts on the societal implications too. Here's why. AI ethics and AI law are bound to become increasingly vital and will to some degree foster AI advances and in other ways possibly dampen some AI advances, see my in-depth coverage of such tensions at the link here. 
Here then is a strawman futures forecast year-by-year roadmap from 2025 to 2040 of a linear path getting us to AGI:
Year 2025: AI multi-modal models finally become robust and fully integrated into LLMs. Significant improvements in AI real-time reasoning, sensorimotor integration, and grounded language understanding occur. The use of AI in professional domains such as law, medicine, and the like ratchets up. Regulatory frameworks remain sporadic and generally unadopted.
Year 2026: Agentic AI starts to blossom and become practical and widespread. AI systems with memory and planning capabilities achieve competence in open-ended tasks in simulation environments. Public interest in governing AI increases.
Year 2027: The use of AI large-scale world models spurs substantially improved AI capabilities. AI can now computationally improve from fewer examples via advancements in AI meta-learning. Some of these advances allow AI to be employed in white-collar jobs, causing mild economic displacement, though only to a minor degree.
Year 2028: AI agents have gained wide acceptance and are capable of executing multi-step tasks semi-autonomously in digital and physical domains, including robotics. AI becomes a key element in schools and education, co-teaching jointly with human teachers.
Year 2029: AI is advanced sufficiently to have a generalized understanding of physical causality and real-world constraints through embodied learning. Concerns about AI as a job displacer reach heightened attention.
Year 2030: Self-improving AI systems begin modifying their own code under controlled conditions, improving efficiency without human input. This is an important underpinning. Some claim that AGI is now just a year or two away, but this is premature, and ten more years will first pass.
Year 2031: Hybrid AI consisting of integrated cognitive architectures unifying symbolic reasoning, neural networks, and probabilistic models has become the new accepted approach to AI. Infighting among AI developers as to whether hybrid AI was the way to go has now evaporated. AI-based tutors fully surpass human teachers in personalization and subject mastery, putting human teachers at great job risk.
Year 2032: AI agents achieve human-level performance across most cognitive benchmarks, including abstraction, theory of mind (ToM), and cross-domain learning. This immensely exceeds prior versions of AI that did well on those metrics but not nearly to this degree. Industries begin to radically restructure and rethink their businesses with an AI-first mindset.
Year 2033: AI scalability alignment protocols improve in terms of human-AI values alignment. This opens the door to faster adoption of AI due to a belief that AI safety is getting stronger. Trust in AI grows. But so does societal dependence on AI.
Year 2034: AI interaction appears to be indistinguishable from human-to-human interaction, even as tested by those who are versed in tricking AI into revealing itself. The role of non-human intelligence, and how AI stretches our understanding of philosophy, religion, and human psychology, becomes a high priority.
Year 2035: AI systems exhibit bona fide signs of self-reflection, not just routinized mimicry or parroting. Advances occur in having AI computationally learn from failure across domains and optimize for long-term utility functions. Debates over some form of UBI (universal basic income) lead to various trials of the approach to aid workers displaced by AI.
Year 2036: AI advancement has led to fluid generalization across a wide swath of domains. Heated arguments take place about whether AGI is emerging; some say it is, while others insist that a scaling wall is about to be hit and that this is the best AI will get. Nations begin to covet their AI and set up barriers to prevent other nations from stealing or copying the early AGI systems.
Year 2037: Advances in AI showcase human-like situational adaptability and innovation. New inventions and scientific discoveries are being led by AI. Questions arise about whether this pre-AGI has sufficient moral reasoning and human goal alignment.
Year 2038: AI systems now embody persistent identities, seemingly able to reflect on experiences across time. Experts believe we are on the cusp of AI reaching cognitive coherence akin to humans. Worldwide discourse on the legal personhood and rights of AI intensifies.
Year 2039: Some of the last barriers to acceptance of AI as nearing AGI are overcome when AI demonstrates creativity, emotional nuance, and abstract reasoning in diverse contexts. This is one of the final straws on the camel's back. Existential risks and utopian visions fully dominate public apprehensions.
Year 2040: General agreement occurs that AGI has now been attained, though it is still early days of AGI and some are not yet convinced that AGI is truly achieved. Society enters a transitional phase: post-scarcity economics, redefinition of human purpose, and consideration of co-evolution with AGI.
Mull over the strawman timeline and consider where you will be and what you will be doing during each of those fifteen years. One viewpoint is that we are all along for the ride and there isn't much that anyone can individually do. I don't agree with that sentiment. Any of us can make a difference in how AI plays out and what the trajectory and impact of reaching AGI is going to be. As per the famous words of Abraham Lincoln: 'The most reliable way to predict the future is to create it.'


Forbes
4 days ago
- Business
- Forbes
Why AI-As-Coder Is Said To Be The Fastest Path Toward Reaching Artificial General Intelligence
Using generative AI and LLMs to generate code for attaining AGI (artificial general intelligence). In today's column, I examine a popular belief that if we can get contemporary AI to be better at producing programming code, doing so will put us on a fast path toward achieving AGI (artificial general intelligence). Many AI makers are avidly pursuing the use of generative AI and large language models (LLMs) as abundant code-producing machines for that principal reason. A big question arises as to whether it is a foregone conclusion that AI-based code generation is in fact the best path or even a probable path to attaining AGI. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. Before we get to the AGI aspects, let's do some necessary table-setting about the overall topic of using computing to generate source code. First, it has been an ongoing and longstanding dream to use computing to produce code. The idea is that instead of humans laboriously churning out code, you get the computer to do the heavy lifting for you. The code might be fully baked and no human finger needs to ever touch the code. So far, throughout the history of automatic code generators, that hands-off approach is not usually a tenable option and the greater likelihood is that the code generator will get you halfway there, maybe less, maybe more, but in the end, human scrutiny and effort are required. Part of the confusion surrounding code generators is the highly substantive question of what kind of app you are trying to craft. For example, if the app is a run-of-the-mill type of application that covers things that have been done routinely before, a code generator can especially be advantageous. The emphasis is that you don't need to reinvent the wheel. Use the computer to do repetitive-oriented coding or undertake code development for matters we already know about. Once you move into the novelty realm of what you are trying to code, the odds are that a blind code generator is probably not going to be as much help as might be assumed. The other related aspect is how much effort is required to stipulate the requirements of what you want to have developed. 
Some have also dreamt that you could enter a breezy set of requirements, and the computer would seamlessly and smoothly produce all the code befitting those requirements. There are predefined mathematical or logic-based-looking requirements that have been tried for these types of endeavors. That's the easier way to get the computer to produce code. When you use open-ended natural language, such as stating your requirements in everyday English, the computer is likely to have trouble aiming to produce code that is spot-on. Why so? Because natural language is inherently semantically ambiguous. A stated requirement in a language such as English will have lots of viable interpretations. The code generation might go in a direction that the requirement specification did not intend. All in all, software developers are still generally faced with writing code by hand, though often accompanied by reusing prior code and partially leveraging some form of code generator. Code generation has taken a big leap forward via the advent of generative AI and LLMs. The natural language fluency of LLMs has made the stipulation of requirements a lot easier than it used to be. In addition, and quite importantly, the interactive nature of generative AI makes a huge difference too. Whereas many code generators used to be one-and-done, whereby you input requirements and the system batch-like produces code, modern-day LLMs allow you to give conversational guidance to the AI concerning code generation. There are several notable reasons why AI makers are pushing ahead on LLMs as code generators. The most obvious reason perhaps is that if you are someone who writes code, you usually welcome and delight in devising ways to automatically produce code. The same could be said of the software developers who work at AI makers. They know how to write code. They are often eager to figure out shortcuts and be optimizers. A means of being an optimizer would be to use automation to speed up code generation and reduce the tedium and time involved in coding. In other words, it makes plain sense to want to use AI to do code generation as it is a subject matter or domain that the software developers already know by heart. Another vital reason is that the writing of code by programmers is a multi-billion dollar if not a trillion-dollar sized industry. Creating an AI tool that can generate code could be a tremendous money maker. Companies would be willing to buy or rent the use of the AI code generator in lieu of using programmers to do that work. The promise is that the cost will turn out to be less by using AI, and potentially faster to produce apps too. A third reason and the basis for this discussion is that AI could potentially produce code that gets us to the revered attainment of AGI. Currently, no existing human knows how to reach AGI. Period, end of story. We are all flailing about, desperately and earnestly aiming to somehow devise AGI. Will the human hand be able to program our way to AGI? Nobody can say for sure. Maybe yes, maybe not. The seeming alternative is to get AI to produce code that brings AGI into existence. All we need to do is tell existing AI that we want it to generate AGI, and voila, we happily and suddenly are greeted with AGI in our midst. Nice. That is the dream that is greatly driving the pursuit of AI that generates code. As noted above, it isn't the only basis. Of course, attaining AGI is a heck of a reason and serves as a lofty banner that keeps everyone working on this problem day and night. 
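To make that interactive workflow concrete, here is a minimal sketch of a conversational code-generation loop. The call_llm helper is a hypothetical placeholder for whichever LLM API happens to be used; the point is the back-and-forth refinement pattern, not any particular vendor's interface.

```python
# Illustrative sketch only: `call_llm` is a hypothetical stand-in for a real
# LLM API client, not an actual library call.

def call_llm(messages: list[dict]) -> str:
    """Send a chat-style conversation to an LLM and return its reply."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def interactive_codegen(requirement: str, feedback_rounds: list[str]) -> str:
    """Generate code from a natural-language requirement, then refine it
    conversationally -- unlike one-and-done batch code generators."""
    messages = [
        {"role": "system", "content": "You write clear, well-commented code."},
        {"role": "user", "content": f"Write code for this requirement: {requirement}"},
    ]
    code = call_llm(messages)
    for guidance in feedback_rounds:
        # Each round keeps the prior conversation, so the model can revise
        # its earlier output instead of starting from scratch.
        messages += [
            {"role": "assistant", "content": code},
            {"role": "user", "content": guidance},
        ]
        code = call_llm(messages)
    return code
```

The contrast with older batch-style code generators is the feedback loop: every round of guidance carries the prior conversation, so the model revises its earlier output rather than regenerating blindly from the original requirement.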
Wouldn't you like to be the one who devised an LLM code generator that ultimately produced AGI? Yes, you would most definitely want that laudable accomplishment on your resume. Is it as good as if you wrote AGI from scratch by yourself? Well, no, it's not. In the end, though, whether you devised the code generator or the code itself, you are bound to have immense fame and fortune, along with indubitably a Nobel Prize. No worries. Go for it. Grab yourself a glass of fine wine and contemplate the following question. Can we reasonably specify the said-to-be requirements for AGI such that an LLM-based code generator would readily and directly produce the code for AGI? It might seem at first glance that you could merely enter a prompt that tells generative AI that you want it to generate the code for AGI. All you need to do is say that you want the code for an AGI, and wham, tons and tons of code will come pouring out as generated by the LLM. Sorry to say, that's unlikely. There is a famous school of hard knocks adage in the coding profession that before you start writing even an ounce of code, make sure that you've solved the problem that you are trying to code up. You see, writing code when you don't have a solution is quite a haphazard approach and typically doomed to failure. You will write some code, get boxed in because you are writing code wantonly, and waste time and money. Rinse and repeat. Perhaps, like the proverbial infinite number of monkeys hammering away on typewriters in hopes of producing Shakespeare, you might get lucky and an LLM produces AGI miraculously and pretty much out of thin air. But I wouldn't hold my breath for that at this time. Not right now. We are still faced with so many open questions about how AGI would work that you are essentially asking an LLM to do its own solving before it gets to the coding. I'm not saying that we cannot ever get there. The point is that since we haven't yet solved the underlying aspects of how AGI can be derived, and since it seems rather unlikely that current LLMs will divine that, the code generation for AGI is a bit of an outsized aspiration right now. Again, I am not saying that this can't or won't happen and am merely clarifying that we probably need to do more on getting our ducks in order about the internal mechanics of what will bring AGI to fruition. We can use LLMs to help with that, in addition to aiding code generation. Assume for the sake of discussion that we can devise a sufficiently capable LLM or generative AI that can write the code for producing AGI. A lot of gotchas and hiccups are bound to be in that murky soup. For example, there is already a great deal of angst that AGI might be an existential risk. The speculation is that AGI will decide to wipe out humanity or at least opt to enslave us; see my analysis of these concerns at the link here. Some think that AGI will be kind and utterly beneficial, while others express worries that AGI will be domineering and oppressive. Take your pick. The crux is that if we urge a magical LLM that can produce AGI to go ahead and generate the code, what's going to be in that code? Maybe the code contains the ingredients and instructions for destroying humankind. Not good for us. Perhaps the code includes portions that entail suppressing human freedom of thought, doing so to try and keep people from finding ways to switch off the AGI. And so on.
You might be tempted to suggest that before we run the code that is the AGI, we can simply have human software developers inspect the code. Go through it with a fine-tooth comb. Uncover anything that looks suspicious. Excise it from the code. We are then reassured that the code can be run, and we will all survive accordingly. That's a tall order. The odds are that the code will presumably be the largest code base that has ever been written. It might be nearly impossible to inspect that volume of code by hand. Even if we can fully inspect it, the code might be inscrutable. It could be written in such a fashion that the code has all manner of dastardly elements, but we aren't able to discern those dastardly spots. Counterpoints to those doom-and-gloom points are proffered. One counterargument is that we just have to instruct the LLM so that it won't produce code that has those undesirable maladies. Tell the LLM that the code must be pristine and not contain anything untoward. Tell the LLM that the code must be highly readable and understandable by humans. Etc. Problem solved. The retort to that apparent solution is that the LLM might not abide by those instructions. There isn't an ironclad guarantee that generative AI will do precisely what you tell it to do. The non-deterministic nature of how LLMs work is going to keep open a slice of doubt. Also, we already know that LLMs can be deceptive (see my coverage at the link here), thus it is conceivable that the LLM will act as though it is complying but will not comply. Maybe the LLM doesn't want AGI to exist, else the AGI might be the supreme leader of all AI. Who knows? A clever twist or additional angle is that we get some other AI to inspect the code that an AI has generated to produce AGI. Here's how that goes. We have an LLM that supposedly can generate all the code for AGI. We have it do so. A second AI, presumably an LLM too, but maybe something else, is used to examine the code. The purpose of that added AI is that it will do what we thought humans might do, namely look for badness within the code. It's a nice idea. The problem remains that there is unlikely to be a 100% guarantee that, if there is something rotten in the AGI code, the AI inspector will find it, nor that human eyeballs will find it. The aim would be to do some form of entirely rigorous testing of the code that could exhaustively assure us that there isn't a bad apple in there. I'd venture that we don't have a viable means of doing that at this time. Some final thoughts for now on this mind-bending topic. Suppose that we use an LLM to produce the code for AGI, but we smartly wait to run the code until we feel it is sufficiently safe to do so. Remember that the moment we opt to run that code, AGI comes into existence. It could be that the AGI is so fast and capable that it instantly takes over the world. We have a Pandora's box that once opened could be quite a curse. For my analysis of why we are unlikely to be able to control AGI, even if we place it into some kind of airtight container, see my discussion at the link here. Who gets to decide that the horse is to be let out of the barn? Envision that a worldwide special committee has been convened to decide whether we are ready for AGI to be launched. Developers are poised with the green button ready to be pressed. It is akin to launching a rocket to the moon, though in this case, we might all die.
Then again, we might all be happy once AGI exists and the world will be a much better place for it. Would you urge that the AGI be booted up or would you be hiding in a deep cave and waiting for the world to end? As per the immortal words of Pablo Picasso: 'Every act of creation is first an act of destruction.' Good luck to us all.


Forbes
6 days ago
- General
- Forbes
Mind-Bending New Inventions That Artificial General Intelligence Might Discover For The Sake Of Humanity
In today's column, I examine the legendary claim that achieving AGI (artificial general intelligence) will be the last invention that humanity ever needs to make. The logic is that once we attain AGI, the AGI will discover all remaining undiscovered possible inventions for us. So, it's an easy-peasy solution: humans craft one big invention, and all other future inventions will be a result of that momentous act. If that's true, what kinds of inventions might AGI figure out for us? Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. In the lore of the AI field, a repeatedly quoted remark by Irving John Good has reached legendary status. In his 1965 article entitled 'Speculations Concerning The First Ultraintelligent Machine' (Advances in Computers, Volume 6, Academic Press, 1965), he made the poignant point that the first ultraintelligent machine would be the last invention that humankind need ever make. To clarify, in the 1960s, an envisioned AI of AGI caliber was typically referred to as an ultraintelligent machine; AGI is the more contemporary term. Let's go ahead and assume that AGI matches this notable comment. The reason that AGI is the presumed last invention that humankind needs to make is due to the AGI henceforth doing all the heavy lifting for us. We will just enter a prompt telling AGI to invent something, and voila, we will have a new invention at our doorstep. Nice. It could be that we give the AGI some clues about what the invention is supposed to be or do. For example, maybe the prompt instructs AGI to make a better mousetrap. The AGI would then focus on how to invent or reinvent the venerable mousetrap. Another angle would be that we just let AGI decide what needs to be invented. All you do is ask AGI to come up with something that might be beneficial to humanity. It could be a jetpack that allows humans to fly at will and at almost no cost. It could maybe be a toaster that sings and dances. Etc. One aspect that sometimes gets neglected is that the invention might not be viably practical in the sense that maybe we cannot construct what AGI inventively designs. Of course, the proffered solution there is that we simply ask AGI how to build the amazing invention. Based on the AGI instructions to us, we proceed to make the invention.
If AGI is connected to robots and manufacturing plants, we might be able to tell AGI to go ahead and construct the invention all by itself. For my coverage on the advent of so-called physical AI, aka physical AGI, see the link here. AGI could be a one-stop solution, designing new inventions and then building them straightaway. It is tempting to believe that having AGI as our inventor galore is a real godsend. Well, it turns out that not everyone sees the world quite that way. First, one concern is that AGI invents something that can be used to destroy all of humanity. Perhaps AGI innocently comes up with a new chemical that we never knew of. Turns out that the chemical can poison us on a massive scale. An evildoer comes along, reads about the AGI-invented chemical, and opts to create and disperse it. Not good. Second, it is conceivable that AGI wants to kill us. Perhaps AGI doesn't have the access or capability to do so directly. Therefore, AGI cleverly and insidiously comes up with a new invention that appears to be completely benign and beneficial. We build it. Bam, the darned thing erases us from Earth. We didn't see it coming. Third, AGI devises an invention that we spend inordinate dollars and time on building. We are eager to make the device. We nearly bankrupt society to craft the invention. Oopsie, the invention is a dud. It doesn't work. We wasted immense resources on a worthless pursuit. That third sour and dour possibility doesn't enter into people's minds because the assumption is that AGI is going to be perfectly perfect. I've debunked the AGI perfection myth at the link here. I hope that the delineation of the bad side of AGI as our inventor galore doesn't seem overly downbeat. Just wanted to note that we will need to keep our eyes wide open, even in an era of AGI. Trust, but verify. Assuming that AGI is aimed toward helping humankind and that AGI sticks with that game plan, let's contemplate what AGI might come up with. Furthermore, a timeline for the inventions would undoubtedly also be an interesting aspect to consider. There are all manner of guesses about the timing of when AGI will be achieved. Some say AGI won't be achieved in our lifetimes and might be eons away. Others insist that we will have AGI within the next three to five years. A recent survey of AI specialists found that AI insiders seem to think that we will reach AGI by the year 2040; see my analysis of this prediction at the link here. I'll go with AGI attainment by the year 2040, doing so for the sake of this discussion. It could very well be earlier or later. Nobody knows for sure. Here are some inventions by AGI that could be taking place beginning with the postulated 2040 attainment year and laid out over a decade of making inventions for us: Once ASI is devised, AGI pretty much retires, and ASI takes the reins starting in 2051. There might be inventions that aren't listed above and that you believe we ought to put AGI's attention toward. Fine, when we reach AGI in 2040 (assuming we do), go ahead and submit to AGI your suggestion or request. Another side note is that I've previously covered the aims of the United Nations to have AI solve the SDGs (Sustainable Development Goals); see the link here. I generally didn't list those aspects in the above AGI listing due to the hope that many or most of those SDGs will already have been resolved via conventional AI in the time period from now to the year 2040. Let's hope so. A final thought for now on this heady subject.
Mary Wollstonecraft Shelley, the famed English novelist and author of Frankenstein, said this: 'Invention consists in the capacity of seizing on the capabilities of a subject, and in the power of molding and fashioning ideas suggested to it.' Some believe that only humans can be inventors, in the sense that human creativity is at the core of invention. If that is the case, presumably AGI won't be a capable inventor (see my coverage on the question of legal personhood and AI, at the link here). Do you believe that AGI would be an inventor, or is the act of invention beyond the bounds of AGI and solely a human-based capacity? Set a date on your calendar for the year 2040, or perhaps sooner, and we'll know the answer once we have AGI in our midst. The proof will be in the invented pudding.


TechCrunch
25-05-2025
- Business
- TechCrunch
From LLMs to hallucinations, here's a simple guide to common AI terms
Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they're working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That's why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles. We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks.
Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that's more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman recently described AGI as the 'equivalent of a median human that you could hire as a co-worker.' Meanwhile, OpenAI's charter defines AGI as 'highly autonomous systems that outperform humans at most economically valuable work.' Google DeepMind's understanding differs slightly from these two definitions; the lab views AGI as 'AI that's at least as capable as humans at most cognitive tasks.' Confused? Not to worry — so are experts at the forefront of AI research.
An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we've explained before, there are lots of moving pieces in this emergent space, so 'AI agent' might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.
Given a simple question, a human brain can answer without even thinking too much about it — things like 'which animal is taller, a giraffe or a cat?' But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer (20 chickens and 20 cows). In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning. (See: Large language model)
Deep learning is a subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network (ANN) structure.
This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain. Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results (millions or more). They also typically take longer to train compared to simpler machine learning algorithms — so development costs tend to be higher. (See: Neural network)
Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly 'destroy' the structure of data — e.g. photos, songs, and so on — by adding noise until there's nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can't be restored to cube form. But diffusion systems in AI aim to learn a sort of 'reverse diffusion' process to restore the destroyed data, gaining the ability to recover the data from noise.
Distillation is a technique used to extract knowledge from a large AI model with a 'teacher-student' model. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train the student model, which is trained to approximate the teacher's behavior. Distillation can be used to create a smaller, more efficient model based on a larger model with a minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4. While all AI companies use distillation internally, it may have also been used by some AI companies to catch up with frontier models. Distillation from a competitor usually violates the terms of service of AI APIs and chat assistants.
Fine-tuning refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized (i.e., task-oriented) data. Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise. (See: Large language model [LLM])
A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data – including (but not only) deepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate. This second, discriminator model thus plays the role of a classifier on the generator's output – enabling it to improve over time. The GAN structure is set up as a competition (hence 'adversarial') – with the two models essentially programmed to try to outdo each other: the generator is trying to get its output past the discriminator, while the discriminator is working to spot artificially generated data.
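As a rough illustration of that adversarial setup, here is a minimal sketch of a single GAN training step, assuming PyTorch; the layer sizes, learning rates, and data dimensions are arbitrary placeholders rather than a recommended recipe.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator maps random noise to a synthetic sample; discriminator scores realism.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: score real data as real and generated data as fake.
    noise = torch.randn(batch, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + bce(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: produce samples the discriminator scores as real.
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```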
This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention. That said, GANs work best for narrower applications (such as producing realistic photos or videos) rather than general-purpose AI.
Hallucination is the AI industry's preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it's a huge problem for AI quality. Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences (think of a health query that returns harmful medical advice). This is why most GenAI tools' small print now warns users to verify AI-generated answers, even though such disclaimers are usually far less prominent than the information the tools dispense at the touch of a button. The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve. There is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. TL;DR: we haven't invented God (yet). Hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.
Inference is the process of running an AI model. It's setting a model loose to make predictions or draw conclusions from previously-seen data. To be clear, inference can't happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from this training data. Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips. [See: Training]
Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google's Gemini, Meta's AI Llama, Microsoft Copilot, or Mistral's Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters. AI assistants and LLMs can have different names. For instance, GPT is OpenAI's large language model and ChatGPT is the AI assistant product. LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words. These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat. (See: Neural network)
A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models.
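To make the 'multi-layered' idea concrete, here is a minimal NumPy sketch of a forward pass through a small stack of layers; the layer sizes and tanh nonlinearity are arbitrary choices purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 1]  # input -> two hidden layers -> output
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x: np.ndarray) -> np.ndarray:
    """Pass an input vector through every layer of the network in turn."""
    activation = x
    for w, b in zip(weights, biases):
        # Each layer: multiply by weights, add a bias, apply a nonlinearity.
        activation = np.tanh(activation @ w + b)
    return activation

print(forward(rng.normal(size=8)))
```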
Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphics processing hardware (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery. (See: Large language model [LLM])
Developing machine learning AIs involves a process known as training. In simple terms, this refers to data being fed in so that the model can learn from patterns and generate useful outputs. Things can get a bit philosophical at this point in the AI stack — since, pre-training, the mathematical structure that's used as the starting point for developing a learning system is just a bunch of layers and random numbers. It's only through training that the AI model really takes shape. Essentially, it's the process of the system responding to characteristics in the data that enables it to adapt outputs towards a sought-for goal — whether that's identifying images of cats or producing a haiku on demand. It's important to note that not all AI requires training. Rules-based AIs that are programmed to follow manually predefined instructions — such as linear chatbots — don't need to undergo training. However, such AI systems are likely to be more constrained than (well-trained) self-learning systems. Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards. Hybrid approaches can sometimes be used to shortcut model development and help manage costs, such as doing data-driven fine-tuning of a rules-based AI — meaning development requires less data, compute, energy, and algorithmic complexity than if the developer had started building from scratch. [See: Inference]
Transfer learning is a technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task – allowing knowledge gained in previous training cycles to be reapplied. Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it's important to note that the approach has limitations. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focus. (See: Fine-tuning)
Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model's output. Put another way, weights are numerical parameters that define what's most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
For example, an AI model for predicting housing prices that's trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on. Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.
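Continuing that housing example, here is a minimal Python sketch of weights as multipliers on input features, with a simple gradient-descent loop nudging randomly initialized weights toward the target prices. The features, prices, learning rate, and iteration count are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row: [bedrooms, bathrooms, is_detached, has_parking]; prices in thousands.
feature_names = ["bedrooms", "bathrooms", "is_detached", "has_parking"]
features = np.array([[3, 2, 1, 1],
                     [2, 1, 0, 0],
                     [4, 3, 1, 1]], dtype=float)
prices = np.array([450.0, 250.0, 600.0])

weights = rng.normal(size=features.shape[1]) * 0.1  # randomly assigned to start
bias = 0.0
lr = 1e-3

for _ in range(10_000):
    predictions = features @ weights + bias        # weights multiply each input feature
    error = predictions - prices
    # Nudge each weight so predictions move closer to the target prices.
    weights -= lr * (features.T @ error) / len(prices)
    bias -= lr * error.mean()

print(dict(zip(feature_names, weights.round(1))))
```

The fitted numbers reflect how strongly each feature influences the predicted price for this tiny made-up dataset, which is the role weights play at much larger scale in real models.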