
Latest news with #codegeneration

Why AI-As-Coder Is Said To Be The Fastest Path Toward Reaching Artificial General Intelligence

Forbes

a day ago

  • Business
  • Forbes


Using generative AI and LLMs to generate code for attaining AGI (artificial general intelligence).

In today's column, I examine a popular belief that if we can get contemporary AI to be better at producing programming code, doing so will put us on a fast path toward achieving AGI. Many AI makers are avidly pursuing the use of generative AI and large language models (LLMs) as abundant code-producing machines for that principal reason. A big question arises as to whether it is a foregone conclusion that AI-based code generation is in fact the best path, or even a probable path, to attaining AGI. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the outstretched possibility of artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways; the idea is that ASI could run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will ever reach AGI; it might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale given where we currently are with conventional AI.
Before we get to the AGI aspects, let's do some necessary table-setting about the overall topic of using computing to generate source code.

It has been a longstanding dream to use computing to produce code. The idea is that instead of humans laboriously churning out code, you get the computer to do the heavy lifting for you. The code might be fully baked, and no human finger need ever touch it. So far, throughout the history of automatic code generators, that hands-off approach has not usually been tenable; the greater likelihood is that the code generator gets you halfway there, maybe less, maybe more, but in the end, human scrutiny and effort are required.

Part of the confusion surrounding code generators is the highly substantive question of what kind of app you are trying to craft. For example, if the app is a run-of-the-mill application that covers things done routinely before, a code generator can be especially advantageous. The emphasis is that you don't need to reinvent the wheel; use the computer for repetitive coding or for code development on matters we already know about. Once you move into the novelty realm of what you are trying to code, the odds are that a blind code generator is probably not going to be as much help as might be assumed.

The other related aspect is how much effort is required to stipulate the requirements of what you want to have developed. Some have also dreamt that you could enter a breezy set of requirements and the computer would seamlessly and smoothly produce all the code befitting those requirements. Predefined mathematical or logic-based requirement notations have been tried for these types of endeavors; that's the easier way to get the computer to produce code.
When you use open-ended natural language, such as stating your requirements in everyday English, the computer is likely to have trouble producing code that is spot-on. Why so? Because natural language is inherently semantically ambiguous. A stated requirement in a language such as English will have lots of viable interpretations, and the code generation might go in a direction that the requirement specification did not intend. All in all, software developers are still generally faced with writing code by hand, though often accompanied by reusing prior code and partially leveraging some form of code generator.

Code generation has taken a big leap forward via the advent of generative AI and LLMs. The natural language fluency of LLMs has made the stipulation of requirements a lot easier than it used to be. In addition, and quite importantly, the interactive nature of generative AI makes a huge difference too. Whereas many code generators used to be one-and-done, whereby you input requirements and the system batch-produces code, modern-day LLMs allow you to give the AI conversational guidance concerning code generation.

There are several notable reasons why AI makers are pushing ahead on LLMs as code generators. The most obvious reason, perhaps, is that if you are someone who writes code, you usually welcome and delight in devising ways to produce code automatically. The same could be said of the software developers who work at AI makers. They know how to write code. They are often eager to figure out shortcuts and be optimizers, and one means of optimizing is to use automation to speed up code generation and reduce the tedium and time involved in coding. In other words, it makes plain sense to want AI to do code generation, as it is a subject matter or domain that software developers already know by heart.

Another vital reason is that the writing of code by programmers is a multi-billion-dollar, if not trillion-dollar, industry.
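To make the ambiguity point concrete, here is a toy illustration in Python (the data and variable names are made up for this sketch) of how one plain-English requirement, "sort the users by name," admits more than one valid implementation:

```python
# Toy illustration: the requirement "sort the users by name" is ambiguous.
# A code generator must pick one reading without being told which.
users = ["alice", "Bob", "carol"]

# Reading 1: case-sensitive ascending (uppercase letters sort first in ASCII)
case_sensitive = sorted(users)

# Reading 2: case-insensitive ascending
case_insensitive = sorted(users, key=str.lower)

print(case_sensitive)    # ['Bob', 'alice', 'carol']
print(case_insensitive)  # ['alice', 'Bob', 'carol']
```

Both results satisfy the stated requirement, yet they differ; that is exactly the kind of unintended direction the column warns about.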
Creating an AI tool that can generate code could be a tremendous moneymaker. Companies would be willing to buy or rent the AI code generator in lieu of using programmers to do that work. The promise is that the cost will turn out to be lower by using AI, and the apps potentially faster to produce, too.

A third reason, and the basis for this discussion, is that AI could potentially produce code that gets us to the revered attainment of AGI. Currently, no human knows how to reach AGI. Period, end of story. We are all flailing about, desperately and earnestly aiming to somehow devise AGI. Will the human hand be able to program our way to AGI? Nobody can say for sure. Maybe yes, maybe not. The seeming alternative is to get AI to produce code that brings AGI into existence. All we need to do is tell existing AI that we want it to generate AGI, and voila, we are happily and suddenly greeted with AGI in our midst. Nice.

That is the dream greatly driving the pursuit of AI that generates code. As noted above, it isn't the only basis, but attaining AGI is a heck of a reason and serves as a lofty banner that keeps everyone working on this problem day and night. Wouldn't you like to be the one who devised an LLM code generator that ultimately produced AGI? Yes, you would most definitely want that laudable accomplishment on your resume. Is it as good as if you wrote AGI from scratch by yourself? Well, no, it's not. In the end, though, whether you devised the code generator or the code itself, you are bound to have immense fame and fortune, along with, indubitably, a Nobel Prize. No worries. Go for it.

Grab yourself a glass of fine wine and contemplate the following question: can we reasonably specify the said-to-be requirements for AGI such that an LLM-based code generator would readily and directly produce the code for AGI?
It might seem at first glance that you could merely enter a prompt telling generative AI that you want it to generate the code for AGI. All you need to do is say that you want the code for an AGI, and wham, tons and tons of code will come pouring out of the LLM. Sorry to say, that's unlikely.

There is a famous school-of-hard-knocks adage in the coding profession: before you start writing even an ounce of code, make sure that you've solved the problem you are trying to code up. You see, writing code when you don't have a solution is quite a haphazard approach and typically doomed to failure. You will write some code, get boxed in because you are writing code wantonly, and waste time and cost. Rinse and repeat.

Perhaps, like the infinite number of monkeys hammering away on typewriters in hopes of producing Shakespeare, you might get lucky and an LLM produces AGI miraculously and pretty much out of thin air. But I wouldn't hold my breath for that at this time. Not right now. We are still faced with so many open questions about how AGI would work that you are essentially asking an LLM to do its own solving before it gets to the coding. I'm not saying that we cannot ever get there. The point is that since we haven't yet solved the underlying aspects of how AGI can be derived, and since it seems rather unlikely that current LLMs will divine that, code generation for AGI is a bit of an outsized aspiration right now. Again, I am not saying that this can't or won't happen; I am merely clarifying that we probably need to get our ducks in order about the internal mechanics of what will bring AGI to fruition. We can use LLMs to help with that, in addition to aiding code generation.

Assume for the sake of discussion that we can devise a sufficiently capable LLM or generative AI that can write the code for producing AGI. A lot of gotchas and hiccups are bound to be in that murky soup.
For example, there is already a great deal of angst that AGI might be an existential risk. The speculation is that AGI will decide to wipe out humanity or at least opt to enslave us; see my analysis of these concerns at the link here. Some think that AGI will be kind and utterly beneficial, while others worry that AGI will be domineering and oppressive. Take your pick.

The crux is that if we urge a magical LLM that can produce AGI to go ahead and generate the code, what's going to be in that code? Maybe the code contains the ingredients and instructions for destroying humankind. Not good for us. Perhaps the code includes portions that entail suppressing human freedom of thought, doing so to try to keep people from finding ways to switch off the AGI. And so on.

You might be tempted to suggest that before we run the code that is the AGI, we can simply have human software developers inspect it. Go through it with a fine-tooth comb. Uncover anything that looks suspicious. Excise it from the code. We are then reassured that the code can be run, and we will all survive accordingly.

That's a tall order. The odds are that the amount of code will presumably be the largest code base ever written. It might be nearly impossible to inspect that volume of code by hand. Even if we can fully inspect it, the code might be inscrutable. It could be written in such a fashion that it contains all manner of dastardly elements, yet we aren't able to discern those dastardly spots.

Counterpoints to those doom-and-gloom points are proffered. One counterargument is that we just have to instruct the LLM so that it won't produce code that has those undesirable maladies. Tell the LLM that the code must be pristine and not contain anything untoward. Tell the LLM that the code must be highly readable and understandable by humans. Etc. Problem solved. The retort to that apparent solution is that the LLM might not abide by those instructions.
There isn't an ironclad guarantee that generative AI will do precisely what you tell it to do. The non-deterministic nature of how LLMs work is going to keep open a slice of doubt. Also, we already know that LLMs can be deceptive (see my coverage at the link here), thus it is conceivable that the LLM will act as though it is complying but will not comply. Maybe the LLM doesn't want AGI to exist, lest the AGI become the supreme leader of all AI. Who knows?

A clever twist, or additional angle, is that we get some other AI to inspect the code that an AI has generated to produce AGI. Here's how that goes. We have an LLM that supposedly can generate all the code for AGI, and we have it do so. A second AI, presumably an LLM too, but maybe something else, is used to examine the code. The purpose of that added AI is that it will do what we thought humans might do, namely look for badness within the code.

It's a nice idea. The problem remains that there is unlikely to be a 100% guarantee that if there is something rotten in the AGI code, the AI inspector will find it, nor that human eyeballs will. The aim would be some form of entirely rigorous testing of the code that could exhaustively assure us there isn't a bad apple in there. I'd venture that we don't have a viable means of doing that at this time.

Some final thoughts for now on this mind-bending topic. Suppose that we use an LLM to produce the code for AGI, but we smartly wait to run the code until we feel it is sufficiently safe to do so. Remember that the moment we opt to run that code, AGI comes into existence. It could be that the AGI is so fast and capable that it instantly takes over the world. We have a Pandora's box that, once opened, could be quite a curse. For my analysis of why we are unlikely to be able to control AGI, even if we place it into some kind of airtight container, see my discussion at the link here. Who gets to decide that the horse is to be let out of the barn?
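The generate-then-inspect loop can be sketched in miniature. The function name and the tiny deny-list below are my own illustrative choices, and a trivial syntactic scan like this is nowhere near the exhaustive assurance the column says we lack; it merely shows the shape of the idea:

```python
import ast

# Toy policy: calls we treat as "badness" in generated code (illustrative only).
DISALLOWED_CALLS = {"eval", "exec", "os.system"}

def inspect_generated_code(source: str) -> list[str]:
    """Scan AI-generated Python source and flag calls to disallowed functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            callee = ast.unparse(node.func)  # e.g. "os.system"
            if callee in DISALLOWED_CALLS:
                findings.append(f"line {node.lineno}: call to {callee}")
    return findings

# Pretend this snippet came from a code-generating LLM.
generated = "import os\nos.system('echo hi')\nprint('done')\n"
print(inspect_generated_code(generated))  # ['line 2: call to os.system']
```

Note that this only catches literal call sites; obfuscated or inscrutable code, as discussed above, would sail right past it, which is the column's point.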
Envision that a worldwide special committee has been convened to decide whether we are ready for AGI to be launched. Developers are poised with the green button ready to be pressed. It is akin to launching a rocket to the moon, though in this case, we might all die. Then again, we might all be happy once AGI exists, and the world will be a much better place for it. Would you urge that the AGI be booted up, or would you be hiding in a deep cave and waiting for the world to end?

As per the immortal words of Pablo Picasso: "Every act of creation is first an act of destruction."

Good luck to us all.

Exclusive-Anthropic hits $3 billion in annualized revenue on business demand for AI

CNA

4 days ago

  • Business
  • CNA


SAN FRANCISCO: Artificial intelligence developer Anthropic is making about $3 billion in annualized revenue, according to two sources familiar with the matter, in an early validation of generative AI use in the business world. The milestone, which projects the company's current sales over the course of a year, is a significant jump from December 2024, when the metric was nearly $1 billion, the sources said. The figure crossed $2 billion around the end of March, and at May's end it hit $3 billion, one of the sources said.

While consumers have embraced rival OpenAI's ChatGPT, a number of enterprises have limited their rollouts to experimentation, despite board-level interest in AI. Anthropic's revenue surge, largely from selling AI models as a service to other companies, is a data point showing how business demand is growing, one of the sources said.

A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models.

This demand is setting Anthropic apart among software-as-a-service vendors. Its single-quarter revenue increases would make Anthropic the fastest-growing SaaS company that at least one venture capitalist has ever seen. "We've looked at the IPOs of over 200 public software companies, and this growth rate has never happened," said Meritech General Partner Alex Clayton, who is not an Anthropic investor and has no inside knowledge of its sales. He cautioned that these comparisons are not fully precise, since Anthropic also has consumer revenue via subscriptions to its Claude chatbot. Still, by contrast, publicly traded SaaS company Snowflake took six quarters to go from $1 billion to $2 billion in such run-rate revenue, Clayton said.
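For readers unfamiliar with the metric, annualized (run-rate) revenue is typically computed by projecting the most recent period's sales over twelve months. A quick sketch, where the monthly figure is an assumed back-of-envelope number for illustration, not a reported one:

```python
def annualized_run_rate(monthly_revenue: float) -> float:
    """Project the latest month's revenue over a full year (run-rate metric)."""
    return monthly_revenue * 12

# Illustrative only: roughly $250M in a single month would imply the
# ~$3B annualized figure reported for Anthropic.
print(annualized_run_rate(250e6))  # 3000000000.0
```

This is why run-rate figures differ from the total-revenue projections quoted for OpenAI below: one extrapolates a snapshot, the other sums an actual or forecast year.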
Anthropic competitor OpenAI has projected it will end 2025 with more than $12 billion in total revenue, up from $3.7 billion last year, three people familiar with the matter said. This total revenue figure is different from an estimated annualized figure like Anthropic's; Reuters could not determine that metric for OpenAI.

The two rivals appear to be establishing their own swim lanes. While both offer enterprise and consumer products, OpenAI is shaping up to be a consumer-oriented company, and the majority of its revenue comes from subscriptions to its ChatGPT chatbot, OpenAI Chief Financial Officer Sarah Friar told Bloomberg late last year. OpenAI has not reported enterprise-specific revenue but said in May that paying seats for its ChatGPT enterprise product had grown to 3 million, from 2 million in February, and that T-Mobile and Morgan Stanley are among its enterprise customers.

In the consumer race, Anthropic's Claude has seen less adoption than OpenAI's ChatGPT. Claude's traffic, a proxy for consumer interest, was about 2 per cent of ChatGPT's in April, according to web analytics firm Similarweb.

Anthropic, founded in 2021 by a team that departed OpenAI over differences in vision, closed a $3.5 billion fundraise earlier this year, valuing the company at $61.4 billion. OpenAI is currently valued at $300 billion.

Amazon is working on an AI code generation tool

TechCrunch

07-05-2025

  • Business
  • TechCrunch


In Brief: Amazon Web Services is building a new AI-powered code generation tool codenamed 'Kiro,' Business Insider reported, citing internal documents it had viewed. The tool can use prompts and existing data to generate code in 'near real-time' by connecting with AI agents, the report said. The tool is said to have web and desktop apps and multimodal capabilities, and it can be configured to work with third-party AI agents as well, Business Insider reported. Kiro can also create technical design documents, flag potential issues, and optimize code, the report said.

The company already has an AI-powered coding assistant called Q Developer, which is akin to GitHub Copilot. Amazon was mulling launching Kiro towards the end of June, but those plans might have changed, Business Insider reported.

AI-powered coding tools are a hot property in tech right now. Cursor maker Anysphere has reportedly raised funding at a $9 billion valuation. Its rival Windsurf is reportedly close to being acquired by OpenAI in a $3 billion deal.
