The new programming language: English

In case you haven't noticed, the pedantic, annoying colleagues around you who constantly correct your grammar mid-speech are having a moment. As AI models become a big part of our work and personal lives, all of us have realized that the one thing we absolutely need to prompt an AI model to do our bidding is command over plain old English: an ability to express what we want, in words, phrases and complete sentences, in brief, with clarity. The more specific your prompt, the more likely you are to cajole the right answer, the right image or video, or, as software programmers are realizing, the right code out of an AI model.
This is dramatically different from what computer programming was five years ago. For the longest time, we interacted with machines through special programming languages that the machine could understand and humans had to learn. These were rooted in mathematics: sets of instructions, zeros and ones, syntax, data, variables, functions and code that a computer could parse and execute to perform specific tasks. In the 1960s, engineers instructed computers using COBOL and BASIC; C++ came to dominate in the 1980s, before giving way to the Internet-era languages Python and Java. Computer engineers spent years learning these syntaxes and functions, excelling in the logical art of conversing with a computer to create software, websites, applications and other technologies.
STEM universities focused all their energies on educating their students in these languages so they could join the workforce to interact with computers and build digital systems as we know them. So much so that the global programming language training market ballooned from $3.32 billion in 2018 to a projected $8.53 billion in 2028, an increase of around 10% year-on-year, according to a Technavio report.
All this while, the dictionary and a command of vocabulary gathered dust in the digital world of instant messaging. Complete sentences and words gave way to distorted language, misspellings, abbreviations, memes and emojis. The aforementioned pedantic colleagues and editors were told they suffered from an ailment similar to OCD: Grammatical Pedantry Syndrome (GPS).
AI models like it when you're precise and concise
Just when we had resigned ourselves to breakups and business communication conducted with thumbs-up and hearts, along came AI models, and software engineers, along with the rest of us, suddenly needed to brush up on our language skills. In February this year, Andrej Karpathy, OpenAI co-founder and the AI age's favourite software philosopher, was the first to name this phenomenon of co-coding with an AI model. He created a rather poetic term, 'vibe coding', to describe his bromance with LLMs (Large Language Models), in which he would cajole the model into creating code for an application he wanted to build. 'You fully give in to the vibes, embrace exponentials and forget that the code even exists,' he said in a tweet, adding that all he needed to interact with the LLM was a command over the English language so he could give the model the right prompts. 'The hottest new programming language,' he concluded, 'is English.'
Along with prompt engineering, vibe coding soon became shorthand for a relaxed, intuitive approach to programming through constant conversation and iteration with an AI model such as Claude, Gemini or ChatGPT: a style of building software not by writing lines of code or syntax, but by describing what you want in succinct, plain, simple English words and phrases.
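In practice, the 'program' is just a request in plain English sent to a model. Here is a minimal sketch, assuming the official openai Python client; the prompt and model name are illustrative, not prescriptive:

```python
from openai import OpenAI  # assumes the official `openai` package is installed

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "program" is a plain-English description of the software we want.
prompt = (
    "Build a small Python command-line to-do app: add tasks, list tasks, "
    "mark them done. Store everything in a local JSON file."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model is available to you
    messages=[{"role": "user", "content": prompt}],
)

# The generated code comes back as ordinary text, ready to iterate on.
print(response.choices[0].message.content)
```

Iterating then means replying in English ('add due dates', 'make the output prettier') rather than editing syntax by hand.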
'I don't remember ever, in the history of computer science, where we've abdicated logic to machines,' exclaimed Martin Casado, general partner at Andreessen Horowitz, one of the most influential venture capital firms in Silicon Valley. Talking with his colleagues on a recent podcast, he wondered whether we need to rethink what it means to be a programmer, or what it means to create software. What Casado and most of the AI community are grappling with is how AI models have dramatically upended developer workflows, tooling and code, blurring the lines between technical know-how and communication, between human and machine.
Spec-writing and context are the new superpowers
For the written brief to an AI model is quickly becoming the most important artifact the modern engineer produces. At the recently concluded AI Engineer World's Fair in San Francisco, software engineers discussed how writing code was only 10-20% of what they produced. 'The most valuable skill is communicating intent with precision,' said Sean Grove from OpenAI, in a talk titled 'The New Code'.
Last month, the AI community coined yet another term for this precise interaction with an AI model to develop software, perhaps first used in a tweet by Tobi Lutke, CEO of Shopify: context engineering. Karpathy jumped on the term, further popularizing it. 'Context engineering is the delicate art and science of filling the context window with just the right information for the next step,' he explained in a tweet. By tailoring the information an AI system receives, context engineering makes LLM output more effective. Give a model too much information, or a wrong description, and it will not give you accurate answers; prompt it with too little, and the output will be too general. Already, the proponents of context engineering are building its lexicon: techniques such as Retrieval-Augmented Generation (RAG), quarantine, pruning, summarization and offloading, and failure modes such as poisoning and distraction.
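The core idea can be shown in a few lines of code. Below is a toy sketch of RAG-style context packing, assuming a simple word-overlap relevance score and a character budget standing in for a real token limit; real systems use embeddings and proper token counting:

```python
import re

# A toy sketch of context engineering: rank candidate snippets by relevance,
# then pack the best ones into a fixed context budget before prompting a model.

def score(snippet: str, query: str) -> int:
    # Illustrative relevance score: how many query words appear in the snippet.
    words = re.findall(r"\w+", query.lower())
    return sum(w in snippet.lower() for w in words)

def build_context(snippets: list[str], query: str, budget: int = 130) -> str:
    # Pack the highest-scoring snippets first; skip whatever doesn't fit
    # (a crude stand-in for retrieval, pruning and token counting).
    packed, used = [], 0
    for s in sorted(snippets, key=lambda s: score(s, query), reverse=True):
        if used + len(s) <= budget:
            packed.append(s)
            used += len(s)
    return "\n\n".join(packed)

docs = [
    "RAG retrieves documents relevant to a query before generation.",
    "The office cafeteria serves lunch from noon to two.",
    "Pruning removes low-value text from the context to save tokens.",
]

# The irrelevant cafeteria line scores zero and is left out of the context.
print(build_context(docs, "How does RAG use context?"))
```

Give the model too little and the answer is generic; stuff in the cafeteria line and you waste the budget. Packing just the right snippets is the whole game.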
Job requirements in the Valley and across the world have changed to reflect this. As AI models become part of an engineer's life and role requirements, companies are looking for context engineers who can not only understand code and architect a product, but also converse with an AI model in plain English. Last week, grappling with the AI onslaught on its business, Wipro Ltd announced that it had mandated English competency tests for its senior executives, adding that those who fare poorly might be put on performance improvement plans.
Does a context engineer need to write code?
As more and more students rely on AI models to write code for them, will the software developers of the future be able to write code at all? As a test engineer in India who visited a university to recruit told me: 'Students are coding using AI models and don't even know how to write a 'for' loop in Python.' But with context becoming everything, do they even need to know what a 'for' loop is? That is the looming question for STEM universities today.
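For reference, the construct in question is about as basic as code gets:

```python
# A 'for' loop in Python: print the numbers 0 through 4 and their squares.
for n in range(5):
    print(n, n * n)
```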
One thing's for sure: the awkwardly mumbling geek with faltering English is dead. The new-age software programmer will need a better grasp of vocabulary (unless that, too, is outsourced to AI models), be able to communicate with AI models and agents, and be concise and clear about their intentions.
Though some AI models do understand other languages, such as Chinese, Japanese and even French and German, their contextual understanding is limited for now, with higher rates of hallucination and error. Most of the models have been trained on English-language data (though this might change, in the quicksand world of AI). While the other languages catch up, it's time to brush up on your ability to instruct, describe and express your intent in plain English. For writing code right now means full-sentence conversations with an AI system.
Shweta Taneja is an author and journalist based in the Bay Area. Views expressed are personal.