The Future Of Code: How AI Is Transforming Software Development


Forbes · 04-04-2025
Rupesh Dabbir is a Software Engineering Manager at Google with over a decade of experience building highly scalable systems in the cloud.
The emergence of artificial intelligence (AI) is transforming the software engineering domain in ways we couldn't have imagined just a few years ago. What was once entirely dependent on human creativity and problem-solving is now being enhanced, and in some cases automated, by an ever-growing array of AI tools. Although this shift brings challenges, it also opens up opportunities for engineers to rethink their roles and adapt to the changing technology landscape.
As AI becomes deeply integrated into how software engineers write code, it's essential to understand how developers can take advantage of it and thrive in this new era. Software engineering roles are increasingly shifting toward AI-assisted programming, using tools like GitHub Copilot and Cursor that not only make coding more efficient but also free developers to focus on core business logic and problem-solving rather than repetitive tasks.
This paradigm shift can enhance collaboration and increase efficiency. However, it also raises concerns about job displacement and the need for reskilling, making it crucial for software engineers to invest in education that helps them upskill in AI.
Will AI replace human jobs? This is difficult to say, but the integration of AI into software engineering will likely create new opportunities built on a partnership between machines and humans, who can harness AI's problem-solving ability while maintaining oversight. Software engineering involves much more than inserting code snippets. It demands creativity and collaboration among multiple stakeholders (e.g., the user experience team, product team and technical program managers) to address complex problems and deliver innovations that meet customer needs. Ultimately, the product being built must serve real customer use cases.
The current state of AI presents ethical challenges that need to be tackled, such as data privacy and bias in algorithmic decision-making. As AI is embedded in decision-making frameworks, it's important to guarantee fairness, transparency and accountability to uphold public confidence and encourage innovation through responsible AI.
Addressing these ethical considerations will require engagement from not just developers but also policymakers and committees responsible for upholding responsible AI use companywide to ensure diverse perspectives are considered. To this end, it's essential to establish clear guidelines on ethical AI use.
Through AI, monotonous tasks such as migrations, code generation, test executions, test generation and debugging can be delegated to sophisticated systems. This frees up developers' time to focus on the important aspects of the software development life cycle, increasing overall productivity and innovation.
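As an illustration, here is a minimal sketch of delegating one such repetitive task, a codebase-wide API migration, to an LLM. It assumes the OpenAI Python SDK; the model name, the deprecated load_config() helper and its Config.from_file() replacement are hypothetical placeholders, not any real project's API.

from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Rewrite this Python module to replace every call to the deprecated "
    "load_config() helper with the new Config.from_file() API. "
    "Return only the updated source code."
)

def migrate_file(path: Path) -> None:
    """Send one file to the model and write the migrated source back."""
    source = path.read_text()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whichever model you use
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": source},
        ],
    )
    path.write_text(response.choices[0].message.content)

for file in Path("src").rglob("*.py"):  # migrate the whole tree in one pass
    migrate_file(file)

In practice a pipeline like this would run behind a diff review rather than writing files blindly, but it captures the shape of the workflow: the engineer defines the transformation once, and the model applies it everywhere.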
AI can help developers analyze errors or find vulnerabilities or bugs that were overlooked in software development. This can significantly improve the overall quality and reliability of the code, thus saving a lot of time for the entire organization. This can also lead to the identification of issues before the code lands in production.
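A minimal sketch of that idea, reusing the assumed OpenAI client from the previous example: ask a model to flag suspected bugs and vulnerabilities in a file before it ships. The prompt wording and the reviewed file path are illustrative only.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def find_issues(source_code: str) -> str:
    """Return a model-written list of suspected bugs and vulnerabilities."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are a code security reviewer. List likely bugs, "
                "injection risks and unsafe patterns, with line references.",
            },
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

# Surface the findings in CI so issues appear before the code reaches production.
with open("app/handlers.py") as f:  # hypothetical file under review
    source = f.read()
print(find_issues(source))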
Code review is an area where AI truly stands out. By automating parts of the review process, AI tools can rapidly uncover potential issues, recommend enhancements and deliver valuable insights, letting developers concentrate on more sophisticated tasks while helping ensure that high-quality code is preserved throughout the development life cycle.
AI can help developers refactor their code to make it clear, readable and maintainable. These tools can also provide developers with an understanding of the security implications and vulnerabilities of their code, helping ensure the code is safe and backward-compatible with older systems and adheres to the best industry standards and coding practices.
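For instance, an assistant asked to make the following function clearer might propose the rewrite shown after it. Both versions are illustrative sketches; the point is that the refactor preserves behavior while improving readability.

# Before: dense, index-heavy and hard to scan.
def total(items):
    t = 0
    for i in range(len(items)):
        if items[i]["qty"] > 0:
            t = t + items[i]["qty"] * items[i]["price"]
    return t

# After: the kind of refactor an assistant typically suggests, with
# direct iteration, a generator expression and a descriptive name.
def order_total(items: list[dict]) -> float:
    """Sum price times quantity over items with a positive quantity."""
    return sum(item["qty"] * item["price"] for item in items if item["qty"] > 0)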
Automating code review is one of the strongest use cases for AI. AI can suggest changes and identify potential mistakes in the code, saving developers time and letting them focus on writing high-quality code throughout the development cycle.
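A sketch of wiring such a review into a developer workflow: feed the staged git diff to a model and print its comments before committing. The git invocation is standard; the model call reuses the assumed OpenAI client from the earlier sketches.

import subprocess

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def review_staged_changes() -> str:
    """Ask a model to review whatever is currently staged in git."""
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout
    if not diff:
        return "Nothing staged to review."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Review this diff. Point out bugs, style problems "
                "and risky changes, and suggest concrete improvements.",
            },
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_staged_changes())  # e.g., invoked from a pre-commit hook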
AI tools can have a significant impact on testing and debugging processes. They can be useful for automating test case generation, creating boilerplate test code and analyzing test results, thus significantly reducing the overall time required to identify and fix issues.
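To make that concrete, here is the sort of parametrized pytest a model typically drafts for a small function. The function under test is order_total from the refactoring sketch above, assumed here to live in a hypothetical shop module.

import pytest

from shop import order_total  # hypothetical module holding the earlier example

@pytest.mark.parametrize(
    ("items", "expected"),
    [
        ([], 0),  # empty order
        ([{"qty": 2, "price": 3.0}], 6.0),  # single line item
        ([{"qty": 0, "price": 9.0}], 0),  # zero quantities are skipped
        ([{"qty": 1, "price": 1.5}, {"qty": 2, "price": 2.0}], 5.5),  # mixed order
    ],
)
def test_order_total(items, expected):
    assert order_total(items) == expected

Generated suites like this still need a human pass, since models can end up asserting the code's current behavior, bugs included, rather than the intended behavior.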
It's evident that software engineering is undergoing significant changes driven by advancements in AI. There's a pressing need to evaluate existing practices and establish new frameworks that prioritize responsible AI as part of the development life cycle. The real challenges lie in adapting to these changes while ensuring the benefits of AI are distributed equitably across society, so that innovation can thrive without compromising ethical principles.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
Related Articles

How to Keep Your Home Private on Google Street View
CNET · 6 hours ago

It's easy to think of Google Maps and Street View as nothing more than handy navigation tools, but they actually show a lot more than directions. A quick search can pull up a clear image of your front door, yard and driveway for anyone to see. At first glance, that might not seem like a big deal, but it does raise privacy concerns. If you'd rather not have your home visible to strangers online, a few quick steps can help protect your privacy and limit how much of your personal space is exposed. Here's how. For more, check out essential Google Maps tips for travel.

How to blur your home on Google Maps

You'll need to do this on your computer, since the blurring feature isn't available in the Google Maps application on iOS or Android. It is accessible through the web browser on your mobile device, but it's rather difficult to use there, so your best option is a trusted web browser on your Mac or PC.

At, enter your home address in the search bar, hit return, then click the photo of your home that appears above your address at the top-left of the page. Next, you'll see the Street View of your location. Click Report a Problem at the bottom-right; the text is quite small, but it's there.

Now it's up to you to choose what you want Google to blur. Using your mouse, adjust the view so that your home and anything else you want blurred is contained within the red and black box, using your cursor to move around and the plus and minus buttons to zoom in and out. Once you're finished adjusting the image, choose what you're requesting to blur underneath: a face, your home, a car/license plate or a different object. You'll then be asked for more detail about exactly what you want blurred, in case the image is busy with several cars, people and other objects; be thorough, and be completely sure that what you select is exactly what you want blurred, because Google cautions that once something on Street View is blurred, it's blurred permanently.

Finally, enter your email (this is required), verify the captcha (if needed) and click Submit. You should then receive an email from Google saying it will review your report and get back to you once the request is either denied or approved, and you may receive further emails asking for more information about your request. Google doesn't say how long a request takes to process, so keep an eye out for follow-up emails. For more, take an inside look at how Google built Immersive View for Maps.

Criminals, good guys and foreign spies: Hackers everywhere are using AI now
NBC News · 7 hours ago

This summer, Russia's hackers put a new twist on the barrage of phishing emails sent to Ukrainians. The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims' computers for sensitive files to send back to Moscow. That campaign, detailed in July in technical reports from the Ukrainian government and several cybersecurity companies, is the first known instance of Russian intelligence being caught building malicious code with large language models (LLMs), the type of AI chatbots that have become ubiquitous in corporate culture.

Those Russian spies are not alone. In recent months, hackers of seemingly every stripe — cybercriminals, spies, researchers and corporate defenders alike — have begun incorporating AI tools into their work. LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions, translating plain language into computer code, and identifying and summarizing documents. The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it's making skilled hackers better and faster. Cybersecurity firms and researchers are using AI now, too — feeding into an escalating cat-and-mouse game between offensive hackers who find and exploit software flaws and the defenders who try to fix them first.

'It's the beginning of the beginning. Maybe moving towards the middle of the beginning,' said Heather Adkins, Google's vice president of security engineering. In 2024, Adkins' team started a project to use Google's LLM, Gemini, to hunt for important software vulnerabilities, or bugs, before criminal hackers could find them. Earlier this month, Adkins announced that her team had so far discovered at least 20 important overlooked bugs in commonly used software and alerted companies so they could fix them. That process is ongoing. None of the vulnerabilities have been shocking or something only a machine could have discovered, she said. But the process is simply faster with AI. 'I haven't seen anybody find something novel,' she said. 'It's just kind of doing what we already know how to do. But that will advance.'

Adam Meyers, a senior vice president at the cybersecurity company CrowdStrike, said that not only is his company using AI to help people who think they've been hacked, but he also sees increasing evidence of its use by the Chinese, Russian, Iranian and criminal hackers his company tracks. 'The more advanced adversaries are using it to their advantage,' he said. 'We're seeing more and more of it every single day,' he told NBC News.

The shift is only starting to catch up with the hype that has permeated the cybersecurity and AI industries for years, especially since ChatGPT was introduced to the public in 2022. Those tools haven't always proved effective, and some cybersecurity researchers have complained about would-be hackers falling for fake vulnerability findings generated with AI. Scammers and social engineers — the people in hacking operations who pretend to be someone else, or who write convincing phishing emails — have been using LLMs to seem more convincing since at least 2024. But using AI to directly hack targets is only just starting to take off, said Will Pearce, the CEO of DreadNode, one of a handful of new security companies that specialize in hacking using LLMs.

The reason, he said, is simple: The technology has finally started to catch up to expectations. 'The technology and the models are all really good at this point,' he said. Less than two years ago, automated AI hacking tools needed significant tinkering to do their job properly, but they are now far more adept, Pearce told NBC News.

Another startup built to hack using AI, Xbow, made history in June by becoming the first AI to climb to the top of the HackerOne U.S. leaderboard, a live scoreboard that since 2016 has tracked the hackers identifying the most important vulnerabilities and given them bragging rights. Last week, HackerOne added a new category for groups automating AI hacking tools to distinguish them from individual human researchers. Xbow still leads that category.

Hackers and cybersecurity professionals have not settled whether AI will ultimately help attackers or defenders more. But at the moment, defense appears to be winning. Alexei Bulazel, the senior cyber director at the White House National Security Council, said at a panel at the Def Con hacker conference in Las Vegas last week that the trend will hold, at least as long as the U.S. holds most of the world's most advanced tech companies. 'I very strongly believe that AI will be more advantageous for defenders than offense,' Bulazel said. He noted that hackers finding extremely disruptive flaws in a major U.S. tech company is rare, and that criminals often break into computers by finding small, overlooked flaws in smaller companies that don't have elite cybersecurity teams. AI is particularly helpful in discovering those bugs before criminals do, he said. 'The types of things that AI is better at — identifying vulnerabilities in a low-cost, easy way — really democratizes access to vulnerability information,' Bulazel said.

That trend may not hold as the technology evolves, however. One reason is that there is so far no free-to-use automatic hacking tool, or penetration tester, that incorporates AI. Such tools are already widely available online, nominally as programs that test for flaws, a practice also used by criminal hackers. If one incorporates an advanced LLM and becomes freely available, it will likely mean open season on smaller companies' programs, Google's Adkins said. 'I think it's also reasonable to assume that at some point someone will release [such a tool],' she said. 'That's the point at which I think it becomes a little dangerous.'

Meyers, of CrowdStrike, said that the rise of agentic AI — tools that carry out more complex, multistep tasks, like writing and sending emails or executing code — could prove a major cybersecurity risk. 'Agentic AI is really AI that can take action on your behalf, right? That will become the next insider threat, because, as organizations have these agentic AI deployed, they don't have built-in guardrails to stop somebody from abusing it,' he said.

Google's Gmail Warning—Hackers Gain Access To User Accounts
Forbes · 9 hours ago

Republished on August 17 with a new Gmail hacking warning. Google has confirmed that Gmail attacks are surging as hackers steal passwords to gain access to accounts. This also means a surge in 'suspicious sign in prevented' emails, Google's warning that 'it recently blocked an attempt to access your account.' Attackers know this — that Gmail users' concerns are heightened by security warnings — and they use it to frame their attacks. 'Sometimes hackers try to copy the 'suspicious sign in prevented' email,' Google warns, 'to steal other people's account information,' which then gives those hackers access to user accounts.

If you receive this Google email warning, do not click on any link or button within the email itself. Instead, 'go to your Google Account, on the left navigation panel, click security, and on the recent security events panel, click to review security events.' If any of the events raise concerns — times, locations or devices you do not recognize — then 'on the top of the page click secure your account' to change your password.

If you do click a link from within this email, or any other email purporting to come from Google, you will be taken to a sign-in page that is a malicious fake. If you enter your username and password on that page, you risk them being stolen by hackers to hijack your account. And that will give them access to everything. This is the same risk as the recent Amazon refund scam, which texts a link for a fake Amazon refund but actually steals login credentials.

The answer is twofold. First, never click any such link in a text message or email. And second, add passkeys to your Google, Amazon and other accounts to stop such hijacks. This exploitation of seemingly legitimate emails, messages and calls that perfectly mimic the content and style of the real thing has become an alarming theme in the last year. It also includes exploiting legitimate infrastructure to add authenticity. Beyond adding passkeys and shoring up two-factor authentication with something other than SMS, the key rule is never to use links to access accounts. Always use your app or the sign-in page you usually use in your browser.

Account hijacks are painful, and while there are mechanisms to recover lost accounts, these can be time-consuming and will not stop the content in your account from being stolen. It takes just seconds to secure your accounts — do that now.

Those same account defenses will protect you from the latest Gmail attacks, which use fake voicemail notifications to steal login credentials. Malware analyst Anurag flagged the issue on Reddit, as a 'seemingly harmless' email claimed 'I had a 'New Voice Notification'' with 'a big 'Listen to Voicemail' button.' After clicking the link, per Cybersecurity News, the attack 'systematically captures and exfiltrates all entered data through encrypted channels. The system is designed to handle various Gmail security features, including: Primary email and password combinations, SMS and voice call verification codes, Google Authenticator tokens, Backup recovery codes, Alternative email addresses, Security question responses.'

Anurag says 'this campaign is a good example of how phishing operations abuse legit services (Microsoft Dynamics, SendGrid) to bypass filters, and use captchas as both a deception tool and a barrier against automated security tools. Staying alert and performing deep inspection of suspicious emails is crucial. A single click on the Gmail phishing login could have led to stolen credentials.'
