Latest news with #softwareengineering


Forbes
12 hours ago
- Business
- Forbes
How Engineering Teams Are Reimagining Work Through AI
Bratin Saha, Chief Product and Technology Officer at DigitalOcean, tech executive with 20 years of experience across AI and cloud computing. AI is one of the most transformative technologies of our time. Given the rapid pace of AI innovation, it seems that practically every leader is thinking about how they can reimagine their work using AI. While this may be daunting and will take persistence, I believe the reward is well worth it. Having worked on many projects using AI, I want to discuss the factors that are critical to ensuring the success of these projects—from an iterative mindset to new mechanisms and rigor in tracking metrics. AI For Coding AI is rapidly emerging as an indispensable tool for software engineers. As teams explore AI-powered coding assistants, it's essential to look beyond 'lines of code generated' and consider impact on code quality, security and long-term maintainability. Many engineering teams are now experimenting with tools like GitHub Copilot, and we've personally seen up to 40% increases in code generation. But productivity isn't the only metric worth tracking—security, code quality and maintainability are just as critical. One helpful practice is to implement internal evaluation systems that compare AI-generated and human-authored code for defects, rollback frequency and overall impact on velocity. Our initial findings suggest that AI-generated code can match or even outperform human benchmarks in some areas, though consistent monitoring remains essential. For leaders considering similar integrations, there are a few principles that can help guide responsible adoption. Define baseline metrics early, evaluate AI output with the same rigor applied to human-authored code and build feedback loops to inform ongoing tool selection. Thoughtful experimentation combined with clear evaluation criteria is key to realizing the value of AI without compromising quality or trust. AI For Rootcausing Cloud Incidents One way of figuring out where to add AI is to understand where your employees are spending the most time and then automating that activity with AI. Cloud engineers, for instance, typically spend over 20% of their time troubleshooting incidents. Additionally, high availability is critical for customers who rely on cloud services. AI can play a powerful role in accelerating incident resolution, especially when engineers are spending a significant portion of time on root cause analysis. As an example, we developed a GenAI-powered site reliability engineer (SRE) agent that assists engineers by analyzing real-time logs and telemetry to find root causes autonomously during incidents. Engineers can ask follow-up questions and rate accuracy. By eliminating the need to assemble multiple engineering teams for incident triage and diagnosis, this approach can help reduce the time and effort required to resolve issues and restore the service faster. Accuracy is one of the crucial measures of the agent's effectiveness, and achieving that was an iterative process. The agent needs to be trained on high-quality, representative data; it needs to be tightly integrated into incident response workflows with real-time access to observability systems; and it must have mechanisms to learn from new incidents and user feedback. Besides encouraging engineer feedback, one thing we found beneficial is to incorporate the agent into the post-incident review (PIR) process. This retrospective analysis refined the agent's accuracy and functionality by clarifying incident causes and guiding engineers on prompt optimization. Once the agent meets your predefined success metrics, you can then expand its role. In our case, we extended the agent role beyond reactive incident response. By embedding the agent earlier in the incident lifecycle to monitor system alerts, the agent can automatically assess alerts and propose root cause solutions, significantly reducing investigative time for engineers. These continuous and targeted improvements are key to building a successful GenAI agent. AI For Server Maintenance Another way to figure out how to use AI is to consider operations where data analysis can be used to avoid undesirable outcomes and help teams move from a reactive mode (fire fighting) to a proactive mode. For example, server downtime in a data center is undesirable because it directly impacts the service uptime. Servers usually do not fail out of the blue; there is a pattern of malfunction that can be detected by closely monitoring server health with AI tools. At DigitalOcean, we use AI to analyze logs in real time, providing a confident root cause to engineers. This analysis can help repair machines faster while reducing repeat outages. We also collect messages that are emitted by the operating system or the out-of-band management controllers of servers, and perform a rules-based evaluation to trigger the removal of workloads from at-risk machines. If a stick of RAM issues a hardware warning or a disk array degrades in a production hypervisor, AI can automatically migrate customer and internal workloads to healthy machines. Companies can use similar techniques that use AI to perform a real-time analysis of relevant metrics to drive operational improvements and use predictive rather than reactive operations. In Conclusion AI is already changing every aspect of how we work, and it is important for leaders to get in front of it. The most important part is to get started; identify some workflows that are ripe for automation, put together a tiger team and give them the latitude to experiment till they get the AI right. Even if the initial experiments do not work, the learnings are invaluable and set you up for success down the road. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Entrepreneur
22-07-2025
- Business
- Entrepreneur
Ex-Meta Staff Says Software Engineers Make 3 Common Mistakes
Meta's former "coding machine" says that software engineers often make the same mistakes that can hurt their careers — and they can be applied to every profession. Software engineers are likely making a few common mistakes that prevent them from advancing in their careers, says a former Meta senior staff engineer — and these mistakes are general enough to apply to any job. Michael Novati, an engineer who spent eight years at Meta and earned the nickname "Coding Machine" after being the top code committer company-wide for several years, told "The Peterman Pod" that there are three common mistakes engineers and other professionals make that prevent them from moving forward in their careers. Novati, who now works at a remote software engineering fellowship program, Formation, as its chief technology officer, said one problem is "thinking too much" and "not doing enough." Novati said that oftentimes developers come to him to ask for advice or questions without first "turning the gears" and writing code to address the problem. "Step one is do something, just do anything," Novati told "The Peterman Pod." Related: 'It's Laughable': Okta's CEO Says AI Won't Replace Software Engineers Despite Other Tech Leaders' Predictions The second mistake Novati identified is not asking "the right people" for feedback, or not going to "respected people" or "people who have that experience and taste and judgement" for advice on how to improve. He gave the example of his days at Meta, when he was writing so much code that his manager spent all day reviewing it. His manager was the respected person Novati turned to for feedback, because his manager had the "judgement and taste" that Novati aspired to have. The third mistake, which Novati admitted to making "a lot," was not taking action on feedback and taking it more as a harsh judgment or a pat on the back of approval than a call to action. "My advice to people who are ambitious and who want to get those perfect scores and check off all the boxes is to really reflect on feedback, on how you can improve and try to push your comfort zone there, instead of trying to look at it as a judgment or a grade," Novati said. The end goal is to "write a lot of code," get feedback from experienced people, and "actually [take] action" on the feedback, Novati said. Related: OpenAI Is Creating AI to Do 'All the Things That Software Engineers Hate to Do' As AI advances, software engineers might not have to write as much code as they used to, anyway. Microsoft CEO Satya Nadella said in April that engineers at Microsoft are using AI to generate up to 30% of new code at the company. Google CEO Sundar Pichai stated in the same month that Google was generating "well over 30%" of new code with AI. Meanwhile, Anthropic CEO Dario Amodei predicted in March that AI would take over coding completely for all software engineers within a year. Join top CEOs, founders and operators at the Level Up conference to unlock strategies for scaling your business, boosting revenue and building sustainable success.
Yahoo
20-07-2025
- Business
- Yahoo
Man Tells Job Candidate to ‘Shut Up' During Interview After Being Repeatedly Interrupted
A man was pushed to his limits in a job interview and scolded the man he was interviewingNEED TO KNOW A man was pushed to his limits in a job interview and scolded the man he was interviewing After being interrupted multiple times, the interviewer told the man to "shut up and listen" People on the internet sided with the interviewer, saying that he told the man being interviewed what he really needed to hearA man revealed on Reddit that he told a job candidate he needed to "shut up and listen" after an exhausting interview. In the post, the man explained that he and his colleague were "interviewing this guy for a very good software engineering position." The candidate passed the first round, and everyone agreed that he had a great "resume and experience." "On paper, he had everything we were looking for, and honestly, this looked like a life-changing opportunity for him," the poster explains. However, when the man showed up for the second interview, he was "very serious and not smiling at all." After a round of introductions, the man immediately asked if he could tell them a bit about himself. 'He started talking and talking about his personal and professional background. After about three minutes, I jumped in to ask a follow-up question based on something he mentioned. He replied, 'I will answer, just give me a moment,' and continued talking,' the man recalls. 'A coworker jumped in with another question, and he said the same thing to her. But after more than five minutes, I jumped in again with another question. I had to talk over him to do it," the poster adds. "He finally paused and answered, but in such a long-winded way that he ended up veering into another topic." His coworker then asked another question, and the same thing happened. At this point, the poster "was ready to end the interview," and "tried to politely wrap it up several times, but he was unable to read the room and just kept talking." Finally, he "raised [his] voice slightly" and asked the man being interviewed if he had any questions for the team, and he did. However, after each question, the candidate interrupted and found a way to bring the conversation back to himself "I tried once more to interrupt politely, but he kept talking," the poster explains. "At that point, I was done. I said, 'John, you really have to shut up and listen.' He was surprised, as was my coworker, but he finally stopped talking." "I continued, 'You walked into this room with a 99 percent chance of getting the job. Now that chance is zero. The only reason is because, in less than fifteen minutes, you've demonstrated that you don't have the capacity to listen at all. So I'm telling you now, you're not getting the job. But if you take anything away from this interview, let it be this: no matter how good you are technically, if you can't listen, you'll never excel in this career.' " The man being interviewed apologized and asked if they could start again, to which the interviewer declined. The poster wished the interviewee good luck in his job hunt and encouraged him to "listen" more in the future. Looking back, he admits he "could have handled that differently," and "feels bad for the guy," but Reddit users suggested it was likely for the best. 'Honestly, he needed to hear it from someone in a position of hiring authority. Sure, his ego may be bruised, but that likely needed to happen,' one person commented. Another person added, 'You actually gave him more feedback than many people would have.' Read the original article on People Solve the daily Crossword


Forbes
18-07-2025
- Business
- Forbes
AI Security Demystified: Practical Insights For Nontechnical Leaders
Alex Pinaev, CEO of Mobix, is a software engineering expert with Fortune 500 consulting experience in secure development. Machine learning (ML) and generative AI (GenAI) are reshaping the organizational landscape. Companies increasingly recognize that AI drives innovation, helps sustain competitiveness and boosts workforce productivity. It has also become clear that internal company data can provide a unique competitive advantage for AI-powered applications. However, as more developers build solutions on top of large language models (LLMs), organizations are learning that truly cutting-edge results often require complex AI systems, far beyond simple prompt-based tools or plugins. These multicomponent LLM applications open the door to new capabilities but also introduce serious risks. Companies are rightfully concerned about data loss, breaches of confidentiality, model theft and the challenges of meeting current and future compliance obligations when using proprietary data for ML and GenAI. Without strong access controls, users may unintentionally gain access to sensitive information. In customer-facing applications, this can even result in data leaks to competitors. The absence of auditability and traceability further increases the risk of noncompliance. This article is intended for practitioners who understand that GenAI must be implemented—not feared—and who are actively seeking tools and processes to reduce exposure and errors. The Expanding Threat Surface Of LLM Applications It is becoming increasingly obvious that LLMs significantly expand the attack surface. Even relatively simple prompt-based applications are exposed to security risks. These include prompt injection attacks and data leakage caused by poorly constructed or malicious prompts. The absence of context awareness or session management can result in unpredictable or unintended behavior. Additionally, user-generated prompts may produce inconsistent or biased outputs, raising concerns related to regulatory compliance and ethical standards. Agent-based LLM applications, which operate autonomously and interact with multiple systems, pose even greater risks. They are vulnerable to unauthorized access, and their integration across APIs widens the attack surface further. Moreover, weaknesses in their decision making logic can be exploited by attackers. If attackers compromise an autonomous agent, the consequences may be critical, particularly in healthcare, finance or infrastructure. To mitigate such risks, strong access controls, end-to-end encryption and secure APIs are essential. Data integrity and confidentiality must be enforced at rest, in transit and at the interface level. Complex LLM applications face additional challenges due to tight integration with enterprise systems and the handling of large volumes of sensitive data. Common issues include API vulnerabilities, adversarial input attacks and misconfigurations that can result in unauthorized access. The complexity of such systems also makes it harder to maintain compliance across all components. LLM Security Tools: What's Emerging The development of LLM-based systems is still in its early stages, and their risks often fall outside the scope of traditional cybersecurity or DevSecOps practices. While existing standards help reduce risks in conventional software systems, LLMs pose new, model-specific challenges. These models may be trained on data that changes over time, impacting reliability and behavior in ways that are hard to explain. LLMs also operate in complex, socio-technical environments influenced by human behavior, policy and social norms, making failure modes difficult to detect. Static code analysis won't prevent prompt injection or reveal ethical and social biases in model outputs. Understanding the open-source components of an AI agent is useful but far from sufficient. Defending LLM-based systems requires a fresh security strategy and a new development-testing-deployment lifecycle. The good news is that we are beginning to see emerging solutions aimed at closing these security gaps. One of the most promising developments is the introduction of LLM firewalls: security layers specifically designed to protect large language models from unauthorized access, malicious queries and potentially harmful outputs. These firewalls monitor and filter interactions with LLMs, blocking suspicious or adversarial inputs that could influence the model's behavior. They also enforce predefined rules and policies to ensure that the model only responds to legitimate queries within clearly defined ethical and operational boundaries. In addition, LLM firewalls help prevent data leakage and safeguard confidential information by controlling the flow of data into and out of the model. Another emerging category includes LLM vulnerability scanners: specialized tools developed to identify and assess risks unique to large language models. These scanners can detect prompt injection attempts and adversarial input attacks. They evaluate model behavior across various scenarios to uncover vulnerabilities that might go unnoticed by traditional security tools. Three Things You Should've Done Yesterday In time, we can expect the emergence of formal threat models, maturity assessment frameworks, certification standards and a new class of seasoned consultants specializing in LLM security. Until then, here's what pragmatic leaders should do now: Start by identifying whether the system is already in production and clarifying what stage of its lifecycle it's currently in. Document which internal or external systems the application interacts with, and what LLM engine powers it. Specify where the model's source code is stored and who is responsible for owning and maintaining the application. Review the available documentation and note whether a specific training framework was used. Determine whether the system is capable of making autonomous decisions, and finally, assess whether there are any safeguards in place to secure the interaction channel between clients and the model. Stay informed. Security in the GenAI era requires continuous learning. If you have the courage to experiment in this entirely new field on your own, I'd recommend the following hands-on exercise: Deploy any LLM, install a vulnerability scanner designed for language models and observe firsthand how obfuscation and prompt injection attacks work in practice. You might even succeed in making the model produce intentionally false conclusions or reveal secrets it was never meant to disclose. You'll quickly see that LLM vulnerabilities are unlike anything you've encountered in the world of traditional web application security. You'll need a budget, whether for safeguards, audits, tooling or rearchitecture. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Entrepreneur
17-07-2025
- Business
- Entrepreneur
Robinhood Is Using AI to Generate Half of All New Code
Robinhood co-founder and CEO, Vlad Tenev, says almost all engineers at the company have adopted AI coding tools. Engineers at the stock trading and investing app Robinhood are using AI to generate new code instead of writing it themselves. Robinhood CEO Vlad Tenev said on the 20VC podcast earlier this week that "close to 100%" of software engineers at the company are using AI to write blocks of code, tapping into tools like Cursor and Windsurf, which advertise advanced coding, debugging, and editing capabilities. According to Tenev, over 50% of new code at Robinhood is AI-generated, the same percentage as Salesforce. Related: Robinhood Is Offering a Credit Card for the First Time — and It's Available in 10-Karat Gold Tenev said that it was difficult to differentiate between AI-written code and human-created code, estimating that only a "minority" of new code at Robinhood was now written by humans. "It's hard to even determine what the human-generated code is," Tenev said on the podcast. "If I had to guess, it's in the minority." Meanwhile, Google CEO Sundar Pichai and Microsoft CEO Satya Nadella have individually stated that AI writes 30% of the code at their respective companies, putting Robinhood's AI coding adoption ahead of those big tech companies. Robinhood CEO Vlad Tenev. Photo byfor Breakthrough Prize Tenev also said on the podcast that AI has had a "huge" impact on Robinhood internally, affecting teams like customer support. For example, Robinhood built its own version of ChatGPT for customer service. "The impact that it's had on internal teams, ranging from software engineering to customer support, the really big internal teams, has been huge," Tenev said on the podcast. Related: OpenAI Blasts Robinhood for Selling OpenAI Tokens: 'We Do Not Endorse It' Robinhood has more than quadrupled its market capitalization in the past eight months, from $21 billion in November to about $90 billion at the time of writing. In 2024, the company achieved total net revenue of $2.95 billion, up 58% year-over-year.