
Latest news with #AI-produced

Is AI Safe For Software Development?

Forbes

28-07-2025

  • Forbes


Arthur Hicken is chief evangelist at Parasoft, a global leader in automated software testing solutions.

It should come as no surprise that the common response to concerns about AI's safety for software development is: 'It depends.' AI is exceptional at many things, like handling common coding tasks. But when AI is forced to consider context or deal with incomplete instructions, you risk compromising software safety and security.

Software bugs are caused more often by insufficient requirements than by poor code. Add AI to the mix and you introduce more opportunities to compromise code quality, reliability, security and safety. Sure, AI makes it easy to automate software creation and testing—if you start with clearly understood instructions. The same problem extends to the prompts given to the system to generate code or fixes: if a prompt is ambiguous, incomplete or otherwise incorrect, AI is unlikely to produce the correct code.

Unfortunately, a lot of software is developed under a move-fast-and-break-things philosophy, with inexact specifications and evolving rules. Put AI on an imprecise path and the opportunity to veer off course grows significantly. Ambiguity is the enemy of software development, so it is reasonable to expect uncertainty to be one of the biggest obstacles to ensuring AI-produced software is safe for deployment.

AI In The Mirror

The phrase 'garbage in, garbage out' describes the results of working with vague specifications. AI exacerbates this issue because it has not yet learned how to recognize code quality. Today, AI is a mirror of ourselves—not as individuals, but as a whole, because it looks at the sum of software code that humans have written and reflects it back to us. Since few objective measures exist for determining whether code is good or bad, AI simply looks for code that will satisfy the requirement it is given, without taking quality into account.

While matching coding patterns can yield code that does what the developer wants, more learning is required to help AI recognize good coding practices. Major attributes must be considered, such as quality, security, performance and maintainability. Until then, early AI interactions are likely to recommend code that is not completely up to snuff. That is not AI's fault; it simply echoes what it finds without being able to apply context to the results.

The good news is that, over time, AI will become increasingly adept at discerning between good and bad code. But like the newbies on your development team, it needs to be mentored first. Beginner programmers are not given unfettered access until they earn trust by proving their code is reliable, safe, maintainable, editable and scalable. Why should AI-generated code be any different?

The Capricious Prodigy

Think of AI as a capricious prodigy who has the potential to be brilliant but occasionally does something idiotic without warning. Most days, this wunderkind quickly produces helpful code suggestions. While exceedingly clever, however, AI-driven solutions can also present a false sense of security that they know what they're doing all the time, even when that is not true. New whiz kids become easily distracted and careless unless given explicit directions. AI needs around-the-clock oversight from experienced programmers to improve accuracy. Skilled experts are ideally suited to discover early opportunities for AI to become a force multiplier.
Armed with knowledge and experience, senior-level developers are poised to accomplish greater feats of creative problem solving through measured applications of AI. I believe that when AI receives detailed guidance from development experts, the results can lift the entire team's productivity. Conversely, handing AI tools to everyone regardless of skill level is more likely to cause a precipitous drop in productivity along with a spike in software errors.

Not everyone agrees with this thinking, especially developers of AI coding assistants and Meta CEO Mark Zuckerberg, who told Joe Rogan earlier this year that AI will soon replace mid-level engineers. While that is a great aspirational goal, I see it as unrealistic. What is most needed right now are protective barriers that safeguard development teams and give the new generation of capricious prodigies the best shot at success.

It's All About The Guardrails

Guardrails along highways prevent us from driving our cars off the road or into oncoming traffic. AI guardrails for software development perform a similar function by protecting teams from inadvertently introducing software bugs or security risks. That is why it is critical to share rules, best practices and automated controls that guide AI to move decisively within safe and efficient boundaries. I cannot overstress the importance of peer review to ensure your capricious AI prodigy operates within specific parameters, encompassing everything from basic naming conventions to acceptable behaviors. Remember, constant coaching and immediate feedback play equally pivotal roles in training AI and in mentoring the newest member of your development team.

Establishing effective guardrails is not about relying on a single tool, but about creating a strategic ecosystem of safeguards that reinforce one another. For example, static code analysis provides an extra measure of protection by examining code structure, syntax and patterns to find potential bugs, vulnerabilities and other violations—but it must be complemented by other foundational practices to ensure reliability and maintainability at every stage of the development lifecycle. Automated static code analysis can function like a pair programmer or 'buddy system,' flagging bugs, vulnerabilities or deviations from coding standards in real time. When complemented by approaches such as integrated code coverage or unit testing, these safeguards provide critical visibility across the development lifecycle.

Reaping the benefits of AI to boost quality in software development and testing requires experience and mature internal processes. Quality must be viewed as integral to a complete ecosystem of safeguards that ensures seamless collaboration between software developers, testers and quality assurance teams. Regular check-ins, ongoing training, open communication and well-documented processes are essential to keeping pace with rapid AI advancements.

To increase AI's effectiveness and efficacy for software development, apply the right mindset and testing methodologies to all code, generated by humans and AI alike. In doing so, we give AI the best opportunity to fulfill its potential to transform software development, safely and securely.
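As a concrete illustration of the kind of automated static-analysis guardrail described above, here is a minimal sketch of a check that scans AI-generated Python code for a few well-known risky patterns before it is merged. This is not Parasoft's tooling or any particular product's API; the script, rules and file names are hypothetical, and a real pipeline would rely on mature analyzers, unit tests and coverage thresholds instead.

```python
# Hypothetical guardrail sketch: a tiny static check that inspects Python
# source for a few risky patterns and returns a non-zero exit code so a CI
# pipeline can block the change until a human reviews it.
import ast
import sys


def find_issues(source: str, filename: str = "<generated>") -> list[str]:
    """Return human-readable warnings for a few common code-quality smells."""
    issues = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        # Bare `except:` swallows every error, hiding real bugs.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            issues.append(f"{filename}:{node.lineno}: bare 'except' hides failures")
        # `eval` on arbitrary input is a classic security hole.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            issues.append(f"{filename}:{node.lineno}: avoid 'eval' on untrusted input")
        # Mutable default arguments are a frequent correctness trap.
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    issues.append(
                        f"{filename}:{node.lineno}: mutable default argument in '{node.name}'"
                    )
    return issues


if __name__ == "__main__":
    # Usage: python guardrail_check.py generated_module.py
    path = sys.argv[1]
    with open(path, encoding="utf-8") as fh:
        warnings = find_issues(fh.read(), path)
    for warning in warnings:
        print(warning)
    # Non-zero exit lets the pipeline stop the merge rather than trusting
    # the AI (or the developer) to police the code on their own.
    sys.exit(1 if warnings else 0)
```

The point of the sketch is simply that the check runs automatically on every change, human-written or AI-generated, and can stop a risky change before it lands—the "guardrail" role the article describes.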

We've all got to do more to protect kids from AI abuse in schools

New York Post

28-06-2025

  • Science
  • New York Post


For the sake of the next generation, America's elected officials, parents and educators need to get serious about curbing kids' use of artificial intelligence — or the cognitive consequences will be devastating.

As Rikki Schlott reported in Wednesday's Post, an MIT Media Lab study found that people who used large language models like ChatGPT to write essays had reduced critical-thinking skills and attention spans and showed less brain activity while working than those who didn't rely on the AI's help. And over time the AI users grew to rely more heavily on the tech, going from using it for small tweaks and refinements to copying and pasting whole portions of whatever the models spit out.

A series of experiments at UPenn/Wharton had similar results: Participants who used large language models like ChatGPT were able to research topics faster than those who used Google, but lagged in retaining and understanding the information they got. That is, they weren't actually learning as much as those who had to actively seek out the information they needed.

The bottom line: Using AI for tasks like researching and writing makes us dumber and lazier.

Even scarier, the MIT study showed that the negative effects of AI are worse for younger users. That's bad news, because all signs are that kids are relying more and more on tech in classrooms. A Pew poll in January found that some 26% of teens aged 13 to 17 admit to using AI for schoolwork — twice the 2023 level. It'll double again, faster still, unless the adults wake up.

We've known for years how smartphone use damages kids: shorter attention spans, less fulfilling social lives, higher rates of depression and anxiety. States are moving to ban phones in class, but only years after the dangers became obvious — and long after the wiser private schools cracked down. This time, let's address the peril before a generation needlessly suffers irrevocable harm.

Some two dozen states have issued guidance on AI use in classrooms, but that's only a start: Every state's education officials should ensure that every school cracks down. Put more resources into creating reliable tools and methods to catch AI-produced work — and into showing teachers how to stop it and warning parents and students of the consequences of AI overuse.

Absent a full-court press, far too many kids won't build crucial cognitive skills because a chatbot does all the heavy lifting for them while their brains are developing. Overall, AI should be a huge boon for humanity, eliminating vast amounts of busywork. But doing things the hard way remains the best way to build mental 'muscle.'

If the grownups don't act, overdependence on AI will keep spreading through America's classrooms like wildfire. Stop it now — before the wildfire burns out a generation of young minds.

Italy regulator probes DeepSeek over false information risks

Time of India

17-06-2025

  • Business
  • Time of India


Italy's antitrust watchdog AGCM said on Monday it had opened an investigation into Chinese artificial intelligence startup DeepSeek for allegedly failing to warn users that it may produce false information. DeepSeek did not immediately respond to an emailed request for comment. The Italian regulator, which also polices consumer rights, said in a statement DeepSeek did not give users "sufficiently clear, immediate and intelligible" warnings about the risk of so-called "hallucinations" in its AI-produced content. It described these as "situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information." In February, another Italian watchdog, the data protection authority, ordered DeepSeek to block access to its chatbot after it failed to address its concerns on privacy policy.

Italy regulator probes DeepSeek over false information risks

The Hindu

17-06-2025

  • Business
  • The Hindu


Italy's antitrust watchdog AGCM said on Monday it had opened an investigation into Chinese artificial intelligence startup DeepSeek for allegedly failing to warn users that it may produce false information. DeepSeek did not immediately respond to an emailed request for comment. The Italian regulator, which also polices consumer rights, said in a statement DeepSeek did not give users "sufficiently clear, immediate and intelligible" warnings about the risk of so-called "hallucinations" in its AI-produced content. It described these as "situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information." In February, another Italian watchdog, the data protection authority, ordered DeepSeek to block access to its chatbot after it failed to address its concerns on privacy policy.
