
Who will build the next generation of digital products?
AI is reshaping software development. Programmers write code: their job is to translate product specifications into text a computer can understand, compile, and execute. There is some room for creativity in architecting an application to simplify future maintenance or improve performance. However, the most creative aspects of digital product development (e.g., mobile, wearable, web, and desktop apps) involve humans, from identifying their challenges when contemplating a product to gathering their feedback when refining it. Successful digital products require a team of creative, critical thinkers. A typical team includes product owners, designers, developers, project managers, quality assurance specialists, and marketers.
In recent years, low-code and no-code tools have grown more capable, making it possible for non-developers to create prototypes and proofs of concept. AI takes that to another level, both in the ease of building and in the capabilities of the resulting products.
Quantity versus quality in this new era
Firebase Studio and tools like it generate code, not a black box. If the product fails to meet requirements for any reason, it can either be regenerated from a revised prompt or maintained like a traditional software project by engineers skilled in the project's programming language. That kind of troubleshooting and maintenance benefits from a trained developer's knowledge and experience.
I'm in favor of tools that help people get ideas out of their heads and onto a device where others can interact with them and provide feedback. Cocktail napkins may be relegated to their more limited wicking role. But these new capabilities have the potential to rapidly expand the number of available digital products (web products, apps in the App Store, etc.). We could face a quantity-over-quality challenge, with significantly more rough products obscuring the diamonds. Marketing teams will have a harder time lifting their product above the noise to be noticed.
Technology and the role of human creativity
The promise for non-developers is compelling, but what about professionals? Those with extensive digital product development experience will benefit from advanced AI tools for design exploration, code generation, and test authoring. Drawing on that experience, they can direct AI tools to produce higher-quality results than someone less experienced using the same tools. Lessons learned from prior projects should yield better prompts and, ultimately, better products. A powerful tool in the hands of someone who knows how to wield it can produce excellent results.
A cautionary tale
Decades ago, I was involved in the third rewrite of a struggling software product in the education market. The business owners were convinced their product's problems were tied to their programming language choice. It was early in my career, and the migration was to a language I enjoyed, so I didn't think to ask questions.
As with previous rewrites, the feature set remained essentially the same. The basic architecture remained the same. And the team driving the requirements remained the same. Not surprisingly, the rewrite results were also the same. The product was not noticeably faster or easier to use, and customers didn't like it any more than the previous version.
I learned an important lesson from the project. Technology by itself (in that case, a programming language) rarely moves the product success needle. Solving the right problem with the right basic approach, driven by user feedback and backed by the marketing skills and budget to spread the word, has a far greater impact on the results. The same is true for AI. It is a fantastic tool that bestows on unskilled people the superpower to make really awful software products faster than they ever could before.
Beyond programming: Solve the right problem with empathy
While greater accessibility for non-developers and more powerful tools for skilled professionals are positive, the outlook isn't all rosy. Currently, AI is best at building products when given specific instructions: the more detail we provide, the better the results. That is very similar to the programmer's role. Businesses that currently thrive by turning detailed specifications into functioning software products have the most to lose, precisely because AI tools most closely match that skillset.
There is much more to digital product development than programming. Our clients value creative thinking, critical thinking, empathy, and a passion for creating compelling user experiences. Those characteristics are vital in ensuring we're solving the correct problems for our clients. Less programming effort will be required to build digital products. However, for the foreseeable future, there will still be a significant need for the distinctly human creativity and skills that make products great.