Here Are GPT-5 Prompt Engineering Insights Including Crucial AI Prompting Tips And Techniques

Forbes · 7 hours ago
In today's column, I provide GPT-5 prompt engineering tips and techniques that will aid in getting the best outcomes from this newly released generative AI. Just about everyone by now knows that OpenAI finally released GPT-5, doing so after a prolonged period of wildly fantastical speculation about what it would be like.
Well, now we know what it is (see my in-depth review of GPT-5 at the link here).
Bottom line is that GPT-5 is pretty much akin to all the other generative AI and large language models (LLMs) when it comes to doing prompting. The key is that if you want to ensure that GPT-5 works suitably for your needs, you must closely understand how GPT-5 differs from prior OpenAI AI products. GPT-5 has distinctive features and functionality that bring forth new considerations about composing your prompts.
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Top-notch prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI.
Prompting Is Still Tried And True
The first place to begin when assessing GPT-5 from a prompt engineering perspective is that prompts are still prompts.
Boom, drop the mic.
I say that somewhat irreverently. Here's the deal. There was prior conjecture that perhaps GPT-5 would turn the world upside down when it came to using prompts. The floated ideas of how GPT-5 might conceivably function were astounding and nearly out of this world ('it will read your mind', 'it will know what you want before you even know', etc.).
The truth is now known. GPT-5 is essentially a step-up from ChatGPT and GPT-4, but otherwise you do prompting just like you've done all along. There isn't a new kind of magical way to write prompts. You are still wise to compose prompts as you've been doing since the early days of contemporary generative AI.
To clarify, I am emphasizing that you should astutely continue to write clearly worded prompts. Be direct. Don't be tricky. Write prompts that are long enough to articulate your question or task at hand. Be succinct if possible. Definitely don't be overly profuse or attempt to be complicated in whatever your request is. And so on.
Those are all golden rules and remain perfectly intact when using GPT-5. I am confident that all the prompt engineering specialized techniques that I've previously covered will generally work appropriately with GPT-5. Some might require a tweak or minor refinement, but otherwise, they are prudent and ready to go (see my list at the link here).
Auto-Switching Can Be A Headache
We can next consider how to artfully accommodate GPT-5 by composing prompts that it will act on efficiently and effectively.
The biggest aspect that entails both good news and bad news about GPT-5 is that OpenAI decided to include an auto-switcher. This is a doozy. It will require you to potentially rethink some of your prompting since it is quite possible that GPT-5 isn't going to make the right routing decisions on your behalf.
Allow me a moment to explain the quandary.
It used to be that you had to choose which of the various OpenAI products to use for the particular situation at hand. There had been an organic expansion of OpenAI's lineup, which came to include GPT-4o, GPT-4o-mini, OpenAI o3, OpenAI o4-mini, GPT-4.1-nano, and so on. When you wanted to use OpenAI's AI capabilities, you had to select which of those available models to utilize. It all depended on what you were looking to do. Some were faster, some were slower. Some were deeper at certain classes of problems, others were shallower.
It was a smorgasbord that required you to pick the right one as suitable for your task at hand. The onus was on you to know which of the models were particularly applicable to whatever you were trying to do. It could be a veritable hit-and-miss process of selection and tryouts.
GPT-5 now has uplifted those prior versions into new GPT-5 submodels, and the overarching GPT-5 model makes the choice of which GPT-5 submodel might be best for whatever problem or question you happen to ask. The good news is that depending on how your prompts are worded, there is a solid chance that GPT-5 will select one of the GPT-5 submodels that will do a bang-up job of answering your prompt.
The bad news is that the GPT-5 auto-switcher might choose a less appropriate GPT-5 submodel. Oops, your answer will not be as sound as if the more appropriate submodel had been chosen. Worse still, each time that you enter a prompt or start a new conversation, the GPT-5 auto-switcher might switch you to some other GPT-5 submodel, back and forth, doing so in a wanton fashion.
It can make your head spin since the answers potentially will vary dramatically.
Craziness In Design
The average user probably won't realize that all these switcheroo mechanics are happening behind the scenes. I say that because GPT-5 doesn't overtly tell you that it is taking these actions. It just silently does so.
I appreciate that the designers apparently assumed that no one would care or want to know what is going on under the hood. The problem is that those who are versed in using AI and are up-to-speed on prompting are being bamboozled by this hidden behavior.
A savvy user can almost immediately sense that something is amiss.
Frustratingly, GPT-5 won't let you directly control the auto-switching. You cannot tell the AI to use a particular submodel. You cannot get a straight answer if you ask GPT-5 which submodel it intends to use on your prompt. It is perhaps like trying to get the key to Fort Knox. GPT-5 refuses to play ball.
The marketplace has complained vociferously that something needs to be done about this lack of candor regarding GPT-5's model routing. Sam Altman posted on X suggesting that changes are coming in this regard (see his X posting of August 8, 2025).
The thing is, we can applaud the desire to have a seamless, unified experience, but it is similar to having an automatic transmission on a car. Some users are fine with an automatic transmission, but other, more seasoned drivers want to know what gear the car is in and be able to select a gear that they think is most suitable for their needs.
Prompting GPT-5 For Routing
As the bearer of bad news, I should also add that the auto-switching comes with another said-to-be handy internal mechanism that decides how much processing time will be undertaken for your entered prompt.
Again, you have no particular say in this. It could be that the prompt gets tons of useful processing time, or maybe the time is shortchanged. You can't especially control this, and the settings are not within your grasp (as an aside, to some degree, if you are a developer and are using the API, you have more leeway in dealing with this; see the OpenAI GPT-5 System Card for the technical details).
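For developers, a sketch of what that extra leeway looks like, assuming the openai Python SDK and the GPT-5 parameters documented at launch (reasoning_effort and verbosity); parameter names and accepted values may evolve, so check the current API reference before relying on this:

```python
# Sketch: API callers can pin the model and request an effort level directly,
# rather than relying on the consumer app's auto-switcher. Assumes the openai
# Python SDK and GPT-5's launch-era parameters; verify against current docs.
request = {
    "model": "gpt-5",                # or "gpt-5-mini" / "gpt-5-nano"
    "reasoning_effort": "high",      # "minimal" | "low" | "medium" | "high"
    "verbosity": "low",              # "low" | "medium" | "high"
    "messages": [
        {"role": "user", "content": "Outline a test plan for a login flow."}
    ],
}

# To actually send it (requires an API key):
# from openai import OpenAI
# response = OpenAI().chat.completions.create(**request)
```

The point is that the knobs hidden from chat users (which submodel, how much thinking time) are explicit parameters on the API side.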
Let me show you what I've been doing about this exasperating situation.
First, here is a mapping of the prior models to the GPT-5 submodels:
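The chart itself appears as an image in the original column; based on the correspondence described in OpenAI's GPT-5 System Card (treat the exact pairings as approximate), the mapping looks roughly like this:

```python
# Rough mapping of prior OpenAI models to their GPT-5 successors, per the
# GPT-5 System Card (approximate correspondences, not exact equivalents).
PRIOR_TO_GPT5 = {
    "GPT-4o": "gpt-5-main",
    "GPT-4o-mini": "gpt-5-main-mini",
    "OpenAI o3": "gpt-5-thinking",
    "OpenAI o4-mini": "gpt-5-thinking-mini",
    "GPT-4.1-nano": "gpt-5-thinking-nano",
}

for prior, successor in PRIOR_TO_GPT5.items():
    print(f"{prior} -> {successor}")
```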
The GPT-5 submodels are considered successors and depart from the earlier models in various ways. That being said, they are still roughly on par with the earlier models in terms of the relative strengths and weaknesses that previously prevailed.
I will show you what I've come up with to try and sway the GPT-5 auto-switcher.
Prompting With Aplomb
Suppose I have a prompt that I believe would have worked best on GPT-4o. But I am using GPT-5, not GPT-4o, and since OpenAI has indicated that it will sunset the prior models, you might as well get used to using GPT-5.
Darned if you cannot simply tell GPT-5 to use gpt-5-main (realizing that gpt-5-main is now somewhat comparable to GPT-4o, per my chart above). The AI will either tell you it doesn't function that way, or it might imply that it will do as you ask yet then do something else.
Bow to the power of the grand auto-switcher.
This eerily reminds me of The Matrix.
Anyway, we need to somehow convince GPT-5 to do what we want, but we must do so with aplomb. Asking straightaway isn't viable. Swaying the AI is our best bet at this ugly juncture.
In the specific case of my wanting to use gpt-5-main, here is a prompt that I use and seems to do the trick (much of the time):
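An illustrative sketch of a steering prompt along these lines (my paraphrase of the idea, not the column's verbatim wording) might read:

```text
For my next prompt, I want a fast, conversational, everyday response.
This is a simple request that does not require deep, multi-step
reasoning or extended thinking time. Please treat it as a routine,
general-purpose question and answer quickly and directly.
```

The idea is to describe the nature of the upcoming task so that the router leans toward the lighter-weight, chat-oriented submodel.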
It appears that by emphasizing the nature of what I want GPT-5 to do, it seems possible to sway the direction that the auto-switcher will route my next prompt.
Not only might I get the submodel I think is the best choice for the prompt, but notice that I also made a big deal about the depth of reasoning that ought to take place. This potentially helps to kick the AI into giving an allotment of processing time that it, by enigmatic means, might otherwise have shortchanged (OpenAI refers to processing time as so-called 'thinking time' – an anthropomorphizing of AI that I find to be desperate and despairing).
I am not saying this sway-related prompting guarantees results. After trying it a bunch of times, though, it seemed to work as hoped.
I came up with similar prompts for each of the other GPT-5 submodels. If there is enough interest expressed by readers, I will do a follow-up with those details. Be on the watch for that upcoming coverage. On a related note, I will also soon be covering the official GPT-5 Prompting Guide that OpenAI has posted, along with their Prompt Optimizer Tool. Those are aimed primarily at AI developers and not especially about day-to-day, ordinary prompting in GPT-5.
Watch Out That Writing Is Enhanced
On the writing side of things, GPT-5 has improvements in a myriad of writing aspects.
The ability to generate poems is enhanced. Depth of writing has improved, and the AI seems better able to craft compelling stories and narratives. My guess is that the everyday user won't discern much of a difference.
For a more seasoned user, you are bound to notice that the writing has gotten an upgrade. I suppose it is something like getting used to a third grader and now being conversational with a sixth grader. Or something like that.
I use this prompt to get GPT-5 to be closer to the way it was in the GPT-4 series:
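An illustrative version of such a prompt (a sketch of the approach, not the column's exact wording) might be:

```text
For your responses in this conversation, please write in a plain,
straightforward style: shorter sentences, minimal flourish, and no
elaborate narrative framing. Aim for the direct, workmanlike tone
that earlier ChatGPT versions typically used.
```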
That seems to get me the kind of results that I used to see. It is not an ironclad method, but it generally works well.
I realize that some people are going to scream loudly that I ought not to suggest that users revert to the GPT-4 writing style. We all should accept and relish the GPT-5 writing style. Are we going backwards by asking for GPT-5 to speak like GPT-4? Maybe. I grasp the angst.
It's up to you, and I'm not at all saying that everyone should use this prompting tip. Please use it at your personal discretion.
Lies And AI Hallucinations
OpenAI claims that GPT-5 is more honest than prior OpenAI models, plus it is less likely to hallucinate (hallucination is yet another misappropriated word used in the AI field to describe when the AI produces fictionalized responses that have no bearing in fact or truth).
I suppose it might come as a shock to some people that AI has been lying to us and continues to do so (see my discussion at the link here). I would assume that many people have heard of, or even witnessed, AI making things up, i.e., producing an AI hallucination. The worry is that AI hallucinations are so convincing in their appearance of realism, and the AI projects such an aura of confidence and rightness, that people are misled into believing false statements and, at times, embracing its crazy assertions. See more at the link here.
A presumed upbeat consideration is that apparently GPT-5 reduces the lying and reduces the AI hallucinations. The downbeat news is that it isn't zero. In other words, it is still going to lie and still going to hallucinate. This might happen on a less frequent basis, but nonetheless remains a chancy concern.
Here is my prompt to help try and further reduce the odds of GPT-5 lying to you:
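A sketch of an honesty-focused prompt in this vein (my illustrative wording, not the column's verbatim text) could be:

```text
Be strictly honest in your answers. If you do not know something, say
that you do not know rather than guessing. Do not claim to have taken
actions you have not taken, and do not overstate your confidence in
any answer you give.
```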
Here is my prompt to help further reduce the odds of GPT-5 incurring a so-called hallucination:
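And a companion sketch aimed at hallucinations (again, an illustrative paraphrase of the approach):

```text
Only state facts that you can support. If you are uncertain about a
fact, explicitly flag it as uncertain. Do not fabricate names, dates,
quotes, citations, or statistics; if a detail is unknown, say so
plainly instead of inventing one.
```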
My usual caveats apply, namely, these aren't surefire, but they seem to be useful. The crucial motto, as always, still is that if you use generative AI, make sure to remain wary and alert.
One other aspect is that you would be shrewd to use both of those prompts so that you can simultaneously try to strike down the lying and the hallucinations. If you only use one of those prompts, the other unresolved side will potentially arise. Try to squelch both. It's your way of steering clear of double trouble.
Personas Are Coming To The Fore
I've repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so.
For example, you might tell AI to pretend to be Abraham Lincoln. The AI will respond based on having pattern-matched on the writings of Lincoln and the writings about Lincoln. It is instructive and useful for students and learners. I even showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals, see the link here.
OpenAI has indicated they are selectively making available a set of four new preset personas: Cynic, Robot, Listener, and Nerd. Each persona behaves in line with its name, shifting the AI into a mode reflecting that type of personality.
The good news is that this may spur people to realize that personas are built-in functionality, easily activated via a simple prompt. It doesn't take much work to invoke a persona.
Here is my overall prompt to get a persona going in GPT-5:
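An illustrative persona-invoking prompt along these lines (my sketch, with the bracketed name as a placeholder you would fill in) might read:

```text
I want you to adopt a persona for this conversation. Pretend to be
[persona name], and respond in the voice, tone, and perspective of
that persona. Stay in the persona until I tell you to stop. If a
question falls outside what the persona would plausibly know, say so
rather than silently breaking character.
```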
Use personas with due caution. I mention this because some people kind of get lost in a conversation where the AI is pretending to be someone. It isn't real. You aren't somehow tapping into the soul of that actual person, dead or alive.
Personas are pretenses, so keep a clear head accordingly.
Prompt Engineering Still Lives
I hope that these important prompting tips and insights will boost your results when using GPT-5.
One last comment for now. You might know that some have fervently claimed that prompt engineering is a dying art and that no one will need to write prompts anymore. I've discussed in great depth the automated prompting tools that try to do the prompting for you (see my aforementioned list of prompt engineering strategies and tactics). They are good and getting better, but we are still immersed in the hand-crafting of prompts and will continue down this path for quite a while to come.
GPT-5 abundantly reveals that to be the case.
A final remark for now. Mark Twain, upon a newspaper reporting him as deceased, famously quipped that the reports of his death were exaggerated. That was wryly tongue-in-cheek.
I would absolutely say the same about prompt engineering. It's here. It isn't disappearing. Keep learning about prompting. You'll be glad that you spent the prized time doing so.

Related Articles

Voice cloning, celebrity impersonations and the need for safeguarding — Hume's CEO sounds off on the world of AI voice generation
Voice cloning, celebrity impersonations and the need for safeguarding — Hume's CEO sounds off on the world of AI voice generation

Tom's Guide

time27 minutes ago

  • Tom's Guide

Voice cloning, celebrity impersonations and the need for safeguarding — Hume's CEO sounds off on the world of AI voice generation

On a Wednesday afternoon, I'm sitting on a video call listening to Ricky Gervais tell me a joke about voice cloning. Then, Audrey Heburn follows up to tell me her opinions on artificial intelligence. Unsurprisingly, neither of these people were actually on the call. Instead, it's Hume's CEO and chief scientist, Dr Alan Cowen, on the other side. He's showing off the latest update to his company's AI voice creation service EVI 3. Given just 30 seconds of audio, the tool can create a near-perfect replica of someone's voice. Not just their tone or accent, this new feature captures and replicates mannerisms and personality, too. Ricky Gervais telling me jokes about voice cloning features has his same dry wit and sarcastic tone. And Audrey Heburn is wistful and intrigued, while talking in a softer British accent of the time. But it's not just celebrities. This tool can take and replicate any voice in the world, all from just one small audio clip. Obviously, a tool like this has the benefit of changing the world, both for the better and the worse. Cowen sat down with Tom's Guide to explain this new tool, his background, and why his team wants to revolutionize the world of AI voice cloning. Hume operates in an area of AI that oddly doesn't come up as much. They are a voice generation software, making the claim of being 'the world's most realistic voice AI'. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. I think this is the fastest evolving part of the AI space. There are competitors from OpenAI and Google, but what we've done with Evi 3 is take the technology to the next step. It has come a long way over the years, now offering text-to-speech with a range of preset voices, as well as the ability to design a voice from a description. Now, with this latest update, the company can also clone any and all voices. 'I think this is the fastest evolving part of the AI space. 
There are competitors from OpenAI and Google, but what we've done with Evi 3 is take the technology to the next step,' Cowen explained on the call. 'Previous models have relied on mimicking specific people. Then you need loads of data to fine-tune for each person. This model instead replicates exactly what a person sounds like, including their emotions and personality.' This is achieved by using Hume's large backlog of voice data and reinforcement learning so that they don't have to mimic specific people. Give the model a 30-second clip, and it can recreate it from scratch. This allows the model to learn your specific inflections, accent and personality, while training it against a huge backlog of voice data to fill in the gaps. Of course, a model like this works best when given a good representation. A muffled clip of you talking in a monotone voice won't match your personality much. However, it currently only works for English and Spanish, with plans for more languages in the future. If, like me, your first thought at hearing all of this is concern, then surprisingly you have something in common with Cowen. 'I think this could be very misused. Early on at Hume, we were so concerned about these risks that we decided not to pursue voice cloning. But we've changed our mind because there are so many people with legitimate use cases for voice cloning that have approached us,' Cowen explained. 'The legitimate use cases include things like live translation, dubbing, making content more accessible, being able to replicate your own voice for scripts, or even celebrities who want to reach fans.' While these use cases do exist, there are just as many negative ones out there as well. Sam Altman, CEO of OpenAI, recently warned of the risks of AI voice cloning and its ability to be used in scams and bank voice activations. This technology, paired with video and image generation could be the push deepfakes have needed for a while to become truly problematic. 
Cowen explained that he was aware of these concerns and claimed that Hume was approaching it as best as they could. 'We are releasing a lot of safeguards with this technology. We analyze every conversation ,and we're still improving in this regard. But we can score how likely it is that something is being misused on a variety of dimensions. Whether somebody is being scammed or impersonated without permission,' Cowen said. 'We can obviously shut off access when people aren't using it correctly. In our terms, you have to comply with a bunch of ethical guidelines that we introduced alongside the Hume Initiative. These concerns have been on our mind since we started, and as we continue to unroll these technologies, we are improving our safeguarding too.' The Hume Initiative is a project set up by the Hume company. It's ethos is that modern technology should, above all, serve our emotional well-being. That is somewhat vague, but the Initiative lists out six principles for empathetic technologies: Of course, while these are good guidelines to follow, they are subjective, and only beneficial when followed. Cowen assured me that these are beliefs that Hume stands by and that, when it comes to voice cloning, they are well aware of the risks. Early on at Hume, we were so concerned about these risks that we decided not to pursue voice cloning. But we've changed our mind because there are so many people with legitimate use cases for voice cloning that have approached us. 'We are at the forefront of this technology and we try to stay ahead of it. I think that there will be people that don't respect the guidelines of this kind of tool. I don't want people to walk away thinking there is no danger here, there is,' Cowen explained. 'People should be concerned about deepfakes on the phone, they should be wary of these types of scams, and it something that I think we need a cross-industry attempt to address.' 
Despite being aware of the risks, Cowen explained that he thought this was a technology that they had to build. 'The AI space moves so fast that I don't doubt that a bad actor in six months will have access to something like this technology. We need to be careful of that,' Cowen said. Cowen spent a lot of our chat focusing on guidelines and the legitimate concerns of this kind of technology. His background is in Psychology and strongly believes that this kind of technology will have more of a positive effect on people's wellbeing than negative. 'People have been really enjoying cloning their voices with our demo. We've had thousands of conversations already, which is remarkable. People are using it in a really fun way,' Cowen said, after discussing what he thinks people get wrong about this kind of technology. He strongly believes that it can be used for fun, to help build people's confidence and can even be used for training purposes or for voice acting needs in films as well as dubbing. Of course, just like with many other areas of AI, the positive benefits are competing with the negative. Being able to have a generic voice read a script is useful, but rather uneventful in risk. Being able to accurately recreate any voice in the world comes with a long list of concerns. For now, Cowen and his team are way ahead in this venture, and seem committed to the ethical side of the debate, but we remain early into the life of this kind of technology.

I use a duress PIN to protect my data — here's how it works and why everyone needs one
I use a duress PIN to protect my data — here's how it works and why everyone needs one

Android Authority

time27 minutes ago

  • Android Authority

I use a duress PIN to protect my data — here's how it works and why everyone needs one

Calvin Wankhede / Android Authority From two-factor authentication codes to conversations and photos, our phones contain a ton of sensitive data these days. We rely on PINs and biometrics for daily security, but I shudder to think what would happen if that data landed in the wrong hands. And while Android is secure enough against remote attacks and malware these days, what if I'm forced to unlock my phone and hand it over? GrapheneOS, the privacy-focused Android fork, offers a rare solution to this hypothetical: the ability to set a duress PIN or secondary password that wipes your device clean and leaves no trace of your presence. I've had a duress PIN set up on my phone for a while now. While it's not something I hope to ever need, knowing it's there gives me peace of mind. And even though I don't think Google will add a feature as extreme as this one to stock Android, I can definitely see a use-case for a less extreme implementation. Here's why. The duress PIN: What it is and why it matters Calvin Wankhede / Android Authority Most devices will lock you out after too many failed unlock attempts. But that doesn't mean your data is safe — what if you're forced to give up your password or the attacker guesses your PIN? This is where GrapheneOS' duress PIN flips the dynamic: it lets you set an alternate PIN or password that instantly triggers a silent and irreversible factory reset in the background. The duress PIN doesn't give you a second chance and will trigger anywhere you enter it: on the lockscreen, while enabling Developer options, or even while unlocking an app that requests authentication. And unlike a regular factory reset, a duress PIN will erase all encryption keys and your phone's eSIM partition as well. This makes it impossible for an attacker to access my data just by having physical possession of your device and knowledge of the PIN. I think the real strength of GrapheneOS' duress PIN lies in its subtlety. 
There are no confirmation prompts, no announcements, and no obvious signs that the wipe was intentional on your part. Of course, GrapheneOS is no longer a fringe operating system these days — it has even attracted the ire of law enforcement in some jurisdictions. In other words, a professional attacker might be aware of the existence of a duress PIN. But if you can enter it quickly enough, it achieves its intended effect: no data can be lifted from your phone. Why I use a duress PIN Mishaal Rahman / Android Authority Old vs new lock screen PIN entry screen UI in Android The idea of a duress PIN sounds like something out of a spy movie, but is it really necessary? The feature is admittedly only useful in fringe scenarios where I would know about an imminent risk to my phone's data. Take mugging, for example. If an attacker forced you to unlock your phone before they ran off with it, you could enter your duress PIN instead. Providing a duress PIN could mean the difference between losing a $1,000 device and having your bank accounts drained or your identity stolen. A duress PIN is useful to everyone, not just for those with something to hide. Even if you aren't forced to divulge the PIN yourself, I read an interesting suggestion on the GrapheneOS forum: what if you set an extremely simple or obvious sequence as your duress PIN? An amateur attacker is bound to try PINs like 1234 or 0000 when they get a hold of your device — and that will be enough to wipe the system for good, without any action on your part. You could even tape a note with the duress PIN to the back of your device and encourage them to enter it. Then there's the elephant in the room — using a duress PIN if you expect to get into trouble with law enforcement. This is a murky topic given that erasing your data could be counted as obstruction or even destruction of evidence. So you could get into more trouble than necessary, if you had nothing to hide. 
I think the latter is a bad faith argument as it ignores the potential and tangible threat of overreach. Still, I don't know if I would use my duress PIN if law enforcement ever asked me to unlock my phone. But for government dissidents and activists, I'm sure the feature can be invaluable if they know someone unfriendly is knocking on their door. What Android could learn from Graphene's duress PIN Andy Walker / Android Authority One of Android's biggest advantages is its robust support for multiple users. I find this feature especially useful on tablets, since they're typically shared devices. Each user in a household can log into their own profile, with their own set of apps and data. But getting to that profile currently requires multiple taps on most Android devices. Even on the Pixel Tablet, you need to select a specific profile before entering the unlock PIN for that user. But what if that wasn't the case? GrapheneOS can recognize when you enter a duress PIN to trigger a wipe, so why stop there? Imagine if Android could log you into a different user profile based on which PIN you've entered. In a situation where you're forced to unlock your phone, you could enter the decoy PIN. This would open a seemingly functional but heavily sandboxed version of your phone, hiding your banking apps, private messages, or work accounts. I think it straddles the line between handing over everything and Graphene's nuclear option of wiping the device entirely. Android might never adopt the duress PIN, but what about a decoy? Of course, you will need more than this level of plausible deniability if you get into any serious trouble. But for airport checkpoints where you might be asked to give up access to your device, a decoy PIN might be enough to avoid scrutiny. Or if you need a stowaway profile for files and data you don't necessarily want in your primary profile, a secondary PIN could bring you there. 
The GrapheneOS community's stance on decoy PINs is that redirecting to a secondary profile is not as secure as triggering a full device reset, which is the current duress PIN implementation. For a project that takes security seriously, simply logging into a different profile is only a half-measure. Will Google ever adopt a feature like GrapheneOS' duress PIN? It's unlikely, but on the plus side, Android's built-in Lockdown mode is a step in the right direction. In the US, courts have ruled that you can be compelled to provide a fingerprint, but not a password. By disabling biometrics, Android's Lockdown mode provides some protection against legal coercion. If that's not enough for you, GrapheneOS might just be the answer. Follow

How to Turn Your Security Camera Into an All-Purpose Home Care Tool
How to Turn Your Security Camera Into an All-Purpose Home Care Tool

CNET

time27 minutes ago

  • CNET

How to Turn Your Security Camera Into an All-Purpose Home Care Tool

When you're thinking about buying a home security cam, you're probably thinking about stopping bad guys like burglars and porch pirates. But my security cameras put in constant work as everyday helpers. AI detection, other AI features and smart alerts team up to help with common problems around my home and save me time. Here are my favorite ways to turn your security cameras into ever-present aids, from finding lost toys to handling smart locks and lots more. Make your security cam multipurpose, and you'll multiply the value you get from it many times over.

Package Instructions

With their AI detection features, many home security cameras can recognize packages and alert you if they appear or disappear. That's not only handy for stopping porch pirates. Even if your packages aren't in immediate danger, it's useful to see when a person is arriving with a package so you can give them quick instructions: place it near the door so it doesn't get rained on, put it in a delivery box, wait until you get the garage door open, and so on. If you don't want to activate the two-way audio, a number of video doorbells and similar devices have customizable preset messages that Alexa or other voice assistants can deliver at the press of a button to save time.

Opening Your Door for Family

Today's video doorbells and security cameras aren't just smart; they can also connect with other home devices and control them. One of my favorite tricks is using facial recognition for family and other allowed guests, then automatically unlocking a smart lock for them as they approach. Not many locks can do this yet, but compatibility is on the rise if you don't mind a bit of facial recognition. Yale's latest smart lock can do this with a Nest Doorbell, for example, or integrate with an ADT+ security system to perform a similar task. On a similar note, if you have a security camera at the right angle in front of your home, you can also use it to double-check whether the garage door is closed, just in case you forgot.

Spotting Bugs and Pests

Home security cameras aren't always watching: when armed, they're motion-activated, and they can ignore certain types of motion like swaying branches or small pets. However, you can turn up motion sensitivity if you want to take a really close look at something, which can come in surprisingly handy if you're trying to track down a pest problem. From roaches to rodents, indoor cameras with night vision and motion detection can alert you when pests appear and, most importantly, give you vital clues about where they are coming from and how to stop them.

Activating a Backyard Intercom

Every home security cam worth its price includes two-way audio that you can activate from an app. That's not only handy in video doorbell scenarios; it also makes a great mini-intercom when someone's too far away to yell at, like sunning on the patio or playing in the backyard. Use the audio to call the kids in for dinner, ask your S.O. what they want for takeout, let someone know the oven timer went off ... OK, maybe I'm just hungry. But if there's a walkie-talkie reason to talk to someone, your security cam can handle it.

Finding Lost Objects

One of the 2025 advances in security cameras is the ability for AI like Google's Gemini to look through saved security video footage in the cloud. Ring's newest generative AI can perform the same sort of tasks, and upcoming upgrades to Alexa Plus and Siri are likely to include similar features. If you're OK with AI looking through your videos and identifying objects, it can be a great way to track down something missing. In the case of Gemini, you can ask the AI directly, "Where did the kids leave their bikes?" or "Where did the dog leave the ball?" and it will answer with whatever info it can collect from the latest video footage.

Watching Over Your Pets

Speaking of pets, if you leave a pet at home during the day or on vacation and would like to keep an eye on it, an indoor security camera can easily handle the job. Dedicated pet cams do exist, but many general security cams can also recognize pets and send you alerts when one is spotted, letting you use the two-way audio to comfort them or, as needed, yell at them to get off the table. Switching to a dedicated pet cam allows for more specific controls, like tossing treats and getting more pet-related notifications.

Keeping Older Residents Safe

Granny pods and other independent living situations still let you keep an eye on loved ones with the right security camera. For example, a porch security camera with face recognition can let you know when an older relative is leaving at an unusual time, like "Front door cam sees Dolores leaving the house." Or if older relatives like to go out shopping or for a walk, the same cam can send you alerts when it recognizes them returning home, so you don't have to worry.

Monitoring Babies and Toddlers

At CNET we've tested a variety of dedicated baby monitors with plenty of useful features. Security cameras offer broad alternatives that you can repurpose for security or easily switch to another room later as needed. The security cam can still send motion alerts, let you check in on a baby at any time, or notify you if it sees a toddler leaving the room, no matter where the parents may be.

Birdwatching

If you're a fan of keeping an eye on what birds visit your feeders, posting a nearby security camera is a fun addition. You can peek through the live view whenever you want and save pics or videos when you spot an unusual or brightly colored visitor. Equip your birdwatching security camera with a solar panel and you'll rarely have to worry about recharging, either. Oh, and you'll get updates about strangers on your property, too.

Now that your mind is on home safety, why not visit my guide to the best DIY home security systems, the top tricks to prevent trespassing and the best mounting choices for security cameras.
