Get an AI investment coach for just A$84 for life
The following content is brought to you by Mashable partners. If you buy a product featured here, we may earn an affiliate commission or other compensation.
TL;DR: Sterling Stock Picker has an AI that helps you invest in the stock market, and it's only A$84 for life.
The stock market has been especially volatile lately, but that doesn't mean you have to wait to invest. Sterling Stock Picker, a specialized AI tool built on technology from the creators of ChatGPT, has been trained on stock-market data to help you make more informed decisions, even in a chaotic market. It can help you determine which investments are worth your money, and a lifetime subscription is on sale for A$84 (reg. A$744) with coupon code SAVE20.
Sterling Stock Picker uses AI-driven tools to help simplify the investing process for beginners and experienced investors alike.
At the heart of the platform is Finley, your AI financial coach. You can ask Finley questions about the market, your portfolio, or individual stocks and get real-time insights based on actual data. It's like having a finance expert on call, without booking an appointment or paying hourly rates.
The software uses a combination of technical, earnings, growth, and risk analysis to generate personalized stock recommendations. It even offers a risk tolerance questionnaire and portfolio builder to help you create a strategy that fits your specific goals. If you're not sure whether to buy, hold, or sell, the platform's North Star system helps clarify what action to take.
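Sterling Stock Picker doesn't publish how it weighs those four kinds of analysis, so the snippet below is purely a hypothetical sketch of the general idea: blend per-stock signal scores into one number, then discount it according to a risk-tolerance profile from a questionnaire. None of the names, fields, or weights come from the product itself.

```python
# Hypothetical illustration only: not Sterling Stock Picker's actual model or data.
from dataclasses import dataclass

@dataclass
class StockScores:
    ticker: str
    technical: float   # 0-1, e.g. trend/momentum signals
    earnings: float    # 0-1, e.g. earnings quality and surprises
    growth: float      # 0-1, e.g. revenue/EPS growth
    risk: float        # 0-1, higher means riskier

# Assumed mapping from a questionnaire result to how heavily risk drags a score down.
RISK_PENALTY = {"conservative": 1.0, "balanced": 0.6, "aggressive": 0.3}

def recommend(stocks: list[StockScores], tolerance: str, top_n: int = 5) -> list[tuple[str, float]]:
    """Blend the four signal types into one score and return the strongest names."""
    penalty = RISK_PENALTY[tolerance]
    scored = [
        (s.ticker, (s.technical + s.earnings + s.growth) / 3 - penalty * s.risk)
        for s in stocks
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

# Example: a conservative profile punishes the risky high-flyer more than a balanced one.
universe = [
    StockScores("STEADY", technical=0.6, earnings=0.7, growth=0.5, risk=0.2),
    StockScores("ROCKET", technical=0.9, earnings=0.4, growth=0.9, risk=0.8),
]
print(recommend(universe, "conservative"))
print(recommend(universe, "balanced"))
```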
For anyone who wants to learn as they go, Sterling Stock Picker also has a suite of educational tools. You'll get detailed explanations of investing strategies and stock performance, plus access to a community forum where users can share insights and ask questions.
And if you prefer a more hands-off approach, the app still does the heavy lifting by highlighting top-performing stocks and suggesting updates to your portfolio.
It's only A$84 to get a Sterling Stock Picker Lifetime Subscription using coupon code SAVE20, but this price won't stick around for long.
StackSocial prices subject to change.

Related Articles


BuzzFeed: 17 Horror Stories Teachers Have About Gen Alpha
Being a teacher can be a tough job as is, but when you add in the nonchalant attitudes of some parents, some students using AI, and issues caused by some students who can't detach themselves from social media, it becomes, well, tougher. Recently, Redditor u/MineTech5000 asked the teachers of the Reddit community to share their Gen Alpha student horror stories, and ohhhh boy:

"A high school student asked me what it means to 'put it in his own words' instead of copying and pasting." —Outrageous_Owl_9315

"I left teaching in 2023. I taught an earth science class with mostly sophomores. One of the questions on a test about plate tectonics said, 'What do Chile, California, Alaska, and Japan have in common?' I was looking for anything related to the Ring of Fire — or anything mentioning that those places are on a fault line or get earthquakes. Well, one student answered, 'They're all in America.'"

"We had a kid who didn't know that deserts are dry. Also, hardly any of my students know the difference between vowels and consonants."

"I asked an undergraduate student for their opinion on a text, and they pulled out their phone, typed my question into ChatGPT, and then read aloud the answer it gave."

"I'm currently studying to be a teacher. In one of my in-school placements, I had students come up to me and ask what time it was. I would always look at the clock on the wall (that they had a clear view of) and tell them the time. None of the kids knew how to read a clock. I even explained how to read it, but they looked at me like I had two heads." —spookkish

"This didn't happen in my class, but I was at a coffee shop, and two young girls didn't recognize a quarter. One asked the other, 'Is this a nickel? Wait, is a nickel only five cents?'"

"I've taught high school for the last nine years. The scariest thing for me is that they can't THINK. Problem-solving, troubleshooting, reasoning, etc. — there are so many kids who have little processing power, and it seems to be getting rapidly worse in the last couple of years. I think it's TikTok. Kids these days don't even have time to think about the bite-sized piece of media they just consumed before the next one is up."

"Elementary PE teacher here. The number of children who bite when angry is terrifying. I have several who will chase a kid down after they've been wronged, grab their arm, and bite them like a dog. So far, they're all under seven, but I'm up to four in different classes. It's a specific problem, but I'll add that, in general, we are doomed."

"I work for a school transportation department, and when I'm not in the office, I'm a 'permanent substitute' who fills in for absent bus drivers. Each bus is parked in the same 'lane' at the school every single day, regardless of who's driving. Most students walk up to the bus, and if they see it isn't their regular driver, they'll look around in horror and start aimlessly wandering around. I tell them every time that their bus will always be in the same lane, yet some kids will wander around every time. It also doesn't help that many of them don't know their address, either. High school is the worst. I've done this for 11 years, and it gets worse every single year. Mind you, I drive for some of the top academic schools in the state." —oldatheart515

"I've spent almost a decade in education now. These kids have zero attention span — absolutely none. They need constant stimulation, nonstop engagement, and distraction after distraction shaped by far too much access to social media. It's both sad and frightening."

"I had a pre-K student whose speech was unintelligible. He could not communicate. I told his mom he needed to be tested for speech therapy, and she replied, 'Aw, so he won't baby-talk anymore? I'm gonna miss hearing that!'"

"I teach high school math. I've encountered many students who were operating at maybe a second-grade level of math. I've encountered high school seniors who couldn't do two times three in their heads, juniors who didn't know what a square root was, juniors who didn't know how to multiply by zero or one, etc. I had one sophomore this year who couldn't wrap her mind around '20 more than' in a certain problem. I tried to explain it for a few minutes, saying, 'Let's say you and I go into a store. I'm going to buy some apples, and you plan on buying 20 more apples than I do. If I buy five apples, how many would you buy?' She answered, '20?'" —get_your_mood_right

"One of my fourth graders was provisionally promoted from third grade in the middle of the school year. He was reading at a first-grade level when he arrived in my class. He hit his 14-day suspension cap quickly because he was constantly fighting with other kids instead of actually trying to learn. The guidance counselor pulled him out to do a therapeutic art project one afternoon. That's how we discovered that he doesn't know his shapes, either. It was one of the worst cases of educational neglect that I've seen. Of course, the parent was upset that their kid was not on the Honor Roll and demanded to know why he was failing every subject. Oh, yeah, the regional superintendent overruled us and promoted the student to fifth grade."

"Only 27% of the incoming sixth graders at the local middle school are proficient in math skills. They also have next-to-zero problem-solving or critical thinking skills. They have a 'do it for me or I'm not doing it' mindset for schoolwork. You could turn in zero work for a class and get passed, though. We don't require motivation anymore."

"I teach high school seniors and am trying to bring back 'pen and paper schooling.' Kids don't know how to use staplers, and this is not an isolated incident. One broke my stapler trying to refill it because he didn't realize you could break/shorten the replacement staples. Many don't know which side to staple or how to line up pages. I had a student today claim that a stapler was broken because they didn't push it down hard enough until it, you know, stapled. With so many things done online, they can't do anything with paper. It's depressingly defeating at times."

"I've had a student use AI on fully opinion-based questions several times. Kids aren't even reading the question or the internet answer. One quiz question asked students to share their comfort level with performing CPR. A student's response was, 'As an AI, I cannot perform CPR.' Someone even used AI to rank my classroom values (things like respect, integrity, and resilience). AI is so new, and no one knows how to use it as a learning tool. It's evolving faster than we can harness." —cumulobiscuit

Lastly: "I teach high school, and I have kids about to be adults who don't know the difference between left and right without making the 'L' shape with their hands. I also have kids who Google basic math calculations, like four times two."

*Sigh.* If you're a teacher, what's something completely shocking or unhinged that you've witnessed or experienced with your Gen Alpha students? Let us know in the comments, or you can anonymously submit your story using the form below!
Yahoo: OpenAI CEO Sam Altman Says 'We Are Heading Towards a World Where AI Will Just Have Unbelievable Context on Your Life'
OpenAI CEO Sam Altman has called ChatGPT's new memory feature his 'favorite recent' advancement, but the technology's ability to remember and leverage deep context about users is raising as many questions as it answers. In a podcast released today, Altman described the feature as a 'real surprising level up,' saying, 'Now that the computer knows a lot of context on me, and if I ask it a question with only a small number of words, it knows enough about the rest of my life to be pretty confident in what I want it to do. Sometimes in ways I don't even think of. I think we are heading towards a world where, if you want, the AI will just have unbelievable context on your life and give you super, super helpful answers.'

The new memory feature allows ChatGPT to retain information from past interactions and build a persistent profile of each user's preferences, routines, and even personal milestones. This means the AI can provide more tailored, anticipatory responses — streamlining tasks, making recommendations, and even reminding users of important events or deadlines without being prompted. For many, this represents a long-awaited leap toward truly personal digital assistants, capable of understanding context and nuance in a way that feels almost human.

However, this leap in convenience comes with significant privacy implications. Persistent memory means ChatGPT is storing more personal data than ever before, including potentially sensitive details about users' lives, work, and relationships. There are also concerns about how this data might be used for targeted advertising, profiling, or even surveillance. While much of this is just speculation right now, any large pool of stored data will inevitably become a target for bad actors. As scams become more elaborate, unauthorized access to a user's OpenAI account could spell disaster, as it would allow hackers access to sensitive work documents, fragile mental states, ongoing personal issues, and a host of other information. Even worse, bad actors could essentially use one's account as a search engine for detrimental information on that user.

OpenAI has anticipated some of these concerns and is promising robust user controls and transparency. According to the company, users will be able to review, edit, and delete their stored memories at any time. There will also be options to disable the memory feature entirely, reverting ChatGPT to a 'stateless' mode where no information is retained between sessions. Altman emphasized the importance of consent and user choice in the podcast, which hopefully means users currently have little to worry about. 'I hope this will be a moment where society realizes that privacy is really important,' Altman said during the interview.

The introduction of persistent memory in consumer AI comes at a time of heightened regulatory scrutiny. Lawmakers in the U.S., EU, and elsewhere are actively debating new rules for AI transparency, data retention, and user rights. OpenAI's approach to privacy and user empowerment could set a precedent for the industry, but it will also be closely watched by regulators and privacy advocates.

Competitors such as Google (GOOGL) (GOOG), Meta (META), and Anthropic are reportedly developing similar features, suggesting that persistent memory may soon become standard in advanced AI systems. This raises the stakes for getting privacy protections right from the outset. For users, the promise of a digital assistant that truly 'knows' them is both exciting and unsettling. The convenience of having an AI that can anticipate needs, manage schedules, and offer personalized advice is undeniable. Yet, as Altman's comments make clear, this future depends on a delicate balance between utility and privacy. As AI systems become more deeply integrated into daily life, the conversation around memory, context, and consent will only grow more urgent. OpenAI's latest innovation is a glimpse of what's possible — but also a reminder that, in the world of intelligent machines, privacy and trust must remain at the forefront.

On the date of publication, Caleb Naysmith did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes.
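OpenAI has not published how the memory feature is implemented, and it exposes no public API for it, so the following is only a conceptual sketch of the user controls the article describes: review, edit, delete, and a fully stateless mode. Every class, method, and value below is hypothetical, not OpenAI's code.

```python
# Hypothetical sketch of a user-controllable memory layer; not OpenAI's implementation or API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MemoryStore:
    enabled: bool = True                         # becomes a "stateless" mode when False
    memories: dict[int, str] = field(default_factory=dict)
    _next_id: int = 0

    def remember(self, note: str) -> Optional[int]:
        """Retain a fact from the conversation, unless memory is disabled."""
        if not self.enabled:
            return None
        self._next_id += 1
        self.memories[self._next_id] = note
        return self._next_id

    def review(self) -> dict[int, str]:
        """Let the user audit everything that has been retained."""
        return dict(self.memories)

    def edit(self, memory_id: int, new_text: str) -> None:
        self.memories[memory_id] = new_text

    def delete(self, memory_id: int) -> None:
        self.memories.pop(memory_id, None)

    def disable(self) -> None:
        """Switch to stateless mode: forget everything and stop retaining new notes."""
        self.memories.clear()
        self.enabled = False

store = MemoryStore()
note_id = store.remember("User prefers summaries under 100 words")
print(store.review())      # the user can see what is stored...
store.delete(note_id)      # ...remove individual items...
store.disable()            # ...or turn memory off entirely
```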