In the 'Golden Age' of tech stocks, how do we use tech itself to assess risk and evaluate the markets?

Techday NZ, 30-07-2025
The tech-heavy Nasdaq told the story in 2024, with Palantir up 340.5%, Nvidia up 171.2% and Broadcom up 107.7%. The latter two are among the so-called BATMMAAN stocks whose success has led some worried commentators to point out the concentration risk now present in US share markets: the top 10 stocks in the S&P 500 hit 37.3% of the value of the entire index in mid-July, just shy of the highest level on record, which was 38% in January this year according to Reuters.
In the current Golden Age of the tech sector, the emergent AI and analytics tools created by top-performing companies are themselves proving to be some of the best virtual assistants for evaluating stocks in the tumultuous 2020s, generally when used in combination with traditional analytical techniques. Here's how:
1. Going beyond the basics. We have access to enormous amounts of research and data about every tradable stock, but traditional statistics, like revenue or P/E ratio, don't always tell the whole story with tech companies, especially those rapidly reinvesting for growth. Instead, one can look at the following (a small worked example follows this list):
Price-to-sales (P/S) ratio: Especially useful for high-growth, pre-profit tech firms.
Free cash flow (FCF) growth: Indicates whether a company is capable of self-funding continued innovation.
R&D expense growth: Is the business consistently investing in future products and features?
Scale and market cap: Is the company large enough to weather market challenges?
SG&A (selling, general, and admin expenses) to revenue: Offers clues about efficiency in scaling operations.
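As a rough illustration, these ratios reduce to simple arithmetic once the reported figures are in hand. The sketch below uses invented numbers for a hypothetical high-growth firm, not real company data:

```python
# Minimal sketch: computing a few of the ratios above from reported figures.
# All inputs are invented for illustration, not real company data.

def price_to_sales(market_cap: float, trailing_revenue: float) -> float:
    """P/S ratio: market capitalisation divided by trailing twelve-month revenue."""
    return market_cap / trailing_revenue

def growth_rate(current: float, previous: float) -> float:
    """Year-over-year growth, e.g. for free cash flow or R&D spend."""
    return (current - previous) / previous

def sga_to_revenue(sga_expense: float, revenue: float) -> float:
    """SG&A as a share of revenue: a falling ratio suggests efficient scaling."""
    return sga_expense / revenue

# Hypothetical pre-profit tech firm (US$ millions)
market_cap = 45_000
revenue_ttm = 1_800
fcf_now, fcf_prior = 120, 80
rnd_now, rnd_prior = 450, 300
sga = 540

print(f"P/S ratio:      {price_to_sales(market_cap, revenue_ttm):.1f}x")
print(f"FCF growth:     {growth_rate(fcf_now, fcf_prior):.0%}")
print(f"R&D growth:     {growth_rate(rnd_now, rnd_prior):.0%}")
print(f"SG&A / revenue: {sga_to_revenue(sga, revenue_ttm):.0%}")
```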
2. Using technical analysis for trends. The rise of quantitative trading and algorithmic strategies means technical analysis is an important supplementary lens for active traders, though not a substitute for deep research. Traders look at markers such as the following (a brief sketch follows the list):
Volatility metrics: Identifying periods where momentum or reversals are likely.
Advanced charting: Using visual tools to spot levels of investor support or resistance.
Options signals: Changes in implied volatility and put/call ratios, for both the indexes and individual stocks.
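For readers who like to see the mechanics, here is a minimal sketch of two of these markers (realised volatility from closing prices and a put/call volume ratio) using invented figures; it illustrates the calculations only, not a trading signal:

```python
# Minimal sketch: two of the technical markers above, computed from invented data.
import math

def annualised_volatility(closes: list[float], periods_per_year: int = 252) -> float:
    """Realised volatility: standard deviation of daily log returns, annualised."""
    returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(variance * periods_per_year)

def put_call_ratio(put_volume: int, call_volume: int) -> float:
    """Put/call volume ratio: readings well above 1 are often read as bearish positioning."""
    return put_volume / call_volume

closes = [101.2, 103.5, 99.8, 104.1, 108.0, 106.3, 111.5, 109.9]  # invented prices
print(f"Annualised volatility: {annualised_volatility(closes):.0%}")
print(f"Put/call ratio:        {put_call_ratio(42_000, 61_000):.2f}")
```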
3. Leveraging quantitative and AI tools. The next generation of evaluation involves AI and big data. These tools filter vast amounts of information from financial reports, market sentiment, news, and web analytics. Some of my preferred research platforms and AI-driven tools include:
Tiger Trade App's AI-powered chatbot TigerAI: Its features allow investors to research stocks, summarise key insights from earnings calls and releases, and extract pertinent company news and sentiment analysis based on the nature of the questions asked, all within seconds. TigerAI can be accessed through the Tiger Trade app, so everything is in one place.
Perplexity: This AI-powered research co-pilot synthesises web results and provides live monitoring, trend analysis, and Q&A for users.
ChatGPT: The biggest name brand in LLMs to date, this is conversational AI for brainstorming and quick synthesis, and a good tool to test investment ideas and pull data summaries.
AlphaSense: Offers AI search for business/financial filings and news; users can deep dive for company and sector insights.
Google Gemini: This is multimodal AI (text and images) for competitive research; users can scan public information fast.
4. Developing valuation frameworks. Valuing tech stocks is both an art and a science. Of course, getting it right or wrong can make a big difference in ROI terms for traders and clients. Key techniques one can use (a minimal DCF sketch follows the list):
Discounted cash flow (DCF): Projects future value but is highly sensitive to assumptions.
Relative valuation: Compares companies' multiples within the sector.
Premium for growth: Sometimes justified if a company is truly dominant or highly innovative.
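To show how sensitive a DCF is to its inputs, the sketch below values an invented stream of projected free cash flows under several discount rates; every number in it is an assumption for illustration:

```python
# Minimal DCF sketch: present value of projected free cash flows plus a
# Gordon-growth terminal value. All inputs are invented; note how much the
# result moves when the discount rate changes.

def discounted_cash_flow(cash_flows: list[float], discount_rate: float,
                         terminal_growth: float) -> float:
    """Sum of discounted projected cash flows plus a discounted terminal value."""
    pv = sum(cf / (1 + discount_rate) ** (t + 1) for t, cf in enumerate(cash_flows))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)

projected_fcf = [120, 160, 210, 270, 330]  # US$ millions, invented projections
for rate in (0.08, 0.10, 0.12):
    value = discounted_cash_flow(projected_fcf, discount_rate=rate, terminal_growth=0.03)
    print(f"Discount rate {rate:.0%}: estimated value ~ ${value:,.0f}m")
```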
5. Making qualitative assessments. Without context, numbers can be misleading – and in an age of massive data volume, investors need to figure out which context is actually relevant. One can evaluate:
Leadership quality: Track record, vision, and ability to execute.
Innovation pipeline: New products/services and IP protection.
Industry ecosystem position: Is the business a vital cog in a rising sector like AI, cloud computing, or cybersecurity?
ESG practices: Environmental, social, and governance disclosures, especially around climate responsibility, are highly relevant. For companies involved in AI, the conversation is becoming increasingly heated around the vast energy consumption of data centres.
6. Finding practical uses for AI in research. AI can change how investors handle intense periods such as earnings season. It can be used in reporting and analysis in a few ways (a short sketch follows the list):
Hourly news alerts: Using Perplexity or AlphaSense for customisable updates on specific tech companies.
Rapid data summarisation: With ChatGPT, one can parse lengthy earnings calls or filings quickly.
Scenario analysis: Running "what if" scenarios via AI, such as how a new product might reshape a market, the expected effects of tariffs on sector X or Y, or what headwinds a new regulation could create.
Monitoring social trends: AI tools aggregate social media sentiment and web traffic, offering another layer of insight into a company's traction.
Idea validation: When considering a trend or hypothesis, cross-examine it using multiple AI platforms to find the weak points.
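As one deliberately generic example of the rapid-summarisation idea, the sketch below calls a general-purpose LLM API to condense an earnings release into a few bullet points. The OpenAI SDK, the model name and the prompt are illustrative assumptions rather than any of the specific platforms above, and the output still needs to be verified against the original filing:

```python
# Minimal sketch: asking a general-purpose LLM to summarise an earnings release.
# The SDK, model name and prompt are illustrative assumptions, not a specific
# platform's feature; always verify the summary against the source document.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

def summarise_filing(filing_text: str) -> str:
    """Request a short, structured summary of an earnings release."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
        messages=[
            {"role": "system",
             "content": "Summarise this earnings release in five bullet points: "
                        "revenue, margins, guidance, risks, and anything unusual."},
            {"role": "user", "content": filing_text},
        ],
    )
    return response.choices[0].message.content

# Usage, with the release text loaded from disk:
# print(summarise_filing(open("q2_earnings_release.txt").read()))
```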
7. Remembering the risks. AI can give the impression that there is a final right answer to everything, but any tool can only digest the data it is designed to process. And like any tool, it is only as good as the person using it. Given AI's complexity and known pitfalls (like "hallucinations"), the risk of relying on its output is for the user to bear; it should not be taken as, and is not a substitute for, professional advice. Only experienced users of AI should use it for financial analysis, and because AI can be, and often is, wrong in its analysis, every finding needs to be verified and double-checked.
No research method guarantees anything, and the risks include:
Extreme volatility: Tech stocks can swing wildly, and AI can't tell you for sure when or by how much. AI can be a predictor, but not a perfect one.
Disruption risk: Share market leaders today can lag tomorrow if innovation slows.
Overvaluation: High hopes can lead to painful corrections. These can be sudden or extreme.
Regulatory changes: New rules on data or antitrust can shift the landscape overnight.
Behavioural bias: Even seasoned investors can be swayed by hype or groupthink.
There are investors who think the current Golden Age of tech is another bubble and the only question is when it will burst, not if.
Sources: https://www.reuters.com/business/autos-transportation/us-stock-market-concentration-risks-come-fore-megacaps-report-earnings-2025-07-23/
Disclaimer:
This article is presented by Tiger Fintech (NZ) Limited and is for information only. It does not constitute financial advice. Investing involves risk. You should always seek professional financial advice before making any investment decisions.