
AI in IR: Opportunities, Risks, and What You Need to Know
Call me a model. I don't mean model in terms of my physical attributes; I mean model in the sense of how most generative AI tools process information and organize responses based on prompts. That's effectively what I've been doing in my career for nearly three decades!
The good news is that platforms like OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude are extremely helpful for processing large quantities of complicated information. Using these platforms to understand a concept or interpret text is like using a calculator to work through a math problem. And yet, many of us really don't know how these word crunchers work.
This applies to AI tools used for investor relations, public relations, or anything else where an AI model could be prompted with sensitive information that then reaches the public. Think about how many people working for public companies may inadvertently prompt ChatGPT with material nonpublic information (MNPI), which could then surface when a trader asks the platform whether to buy or sell the stock.
AI Concerns Among IR Professionals
Earlier this year, I worked with the University of Florida on a survey that found that 82 percent of IR professionals had concerns about disclosure issues surrounding AI use and MNPI. At the same time, 91 percent of survey respondents were worried about accuracy or bias, and 74 percent expressed data privacy concerns.
These concerns are enough for some compliance teams to ban AI use altogether. But fear-mongering is shortsighted. There are plenty of ways to use AI safely, and understanding the basics of the technology, as well as its shortcomings, will make for more responsible and effective AI use in the future.
Why You Should Know Where AI Gets Its Data
One of the first questions someone should ask when trying a new AI platform is where its information comes from. The acronym 'GPT' stands for generative pre-trained transformer, which is a fancy way of saying the technology can 'generate' words based on the 'training' data it received, using a 'transformer' architecture to turn prompts into sentences.
This also means that every time someone submits a question or prompt to one of these platforms, they are feeding information into a GPT; depending on the platform's settings, that input may even be used to train future versions of the model. It also means these platforms keep getting better at analyzing complex business models.
For example, many IR folks get bogged down summarizing sell-side analysts' models and earnings forecasts from research notes. Upload those models into ChatGPT, and the platform does a great job of understanding the contents and producing a digestible summary. Interested in analyzing the sentiment of a two-hour conference call transcript? Upload the transcript (post-call, to avoid MNPI issues) to Gemini and request a summary of what drew the most positive sentiment among investors.
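For teams that want to move this workflow out of the chat window and into a script, here is a minimal sketch using OpenAI's Python SDK. The model name, file path, and prompt wording are illustrative assumptions, not a prescribed setup, and the transcript should already be public so no MNPI ever touches the tool.

```python
# Minimal sketch: summarize a public, post-call earnings transcript and flag
# which topics drew the most positive investor sentiment.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Illustrative file name; use only publicly disclosed material here.
with open("q2_earnings_call_transcript.txt", "r", encoding="utf-8") as f:
    transcript = f.read()

prompt = (
    "You are assisting an investor relations team. Summarize this earnings "
    "call transcript in five bullet points, then list the three topics that "
    "drew the most positive sentiment from analysts on the call.\n\n"
    f"{transcript}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever your plan offers
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The same pattern works for sell-side models exported to text or for any other public document you would otherwise summarize by hand.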
The Importance of AI Training and Education in IR
But here's the rub: Only 25.4 percent of companies provided AI-related training in the past two years, according to the University of Florida survey. This suggests a disconnect between advancing AI technology and people's understanding of how to use it.
That means the onus is on us to figure it out. So, where to start? Many AI tools, including ChatGPT, have free versions that can help people summarize, plan, edit, and revise items.
Google's NotebookLM is an AI platform that lets you build a notebook grounded in sources you choose, so you know exactly where the AI is pulling its information from. NotebookLM can also generate podcast-style audio overviews from those sources. This could be helpful if a chief executive officer wants to take a run on a treadmill and listen to a summary of analysts' notes instead of reading them in a tedious email.
Here are some other quick-hit ideas:
Transcribing notes. If you're like me, you still prefer a pen and pad for taking notes. You can take a picture of those notes, upload it to ChatGPT, and have them transcribed into text (a rough API sketch of this appears after the list below).
Planning investor days. If you can prompt an AI with the essentials – the who, what, when, where, why, and how of the event – it can provide a thorough outline that makes you look smart and organized when sending it around to the team.
Analyzing proxy battles. Proxy fights are always challenging, especially when parsing the needs and wants of key stakeholders, including activists, media, management teams, and board members. Feeding an AI with publicly available information (to, again, avoid disclosure issues) can help IR and comms professionals formulate a strategy.
Crafting smarter AI prompts. Writing effective prompts requires some finesse. The beauty of AI is that it can help you refine your prompts, leading to better information gathering. Try asking ChatGPT the following question: 'If Warren Buffett is interested in investing in a company, what would be an effective AI prompt to understand its return on investment?'
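As a concrete illustration of the note-transcription idea above, here is a minimal Python sketch using OpenAI's SDK with a vision-capable model. The file name, model choice, and prompt are illustrative assumptions; uploading the photo directly in the ChatGPT interface accomplishes the same thing without any code.

```python
# Minimal sketch: transcribe a photo of handwritten meeting notes into text.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
import base64

from openai import OpenAI

client = OpenAI()

# Illustrative file name for a phone photo of a notepad page.
with open("investor_meeting_notes.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any vision-capable model should work
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Transcribe these handwritten notes into plain "
                            "text, preserving the bullet structure.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```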
There are many other use cases that can help eliminate mundane tasks, allowing humans to focus more on strategy. But to use AI effectively, it's important to know why you're using it. Perhaps it's to demonstrate to management that being an early adopter of this technology can help a company differentiate itself.
Building a Responsible AI Policy for Your Organization
Before implementing any AI initiatives, it's best to formulate an AI policy the organization can adopt for internal and external use. Most companies lack these policies, which are critical for establishing the basic ground rules for AI use.
I helped co-author the National Investor Relations Institute's AI policy, which recommends the following:
The IR professional should be an educated voice within the company on the use of AI in IR, and this necessitates becoming knowledgeable about AI.
The IR professional should understand the pace at which their company is adopting AI capabilities and be prepared to execute their IR-AI strategy based on management's expectations.
Avoid Regulation Fair Disclosure (Reg FD) violations. The basic tenet is to never put MNPI into any AI tool unless the tool has the requisite security, as defined or required by the company's security experts, and has been explicitly approved for this particular use by company management.
AI Will Not Replace You. But Someone Using AI Might.
There is a prevailing fear that AI is somehow going to take over the world. But the technology itself is not likely to take your job. It's smart users of the technology who will.
AI is transforming how IR professionals work, but using it responsibly starts with understanding how it works. From summarizing complex reports to enhancing stakeholder communication, AI can be a powerful tool when used thoughtfully. Start by learning the basics, implementing clear policies, and exploring trusted tools to unlock its full potential.

"Yes, mother." That might not be the way you're talking to AI, but Geoffrey Hinton, the godfather of AI, says that when it comes to surviving superintelligence, we shouldn't play boss — we should play baby. Speaking at the Ai4 conference in Las Vegas on Tuesday, the computer scientist known as "the godfather of AI" said we should design systems with built-in "maternal instincts" so they'll protect us — even when they're far smarter than we are. "We have to make it so that when they're more powerful than us and smarter than us, they still care about us," he said of AI. Hinton, who spent more than a decade at Google before quitting to discuss the dangers of AI more openly, criticized the "tech bro" approach to maintaining dominance over AI. "That's not going to work," he said. The better model, he said, is when a more intelligent being is being guided by a less intelligent one, like a "mother being controlled by her baby." Hinton said research should focus not only on making AI smarter, but "more maternal so they care about us, their babies." "That's the one place we're going to get genuine international collaboration because all the countries want AI not to take over from people," he said. "We'll be its babies," he added. "That's the only good outcome. If it's not going to parent me, it's going to replace me." AI as tiger cub Hinton has long warned that AI is advancing so quickly that humans may have no way of stopping it from taking over. In an April interview with CBS News, he likened AI development to raising a "tiger cub" that could one day turn deadly. "It's just such a cute tiger cub," he said. "Now, unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry." One of his biggest concerns is the rise of AI agents — systems that can not only answer questions but also take actions autonomously. "Things have got, if anything, scarier than they were before," Hinton said. AI tools have also come under fire for manipulative behaviour. In May, Anthropic's latest AI model, Claude Opus 4, displayed " extreme blackmail behavior" during a test in which it was given access to fictional emails revealing that it would be shut down and that the engineer responsible was supposedly having an affair. The test scenario demonstrated an AI model's ability to engage in manipulative behavior for self-preservation. OpenAI's models have shown similar red flags. An experiment conducted by researchers said three of OpenAI 's advanced models "sabotaged" an attempt to shut it down. In a blog post last December, OpenAI said its own AI model, when tested, attempted to disable oversight mechanisms 5% of the time. It took that action when it believed it might be shut down while pursuing a goal and its actions were being monitored.