A recent Investor Survey highlighted an important new trend in investor behaviour that should make any corporate communicator sit up: “The vast majority of investor audiences (94%) said they use these AI/machine learning tools to harvest data.”
Take a second to let that statement sink in. From now on you need to think of it this way: every single piece of communication about your business has a new audience. From an ephemeral social media post or an article by an up-and-coming journalist right through to a detailed annual report, investors' machines are reading it.
Machine metropolis
It’s easy to characterise these machines as cold, calculating robots, like the ones in the sci-fi films we’ve been watching since the 1920s. And, in a sense, they are.
But the difference today is the multitude of machines in the hands of many different investors, making many different decisions. Your current investors have them. So do your future investors and activist shareholders. It is their intent, expressed as prompts, that frames each machine’s inquiry.
Think of these many machines as digital agents in a massive machine metropolis, running errands, reporting back and making recommendations to your investors.
Top-down view
Another big-picture stat: last year there were over 160 million machine downloads of US 10-K financial results and statements. Take a second to let that stat sink in.
The language used within those documents is analysed, and investment decisions are then made on the back of that analysis, all without human intervention.
It’s easy to think it’s the same machine downloading all these reports and computing the same answer to the same question or prompt.
The thing is, it’s not true. AI is extending the role of technology in a complex decision-making process, one that already involves a wide range of people.
In complex decision-making processes there tend to be standardised roles in the unit: the ultimate decision maker, the gatekeeper, the disruptor, the influencer, the initiator, and the information collector.
Technology has long taken on much of the role of information collector. But generative AI is beginning to play the other roles too: the gatekeeper, the influencer and, of course, the disruptor.
And bottom-up
So, how is this going to change the way you communicate?
The truth is we’ve all been, on some level, creating content for machines for some time.
Search engine optimisation has been standard communications practice for many years. Another common digital practice is writing to be accessible for screen readers, used by people who find it difficult to access content online.
Writing for machines is not new.
But with the launch of AI search tools such as Bard and ChatGPT, we are looking at a future digital landscape where web content is written to be understood and re-shared by generative AI tools, in the so-called feedback effect.
Bloomberg recently launched BloombergGPT, its own large-scale generative AI model, which, according to the business, is “specifically trained on a wide range of financial data to support a diverse set of natural language processing (NLP) tasks within the financial industry.”
Today we are in an awkward wild-west stage of generative AI in which misinformation and inaccuracy are commonplace. With the feedback effect playing out in so-called machine-to-machine communication, there is a clear and obvious risk of cumulative divergence from fact. The legal (not to mention policy and ethical) risks are only just being grappled with and are set to occupy legal and other minds for some time. As a communicator, though, doing everything you can to ensure a clear connection and communication with machines is paramount.
Structure has always been important for machines. The progress being made with the XBRL taxonomy will help create rigour and clarity. Once these standards are embedded, we anticipate dynamic data on a range of critical issues our clients face, ushering in a new era of issues-based transparency and reporting.
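To make that concrete, here is a minimal Python sketch of how a machine might lift tagged facts out of a simplified, hypothetical XBRL-style fragment. Real filings use the official taxonomies, contexts and inline XBRL, so treat this as an illustration of why structure matters rather than a working filing parser.

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical XBRL-style fragment. Real filings use the
# official taxonomies, contexts and units, and are far richer than this.
FILING_FRAGMENT = """
<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2023">
  <us-gaap:Revenues contextRef="FY2023" unitRef="usd" decimals="-6">1250000000</us-gaap:Revenues>
  <us-gaap:NetIncomeLoss contextRef="FY2023" unitRef="usd" decimals="-6">180000000</us-gaap:NetIncomeLoss>
</xbrl>
"""

def extract_facts(xml_text: str) -> dict[str, float]:
    """Return every tagged fact as a simple {concept: value} mapping."""
    root = ET.fromstring(xml_text)
    facts = {}
    for element in root:
        concept = element.tag.split("}")[-1]  # strip the namespace prefix
        facts[concept] = float(element.text)
    return facts

print(extract_facts(FILING_FRAGMENT))
# {'Revenues': 1250000000.0, 'NetIncomeLoss': 180000000.0}
```

Because each number is tagged against a defined concept, the machine does not have to guess what it is looking at; that is the rigour and clarity the taxonomy promises.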
Data and narrative, the yin and yang of this emerging world, need to be more deeply intertwined. The ‘story in the numbers’ needs to come out into the open and be actively managed; otherwise it will be picked up and managed by AI.
Tone and sentiment have been the preserve of people, with the common refrain for machines being ‘they don’t get sarcasm’. This is changing, but key words and phrases can unintentionally create ambiguity for generative AI if left unchecked.
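To show how this happens, below is a deliberately naive, hypothetical keyword-based scorer. The word lists are invented for illustration (not a real financial lexicon such as Loughran-McDonald), but the effect is the same: individual words can drag machine-read sentiment away from the intended message.

```python
# A deliberately naive keyword-based sentiment scorer. The word lists are
# invented for illustration and are not a real financial sentiment lexicon.
NEGATIVE = {"decline", "loss", "impairment", "restructuring", "uncertainty"}
POSITIVE = {"growth", "record", "strong", "improved", "resilient"}

def naive_sentiment(text: str) -> int:
    """Count positive hits minus negative hits, ignoring context entirely."""
    words = [word.strip(".,;:").lower() for word in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sentence = ("Despite restructuring costs and market uncertainty, "
            "the group delivered record growth.")
print(naive_sentiment(sentence))
# 0 -- 'record' and 'growth' are cancelled out by 'restructuring' and 'uncertainty'
```

A human reader hears a broadly positive message; the scorer hears the negatives cancelling it out. That is exactly the kind of unintended ambiguity worth checking for.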
Readability (something that comes up with search engine optimisation and, to be honest, is more about technology and coding) is crucial. Clean, well-managed code ensures machines can access your content as it is intended and retrieve a clear message. How forms and tables are coded, for example, matters for AI readability.
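As a small illustration (assuming the pandas and lxml libraries are available, and with invented figures), a cleanly coded HTML table can be lifted straight into structured data:

```python
import io
import pandas as pd  # assumes pandas (with lxml) is installed

# A small, cleanly structured HTML table with proper <th> headers and one
# fact per cell. The figures are invented for illustration.
GOOD_TABLE = """
<table>
  <tr><th>Segment</th><th>Revenue (GBPm)</th><th>Change (%)</th></tr>
  <tr><td>Retail</td><td>420</td><td>5.2</td></tr>
  <tr><td>Wholesale</td><td>310</td><td>-1.8</td></tr>
</table>
"""

# Because the markup is clean, the table parses straight into structured data.
df = pd.read_html(io.StringIO(GOOD_TABLE))[0]
print(df.to_dict(orient="records"))
# [{'Segment': 'Retail', 'Revenue (GBPm)': 420, 'Change (%)': 5.2},
#  {'Segment': 'Wholesale', 'Revenue (GBPm)': 310, 'Change (%)': -1.8}]
```

The same numbers locked inside an image, or a table built from layout markup, would force the machine to guess, and guesses are where misreadings start.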
Finally, as machines take more responsibility in the decision-making process, we also need to adjust our perspective on what the people in the decision-making unit need. More engaging content that actively draws investors (the human kind) into the heart of the decision is more important than ever.
This is clearly an emerging, exciting area for communicators. We are working through it in a planned beta programme covering these issues and more.