
Know Your LLM

  • Writer: Ryan Bunn
  • Jan 25
  • 4 min read

LLMs offer transformative potential in investment management, but investors must always do their own research. Knowing your LLM is essential.

 



"Do your own research" is a core tenet of investing. The conviction to hold investments during inevitable downturns stems from personal comfort with the investment itself. This principle poses a significant challenge when leveraging AI in investment management. Large Language Models (LLMs) are inherently black-box algorithms and often obscure the "logic" behind their recommendations, even when probed.


Unraveling the mystery behind an LLM's recommendations begins with understanding the models themselves.


Word Searchers


LLMs, such as ChatGPT, are surprisingly straightforward tools. When answering a question, an LLM generates a response one word at a time, at each step choosing the most logical or statistically likely word to follow. In this manner, an LLM can construct coherent sentences and paragraphs.[1] More powerful LLMs possess greater "knowledge" of the world, enabling them to provide better word recommendations and appear "smarter."
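As a rough illustration of that word-by-word loop, here is a minimal sketch in Python. The tiny hand-made probability table and the generate helper are invented for illustration; a real LLM scores tens of thousands of candidate words with a neural network at every step.

```python
# A toy illustration of next-word generation: at each step, pick the most
# likely word to follow the current one. The probability table below is
# invented; in a real LLM these numbers come from a trained neural network.
bigram_probs = {
    "the":    {"market": 0.4, "investor": 0.3, "model": 0.3},
    "market": {"is": 0.6, "fell": 0.4},
    "is":     {"volatile": 0.5, "efficient": 0.5},
}

def generate(start_word: str, max_words: int = 5) -> str:
    words = [start_word]
    while len(words) < max_words and words[-1] in bigram_probs:
        candidates = bigram_probs[words[-1]]
        # Greedy choice: always take the single most probable next word.
        next_word = max(candidates, key=candidates.get)
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # -> "the market is volatile"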


Interestingly, sometimes selecting the second or third most probable word, rather than always the first, makes LLMs appear more human-like. Some randomness is intentionally built into how the next word is selected, which is why you rarely get exactly the same answer twice when querying an LLM.
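A small sketch of that idea, again with an invented probability table: instead of always taking the top word, the next word is drawn at random in proportion to the probabilities, and a "temperature" setting (a real knob on most LLM APIs) controls how often lower-ranked words get picked.

```python
import random

# Instead of always taking the most probable word, sample in proportion to
# the probabilities. A temperature above 1 flattens the distribution, making
# second- and third-ranked words more likely, so repeated runs differ.
def sample_next(candidates: dict[str, float], temperature: float = 1.0) -> str:
    words = list(candidates)
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(words, weights=weights, k=1)[0]

candidates = {"market": 0.4, "investor": 0.3, "model": 0.3}
for _ in range(3):
    print(sample_next(candidates, temperature=1.5))  # output varies run to run
```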


Despite this simplicity, it is remarkable that by merely recommending one word at a time, an LLM can converse coherently and provide helpful information.


Amazing Encodings


LLMs are neural networks, loosely modeled on the human brain. Training an LLM involves setting the weights on the connections between neurons; the "net" consists of billions of these weighted connections. When you send a prompt, it is passed through layer after layer of those weights, and the result determines the most likely next word of the answer. This process is repeated for every word in the response.
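To make "weights" concrete, here is a toy sketch of a single layer of such a network. The numbers are invented; in a real LLM there are billions of them, and they are fixed once training ends.

```python
import math

# A toy neural-network layer: each output is a weighted sum of the inputs
# followed by a simple squashing function. An LLM stacks many such layers;
# its "knowledge" is nothing more than the numbers stored in the weights.
def layer(inputs, weights, biases):
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

x = [0.2, -0.5, 0.9]                       # a tiny numeric encoding of the prompt
w1 = [[0.1, -0.3, 0.8], [0.5, 0.5, -0.2]]  # weights, fixed after training
b1 = [0.0, 0.1]
print(layer(x, w1, b1))                    # becomes the next layer's input
```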


One remarkable and surprising feat of LLMs is their ability to encode massive amounts of information—such as the entire public internet—within their neural nets. Essentially, all the knowledge of the internet resides in an LLM like ChatGPT, but it's stored as weightings in a neural net. You can download an open-source LLM onto a desktop computer today with only a modest amount of storage space! This represents incredible information compression.
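As a hedged, concrete example of how accessible this compression is: the sketch below assumes the open-source Hugging Face transformers library and the small, older GPT-2 model, whose weights take up only a few hundred megabytes, yet it generates text in exactly the word-by-word fashion described above.

```python
# A sketch of running a small open-source model locally, assuming the
# Hugging Face `transformers` library is installed. GPT-2 is an older,
# tiny model, but the point stands: the "knowledge" is just a file of
# weights you can download and run on a desktop computer.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The stock market is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```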


Minor Memory


Neural networks and machine learning have existed for over 40 years. The success of ChatGPT and other LLMs today results from a combination of improved computing power (most of it spent training the models rather than running them) and engineering enhancements that let the LLM better "understand" the context around a question. That context is supplied by feeding the LLM prior questions and additional background, often behind the scenes, and sometimes through "super prompts" written by individuals to steer the model down different parts of its neural net and improve results.
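Because the model itself has no memory between calls, that context is typically supplied by pasting the conversation so far back into each new prompt. A minimal sketch, where ask_llm is a hypothetical stand-in for whatever model call you use:

```python
# The model is stateless: "memory" is created by resending prior turns.
# `ask_llm` is a hypothetical placeholder; swap in a local model or an API call.
def ask_llm(prompt: str) -> str:
    return "(model response)"

history: list[tuple[str, str]] = []

def chat(user_message: str) -> str:
    # Rebuild the whole conversation as plain text and send it all at once.
    context = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in history)
    prompt = f"{context}\nUser: {user_message}\nAssistant:"
    answer = ask_llm(prompt)
    history.append((user_message, answer))
    return answer

print(chat("What is an LLM?"))
print(chat("And how does it remember my last question?"))  # prior turn rides along in the prompt
```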


There are undoubtedly many more engineering feats behind LLMs' recent success, which are beyond my knowledge. We should expect continued improvements as these technologies advance.

 

But That's It


Ultimately, when you use an LLM, you are simply receiving probability-weighted words. This understanding allows for a better estimate of the future usefulness of LLMs. Any creative or innovative response from an LLM is essentially random—the random construction of words that plants a creative idea in your head. There is no actual thinking involved; the knowledge in the LLM, the node weighting, is fixed, with a touch of randomness for variety.


Solving Solved Problems


The incredible encoding power of LLMs, and the ease of interacting with them, make them perfect for supporting or automating tasks that are already solved. Does your company have an HR benefits pamphlet? Simply encode it in an LLM and ask the LLM questions instead of asking your HR manager. Writing a short story and need a surprising twist at the end? Ask ChatGPT, which has encountered hundreds or thousands of creative plot twists; you will likely receive a recommendation you never thought of (but one that was probably created by a human in another story).
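A hedged sketch of the pamphlet idea: the simplest version just pastes the document into the prompt and asks the question against it (larger document sets usually add a retrieval step first). The file name and the ask_llm helper from the earlier sketch are hypothetical.

```python
# Answer questions from a company document by stuffing it into the prompt.
# `ask_llm` is the hypothetical model call from the earlier sketch;
# "hr_benefits.txt" is an invented file name for illustration.
def ask_llm(prompt: str) -> str:
    return "(model response)"

def answer_from_document(document: str, question: str) -> str:
    prompt = (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{document}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return ask_llm(prompt)

with open("hr_benefits.txt") as f:
    pamphlet = f.read()
print(answer_from_document(pamphlet, "How many vacation days do new hires get?"))
```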


LLMs in Investment Management


LLMs have the potential to be excellent investment analysts, but poor stock pickers. Armed with deep insights into persuasive writing and human biases, LLMs can create compelling investment "narratives." However, these narratives often lack substance; they are, after all, just strings of words. We can read the output and recognize that it makes sense, while knowing it may be little more than a façade.


This issue arguably already exists in the industry. Many investment analysts pitch stocks because they are paid to do so, not because they have found exceptional investments. Investment analyst training materials often emphasize creating compelling "narratives" to attract capital allocations from portfolio managers.


Understanding the functioning of LLMs, along with their potential and limitations, will be essential as AI is increasingly adopted across all industries. Knowing your LLM's capabilities and constraints will be critical to successfully leveraging AI where it is appropriate and avoiding its use where it is not.


[1] Notice that LLMs are quite poor at numerical tasks. For instance, when asked for an average exchange rate, an LLM will not look up daily exchange rates and calculate an average; instead, it will simply put a sentence together such as "The average annual exchange rate is X," where "X" will typically be sourced from a website that has already calculated the answer.

 

Further Reading


Stephen Wolfram’s blog post (or the book version, at a higher cost) is a great, reasonably short introduction to the technical workings behind ChatGPT.



 


