Some firms feel they've missed out on the "AI revolution" and wonder whether they can make up ground quickly by throwing everything into an LLM and querying it with a chat agent. Others have already committed to a different AI strategy, but with everyone talking about how great LLMs are, they wonder: are we missing out? Is there an option to simply push all your data into an LLM and use it as an all-seeing, all-knowing guru? The saying "if the only tool you have is a hammer, you tend to see every problem as a nail" could not be more apt than when applied to LLMs and AI!

All Roads Lead To LLMs
It is extremely tempting to think you could solve every problem by throwing all of your data at an LLM and then asking it questions. Surely that is what it was built for? And when you try it, you actually receive answers. Brilliant! So what do you have to watch out for?
LLM Problem 1: Data privacy
Every time you use an LLM, where do your prompts go? If you upload data or images, where do they go? Depending on the type of LLM solution you're using, the provider may be collecting, storing and using that data to retrain its models. By default, almost all providers use the prompts and data from their free versions for training purposes, and turning that off can cost extra money.
We will be exploring the different types of LLM solutions your firm could be using in a future article.
LLM Problem 2: Lack of levers to pull
Great, so you've uploaded the data you want to use, and you're satisfied with the privacy setup. But now you've entered a prompt and you're clearly not getting the right answer. You tweak the prompt slightly, and you're still not getting the right answer. Where do you turn now?

Only one lever
Here's your issue: an LLM offers only one lever to pull, the prompt. If you don't like what you're getting, it's extremely difficult to fine-tune for the right answers, put the right guardrails in place, or guarantee deterministic output. In fact, you can waste many an hour coaxing an LLM into producing something that would take you seconds in Excel. LLMs are extremely good at some things and terrible at others.
LLM Problem 3: Limited Context Window
Again, you've uploaded the data you want to use, and now you're asking the LLM questions. You start to notice that the answers bear no relation to what you uploaded. In fact, it seems the LLM is using whatever it wants, without telling you that your data source is being ignored. What's happening?
As mentioned previously, one of the defining features of an LLM is its context window. Think of this as its working memory: the number of tokens (roughly, fragments of words) it can retain from the current input and its previous responses. Increasing this window adds significantly to costs, so at some point there is a cut-off, beyond which even files you've uploaded get "forgotten". At that point, you'll have to re-upload the data for it to be used again. Rather frustratingly, the LLM won't even tell you it has stopped using that data, and will confidently tell you it's happy with the output.
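The "forgetting" behaviour can be sketched in a few lines of code. This is a hedged illustration only: real LLMs use proper tokenizers (one word is often several tokens), window sizes vary by model, and the names below (`count_tokens`, `trim_history`) are hypothetical stand-ins, not any provider's API.

```python
CONTEXT_WINDOW = 50  # illustrative token budget, not a real model's limit

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per word.
    return len(text.split())

def trim_history(messages: list[str], window: int = CONTEXT_WINDOW) -> list[str]:
    """Keep only the most recent messages that fit in the window.
    Anything older, including uploaded file contents, is silently
    dropped; this is why the model appears to 'forget' your data."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > window:
            break                           # everything older is forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Run this with a long uploaded file at the start of the conversation and a few short exchanges after it, and the file is the first thing to fall out of the window, with no warning to the user.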
LLM Problem 4: Black Box
Ok, you've got the output from the LLM. You're about to use the answer it's given you, but you want to be sure that the answer is correct. How do you do this? It is very, very difficult to see exactly how an LLM has arrived at its answer, and if you're using a commercial solution, it is nearly impossible. You are effectively outsourcing your intelligence to a solution you don't control.
What to do: Pick the right tool for the job
In a previous article, we attempted to define AI as "[helping] to estimate approximate solutions to problems which are otherwise difficult (or otherwise impossible) to solve." To expand on how you should actually be using AI, it is critical to understand exactly what problem you are trying to solve. No matter what line of business you're in, you can analyse your use cases to a level of detail where it's clear what the problem is (what question are you trying to answer?) and what system the solution might fit into.
For example, I might be a content creator looking to ensure that I send relevant content only to people who are interested in the subject matter.

Creating and sending content, old style
We can split this larger problem into several smaller steps, each with their own question to answer:
How is the content created?
How do we know what the content is about?
How do we know which people are interested in that content?
By breaking the problem down into these questions, we can start to understand the data needed and how we could use AI to answer each question.
Potential Solutions
| Problem/Solution | LLM | Other ML solution | Human |
| --- | --- | --- | --- |
| Content creation | Prompt inputs leading to content output | Bespoke model trained on previous content; enormous training effort and likely very poor output | Write a new article every time |
| Content tagging | Taxonomy uploaded and tagging automated | Tags learned from many previously author-tagged articles | Manual tagging by the author |
| Content interest | Prompt the LLM with data collected from previous readership | Train an appropriate model on data collected from previous readership | Recipients manually chosen from a list |
Using LLMs only
Consider if we just used an LLM for all of this. We could end up with:
Well-written but non-factual content
Brilliant tags
Sent to people who couldn’t care less about the subject matter
Using “other” ML solutions only
Similarly, using only "other" ML solutions, we could end up with:
Unreadable content
Irrelevant content tags
Sent precisely to the people who care about the subject
Human only
Or, we could fall back on the human content creator doing all of the above:
Time-consuming but well-written content
Poorly chosen content tags
Sent to people who couldn’t care less about the subject matter
Clearly, just choosing one option in the above leads to a sub-optimal solution in one way or another.
Why not pick the best of all worlds? Humans write the content, LLMs tag it, and other ML solutions (such as a neural network or another interest model) select who should receive it. Then you have the right type of solution solving each problem, and not every problem looks like a nail!
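The best-of-all-worlds pipeline can be sketched as follows. This is a hedged sketch under assumptions: `tag_with_llm` and `score_interest` are hypothetical stand-ins (naive keyword matching and a simple overlap score) for a real LLM tagging call and a trained interest model respectively; only the shape of the pipeline, human-written article in, tagged and targeted content out, reflects the article's point.

```python
def tag_with_llm(article: str, taxonomy: list[str]) -> list[str]:
    # Stand-in for an LLM tagging call: naive keyword matching
    # against the uploaded taxonomy.
    text = article.lower()
    return [tag for tag in taxonomy if tag in text]

def score_interest(reader_history: list[str], tags: list[str]) -> float:
    # Stand-in for a trained interest model: fraction of the
    # article's tags the reader has previously engaged with.
    if not tags:
        return 0.0
    return len(set(reader_history) & set(tags)) / len(tags)

def choose_recipients(article: str, taxonomy: list[str],
                      readers: dict[str, list[str]],
                      threshold: float = 0.5) -> list[str]:
    # Human writes the article; LLM tags it; interest model
    # selects who should receive it.
    tags = tag_with_llm(article, taxonomy)
    return [name for name, history in readers.items()
            if score_interest(history, tags) >= threshold]
```

Each stage can then be swapped out independently: a better tagger, a retrained interest model, or a different author, without touching the rest of the pipeline.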

The right AI for the right job
When you’re considering your AI strategy, consider all of the tools you’d like in your firm’s toolkit. Would you build your house with only a hammer?
About the authors
Larry is a lifelong technologist with a strong passion for problem-solving. With over a decade of trading experience and another decade of technical expertise within financial institutions, he has built, grown, and managed highly profitable businesses. Having witnessed both successful and unsuccessful projects, particularly in the banking sector, Larry brings a pragmatic and seasoned perspective to his work. Outside of his professional life, he enjoys Brazilian Jiu-Jitsu, climbing and solving cryptic crosswords.
LinkedIn
Ash is a strategy and operations professional with 14 years of experience in financial services, driven by a deep passion for technology. He has led teams and projects spanning full-scale technology builds to client-facing strategic initiatives. His motivation comes from connecting people, processes, data and ideas to create solutions that deliver real-world impact. Beyond work, Ash enjoys exploring different cultures through food and cocktails and practices yoga regularly.
LinkedIn

