Understanding the risks and realities of deploying Agentic AI inside large organisations
In the current hype cycle, “Agentic AI” is pitched as the ultimate AI silver bullet. The truth is subtler. This piece is an attempt to translate the concept into plain language for executives: what agents actually are, how they differ from models and what has made them feasible now.

Agents
You’ve already met an agent if you’ve used a commercial LLM. We wrote previously about a chat agent, which takes your input and interfaces with the LLM, so that the LLM does the “thinking” and the chat agent does the “putting together”. But did you know that there are likely many other agents running tasks in the background, allowing commercial LLMs to do all sorts of things that wouldn’t have been possible not so long ago?
What Is An Agent?
An agent is a piece of software that acts with some degree of autonomy towards a goal. Each agent is usually given a focused task with clear success criteria, for example, extracting data, checking its quality or drafting a slide. Where agents become powerful is in combination: for instance, one agent creates, a second checks the first one’s work, whilst a third orchestrates the workflow and plans the next steps.
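The creator/checker/orchestrator pattern above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the three agents here are plain Python functions standing in for LLM calls, and all the names (`draft_summary`, `check_summary`, `run_workflow`) are hypothetical.

```python
def draft_summary(text: str, feedback: str = "") -> str:
    """Creator agent: proposes a summary (a real agent would call an LLM)."""
    summary = text[:60].strip()
    if feedback:  # incorporate the checker's feedback on a retry
        summary += " [revised: " + feedback + "]"
    return summary

def check_summary(summary: str) -> tuple[bool, str]:
    """Checker agent: verifies the creator's work against success criteria."""
    if len(summary) == 0:
        return False, "summary is empty"
    return True, ""

def run_workflow(text: str, max_retries: int = 3) -> str:
    """Orchestrator agent: runs the create-check loop until the check passes."""
    feedback = ""
    for _ in range(max_retries):
        candidate = draft_summary(text, feedback)
        ok, feedback = check_summary(candidate)
        if ok:
            return candidate
    raise RuntimeError("checker never approved the draft")
```

The useful property is the separation of concerns: each agent has one focused task with clear success criteria, and the orchestrator is the only piece that knows how they fit together.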
Why Is This Important?
It is important to understand how an agentic workflow fits together, because separate agents can be rolled out in your organisation with individual, specific tasks, allowing for human-in-the-loop oversight and real grass-roots problem solving. Instead of trying to solve problems “big bang”, you can break rollout strategies into bite-size chunks, each playing towards a bigger picture.

Let’s have a look at how agents have evolved over time:
Then: Using Agents for Training
A helpful example is the training of GANs (Generative Adversarial Networks). The training dynamic is instructive: a Generator agent proposes, a Discriminator agent critiques, and through this adversarial loop the system improves. Agentic workflows borrow that spirit, pairing creators with checkers, and extend it into real-time action, where agents can plan, retry and adapt in the wild. Once trained, only the generative half of the GAN is needed, and so the Generator is what you’ll interact with.
Now: Manually Creating Agentic Workflows using LLMs
Modern agentic frameworks let you specify a goal in natural language, for instance “build an agent to summarise this memo and draft tomorrow’s client pitch deck” and “build an agent to check the output of the memo and ensure it meets these criteria”.
The difference between now and only a few years ago is that such frameworks exist at all. By harnessing the power of generative AI, you can describe what you want an agent to do in a chat prompt, and the framework will guide you through setting up the agents without you needing to know how to program. This lowers the barrier to entry almost to zero, meaning agents can now be deployed en masse across a workforce.
Unlike earlier software, you don’t need to hard-code rules in VBA or Python. You just describe the outcome and the framework helps spin up the agents to achieve it.
Future: Allowing an Agent to Perform Large, Multi-Legged Tasks
Agency moves into real-time execution. You should be able to specify multi-legged tasks spanning multiple systems, inputs and data sources, and the agentic AI will plan and handle all the sub-tasks the larger task requires.

LLMs vs Agents: Two Very Different Nights at the Office
Night with only an LLM
It is 2:15 a.m. The analyst’s screens glow against a desk scattered with half-empty coffee cups and red-lined pitch drafts. The prompt goes in: “Summarise this 20-page analyst report for tomorrow’s meeting.”
Outwardly, nothing stirs. The cursor blinks. No terminals connected, no models updated, no emails sent.
Inside the model though, computation ignites. The document is sliced into tiny chunks or tokens. Each token is turned into a vector or a string of numbers - the model’s way of representing meaning. Those numbers then flow through a vast network with billions of adjustable weights, guiding how the model predicts the next word.
Think of it as super-charged autocomplete: the model fills in blanks word by word, exploring many possible sentences in parallel and keeping only the most convincing ones. Along the way, it draws only on patterns learned during training - it cannot look anything up - and polishes the tone so the result reads like something you’d actually send to a client.
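To make the “super-charged autocomplete” idea concrete, here is a toy next-word loop. The probability table is hand-written and entirely hypothetical; a real LLM derives these probabilities from billions of learned weights acting on token vectors, and considers a vocabulary of tens of thousands of tokens at every step.

```python
# Toy next-word probabilities (hypothetical; a real model learns these).
NEXT_WORD = {
    "the":        {"report": 0.6, "meeting": 0.4},
    "report":     {"highlights": 0.7, "ends": 0.3},
    "highlights": {"risk": 0.9, "growth": 0.1},
}

def autocomplete(prompt: str, steps: int = 3) -> str:
    """Greedy decoding: repeatedly append the most likely next word."""
    words = prompt.split()
    for _ in range(steps):
        options = NEXT_WORD.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        # keep only the most convincing next word
        words.append(max(options, key=options.get))
    return " ".join(words)
```

Calling `autocomplete("the")` walks the table to produce “the report highlights risk” - one word at a time, exactly the fill-in-the-blank loop described above, just at a vastly smaller scale.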
All of this orchestration happens silently, inside a closed system. We just don’t get to see the workings.
Then the output appears on the screen: two neat paragraphs, structured bullets, a closing caveat. The analyst reads, copies, pastes. The room remains unchanged. No systems were touched.
That is the nature of an LLM: sophisticated internal orchestration yielding an output.
Night with Agentic AI
Now, the request can be larger, many steps further: “Turn this report into tomorrow’s client pitch.”
The agent begins in the same place the analyst did with the LLM. It first summarises the 20-page analyst report, condensing the key findings into a tight narrative, but instead of stopping there, it pauses to plan how to turn those findings into a full client pitch.
The plan takes shape:
Use the summary to identify the main themes
Pull updated market data from Bloomberg
Reconcile exposures in the risk system
Refresh the financial model with the new assumptions
Draft and format slides aligned to house style
Prepare an email to the deal team and schedule a run-through meeting
Execution begins.
Bloomberg comes first. Fresh data flows into valuation tables. The risk system follows - exposures are updated, mismatches flagged and when one reconciliation breaks, the agent applies its fallback procedure to keep the workflow moving.
Next, the financial model - Pitch_Model_v26.xlsx. Assumptions are adjusted based on the report’s findings. Sheets are recalculated. Where numbers breach thresholds, a compliance note is added and a new version is saved with today’s date.
Then PowerPoint. Slides repopulate with updated charts, the LLM-drafted commentary slots into speaker notes, tone is polished to fit house style. A footnote cites the data cut-off for regulatory hygiene.
Finally, Outlook. A draft email to the deal team is written, attaching the deck. Before sending, the agent pauses: “Draft prepared. Approve to circulate?”
The analyst clicks “Approve”. Calendars are scanned and at 08:15 a gap is found. A run-through meeting is booked with a Teams link included. The audit log closes: “Workflow complete. Pending explanation, slide 12.”
The office feels a bit different now. Systems touched, models updated, slides refreshed, emails queued, diaries rearranged.

It’s worth remembering: every “external” action the agent just took - refreshing a spreadsheet, drafting a slide, booking a meeting - was preceded by a flurry of invisible internal reasoning. It had to decide which tool to call, what parameters to send, how to handle errors and when to move to the next step. None of this is visible to the user; the orchestration is hidden behind the interface, and that is precisely where the risks and governance challenges begin.
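That hidden orchestration loop can be sketched as follows. Everything here is illustrative: the tool names, the plan and the stub implementations are all hypothetical stand-ins for real enterprise integrations. The sketch shows the decisions the paragraph above describes: which tool to call, with what parameters, how to handle a failure, and where to pause for human approval.

```python
def fetch_market_data(ticker: str) -> dict:
    """Stub standing in for a market-data call (e.g. to a Bloomberg feed)."""
    return {"ticker": ticker, "price": 101.2}

def draft_email(to: str) -> str:
    """Stub standing in for drafting an email in the mail client."""
    return f"Draft email to {to} with deck attached"

TOOLS = {"fetch_market_data": fetch_market_data, "draft_email": draft_email}

# A hypothetical plan: outward-facing steps are gated on human approval.
PLAN = [
    {"tool": "fetch_market_data", "args": {"ticker": "ACME"}, "needs_approval": False},
    {"tool": "draft_email", "args": {"to": "deal-team"}, "needs_approval": True},
]

def run(plan, approve):
    """Execute each step, logging results; keep moving when a step fails."""
    log = []
    for step in plan:
        if step["needs_approval"] and not approve(step):
            log.append(("skipped", step["tool"]))
            continue
        try:
            result = TOOLS[step["tool"]](**step["args"])
            log.append(("ok", result))
        except Exception as err:  # fallback: record the error, carry on
            log.append(("failed", str(err)))
    return log
```

The `approve` callback is the human-in-the-loop checkpoint - the “Approve to circulate?” pause in the story above - and the log is the audit trail that governance teams will want to inspect.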
This is the promise of agentic AI: using LLMs where needed and orchestrating across enterprise systems to deliver a finished product. It is important to note that the scenario above, depicting agentic AI’s full capabilities, remains largely hypothetical. While the promise is significant, a comprehensive framework for such multi-legged tasks across enterprise systems does not yet fully exist. In practice, most organisations end up stitching together partial capabilities with a human in the loop. The vendor’s brochure might look seamless, but reality is far messier.
About the authors
Larry is a lifelong technologist with a strong passion for problem-solving. With over a decade of trading experience and another decade of technical expertise within financial institutions, he has built, grown, and managed highly profitable businesses. Having witnessed both successful and unsuccessful projects, particularly in the banking sector, Larry brings a pragmatic and seasoned perspective to his work. Outside of his professional life, he enjoys Brazilian Jiu-Jitsu, climbing and solving cryptic crosswords.
LinkedIn
Ash is a strategy and operations professional with 14 years of experience in financial services, driven by a deep passion for technology. He has led teams and projects spanning full-scale technology builds to client-facing strategic initiatives. His motivation comes from connecting people, processes, data and ideas to create solutions that deliver real-world impact. Beyond work, Ash enjoys exploring different cultures through food and cocktails and practices yoga regularly.
LinkedIn

