You Can’t Automate What You Don’t Understand
In the past eighteen months, the story of AI has felt like a boom without a bust. But for those of us working in fund operations and private markets, the question isn't whether the hype is real; it's whether the results are.
What does it mean, in practice, for a fund administrator to embrace AI? What would it take for private markets operations to actually benefit from it?
At Palmer, we’ve always seen technology as a strategic advantage, not just a toolkit. And while we’re excited by the progress in large language models (LLMs), AI agents and retrieval-augmented generation (RAG), we remain firmly rooted in the realities of our industry: one defined by regulatory oversight, trust-based relationships and, above all, reliability.
AI Isn’t One Thing
When people talk about AI today, they often mean generative AI, but that’s just one dimension. The real shift is in how multiple branches of AI are converging. Traditional machine learning, bidirectional transformers such as BERT for classification and extraction, RAG built on embedding models, and LLMs fine-tuned and enhanced with reasoning capabilities are now combining into composite systems that can reason, search, plan and act. These aren’t just smarter chatbots; they’re emerging as task-level collaborators.
This convergence has unlocked new use cases in financial services, including fund operations, by extending automation into knowledge work. But power without precision doesn’t help our industry. In private markets, outcomes must be controlled, auditable and compliant. That’s where the real challenge lies and where we’re focusing our attention at Palmer.
The difference now is scale. These models can reason across vast inputs, retrieve dynamic information and produce structured outputs at speed. Structured outputs like JSON or XML are particularly important in fund administration, where systems require defined formats to integrate with APIs (and emerging protocols), fund accounting platforms, regulatory reporting systems and reporting workflows.
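As a sketch of why defined formats matter, the fragment below shows a downstream system rejecting model output that doesn't match an expected JSON shape. The field names are invented for illustration, not any real platform's API.

```python
import json

# Hypothetical required shape for a capital-call notice extracted by a model.
# Field names and types here are illustrative assumptions, not a real schema.
REQUIRED_FIELDS = {"fund_id": str, "investor_id": str, "amount": str, "due_date": str}

def validate_notice(raw: str) -> dict:
    """Parse model output and reject anything that doesn't match the expected shape."""
    data = json.loads(raw)  # raises ValueError (JSONDecodeError) on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"bad type for {field}")
    return data

good = '{"fund_id": "F-001", "investor_id": "LP-17", "amount": "250000.00", "due_date": "2025-06-30"}'
notice = validate_notice(good)  # passes; malformed output never reaches the platform
```

In practice this gate sits between the model and the accounting or reporting system, so free-text drift in the model's answer fails loudly instead of corrupting a downstream record.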
The emergence of “vertical AI”, specialised systems built and tuned for legal, compliance or fund operations tasks, isn’t speculative. It’s already happening. And it’s not about replacing people. It’s about accelerating them.
If a workflow doesn’t make sense manually, it won’t work better with an agent strapped to it.

Where AI Could Work
We see a credible path forward for private markets and vertical AI agent systems. They will need to handle repetitive, structured fund operation tasks throughout the whole life cycle of a fund with high accuracy and traceability. The building blocks are already here:
- RAG pipelines that pull the latest fund documents, answer DDQs and draft policies with regulatory citations.
- Agents that can plan multi-step workflows: searching public and private documents, creating reports, reading emails, understanding complex documents, triggering API requests, and populating systems with structured data.
- SaaS tools embedding LLMs natively for document automation, board-minute drafting and RFP completion.
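To make the retrieval step concrete, here is a deliberately toy sketch. Production RAG pipelines score passages with embedding models and a vector store; simple keyword overlap stands in here purely to show the retrieve-then-cite shape, and the documents are invented.

```python
# Toy retrieval: rank candidate passages by keyword overlap with the query.
# Real pipelines replace this scoring with embedding similarity search.
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "The fund's redemption notice period is 90 days.",
    "Management fees accrue quarterly in arrears.",
    "Side letters may grant MFN rights to certain investors.",
]
context = retrieve("What is the redemption notice period?", docs)
# The retrieved passages are then placed into the model's prompt,
# with citations back to the source document for traceability.
```

The design point is the separation: retrieval is a deterministic, inspectable step, so every answer can cite the exact passage it drew on.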
But let’s be clear: these tools aren’t magic. They need context, structured data, clear processes and governance. Otherwise the output is as messy as the input, and unreliable output is unusable.
Reliability Isn’t Optional
In our world, we don’t get partial credit. Fund allocations must reconcile to the penny. Regulatory filings must be exact. Financial records must stand up to scrutiny and audit.
Which is why determinism matters. AI that guesses isn’t good enough. This presents a fundamental conflict with the probabilistic nature of AI systems, particularly neural networks and large language models, which generate responses based on statistical patterns in training data rather than fixed rules.
These systems are inherently non-deterministic, meaning the same input may not always produce the same output, making consistency and validation more complex in regulated settings.
Agentic AI systems must have clear rules, traceable actions and human review points. That’s not just a technical challenge. It’s a governance imperative.
One promising route to address this challenge is through vertical AI systems. These purpose-built solutions can incorporate domain-specific rules, context and processes directly into their architecture, offering a more deterministic layer on top of probabilistic models.
By combining these pre-defined elements, structured outputs and controlled tool use, they can deliver more predictable and auditable results. This hybrid approach allows organisations to benefit from the reasoning and language capabilities of LLMs while maintaining compliance, control and reliability across regulated tasks. Decisions can then be audited, defended and trusted.
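As a minimal illustration of such a deterministic layer (all figures invented), a reconciliation gate can sit between probabilistic extraction and any downstream posting: whatever a model proposes must reconcile to the penny before it is accepted.

```python
from decimal import Decimal

# Deterministic gate on top of probabilistic extraction: allocations a model
# proposes must sum exactly to the fund total before anything posts.
# Decimal avoids the rounding drift of binary floats on monetary values.
def allocations_reconcile(total: str, allocations: list[str]) -> bool:
    return sum(Decimal(a) for a in allocations) == Decimal(total)

# A one-penny discrepancy fails the gate and is routed to human review.
ok = allocations_reconcile("1000000.00", ["400000.00", "350000.00", "250000.00"])
bad = allocations_reconcile("1000000.00", ["400000.00", "350000.00", "250000.01"])
```

The model may be non-deterministic, but the rule is not: the check either passes or it doesn't, and both the inputs and the verdict can be logged for audit.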
So, What Are We Doing?
At Palmer, we’ve chosen to focus less on the "wow" and more on the "how":
- Standardise, then systemise, what you can: turn procedures into workflows, systemise process checklists, build client onboarding packs with good SaaS software, keep on top of governance documentation, and standardise the formats of outputs such as notices, reports and minutes. Structured formats lead to better AI results.
- Clean your data: understand your operational and client Master Data. Build out your Master Data Management, adopt naming conventions and taxonomies, and build data pipelines so data is easily understood, secure and readily available. None of this is glamorous, but it is essential.
- Review your software stack: which vendors have an API, and how are they addressing interoperability? How are they addressing the growing complexity of our industry? Do you benefit from CI/CD, so you stay on top of innovation and developments? Do you understand their Master Data and data models, and how they fit into yours? How are they handling AI, data security and data management, and how do you verify this? What are your vendor security, exit and interoperability protocols?
- Start with ROI, not AI: if a workflow doesn’t make sense manually, it won’t work better with an agent strapped to it. A workflow should be well documented and battle-tested before it is systemised.
- Train your teams: prompting is a skill. So is judging AI output. Data security is paramount. Build capability in teams, not dependence on IT.
Most importantly, we’re making sure our AI strategy supports, not disrupts, our clients, our controls and our commitment to precision.
Looking Ahead
Vertical AI systems, tailored to fund operations, will come. We believe they’ll be powerful. But they’ll only succeed if they reflect the actual complexity of the work: fund structures, jurisdictional nuances, compliance obligations and bespoke reporting.
That’s our edge. Not just being good with AI (and we are pretty good), but being great with funds: knowing where the edge cases are, what the regulator cares about and what our clients expect, and building accordingly.
Because in private markets, that understanding is your biggest asset.