The adapter pattern for downstream delivery
Getting fund data in is only half the problem. Getting it out - to websites, portals, reporting systems, regulatory filings - is where things get properly messy. Every destination speaks a different language, and your core pipeline shouldn't have to learn any of them.
We deliver data to platforms like Kurtosys, to custom client portals, to SFTP drops for downstream consumers, and eventually to regulatory reporting systems. Each has its own authentication model, data format, entity structure, and update semantics. Embedding that knowledge in the core pipeline would be engineering malpractice.
Why adapters, not direct integrations
The core pipeline works with a canonical data model. Every fund, share class, and data point is stored in a normalised schema using Openfunds field IDs. The pipeline doesn't know or care where data ends up. It just validates, enriches, and stores.
An adapter sits between the canonical model and a specific destination. It handles three things:
- Field translation - mapping from Openfunds field IDs to the destination's property codes
- Format transformation - reshaping the data into whatever structure the destination expects
- Delivery mechanics - authentication, rate limiting, error handling, retry logic
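The delivery-mechanics bullet hides most of the work. A minimal sketch of retry with exponential backoff, assuming an injected `send` function and illustrative attempt limits (none of these names come from the real system):

```typescript
// Sketch of retry-with-backoff for the delivery step. The attempt limit,
// base delay, and injected send() function are illustrative assumptions.
async function deliverWithRetry<T>(
  send: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await send();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

Keeping this generic over `send` is what lets the same retry logic serve a REST API and an SFTP drop alike.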
If Kurtosys changes their API tomorrow, only the Kurtosys adapter changes. The pipeline, the validation rules, the data model - all untouched.
The Kurtosys adapter in practice
Kurtosys has a well-structured API, but it has specific opinions about how data should arrive. Here's what the adapter handles:
Entity hierarchy. Kurtosys organises data around entity types - funds, share classes, companies. Each entity has a clientCode that serves as the primary key. A fund's client code might be its Openfunds fund ID. A share class uses its ISIN. The adapter maps our internal identifiers to their expected codes.
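The clientCode derivation described above can be sketched with a discriminated union. The record shapes and example values are assumptions for illustration, not the production types:

```typescript
// Sketch of deriving a destination clientCode from internal identifiers.
// The entity shapes are illustrative; only the rule (fund -> fund ID,
// share class -> ISIN) comes from the text above.
type Entity =
  | { entityType: "fund"; fundId: string }
  | { entityType: "shareClass"; isin: string };

function toClientCode(entity: Entity): string {
  switch (entity.entityType) {
    case "fund":
      return entity.fundId; // fund keyed by its Openfunds fund ID
    case "shareClass":
      return entity.isin;   // share class keyed by its ISIN
  }
}
```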
Property code mapping. Where we use OFST005010 for Fund Legal Name, Kurtosys might use a property code like fund_name or a client-specific code from their Data Dictionary. The adapter maintains a mapping table between our Openfunds field IDs and Kurtosys property codes. This table is configured per client because different Kurtosys instances use different Data Dictionary configurations.
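A per-client mapping table lookup might look like the following sketch. OFST005010 (Fund Legal Name) comes from the text; the client keys and destination property codes are invented for illustration:

```typescript
// Sketch of a per-client property-code mapping table. Client names and
// destination codes are assumptions; each client's Kurtosys instance may
// use a different Data Dictionary configuration.
const propertyMaps: Record<string, Record<string, string>> = {
  clientA: { OFST005010: "fund_name" },
  clientB: { OFST005010: "fundLegalName" }, // different Data Dictionary config
};

function mapField(client: string, openfundsId: string): string {
  const code = propertyMaps[client]?.[openfundsId];
  if (!code) throw new Error(`No mapping for ${openfundsId} (client ${client})`);
  return code;
}
```

Failing loudly on a missing mapping is a deliberate choice here: silently dropping an unmapped field is the kind of bug that surfaces weeks later as a blank on a client's website.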
Upsert semantics. Kurtosys uses an upsert model - you send the full current state of an entity, and it creates or updates accordingly. The adapter needs to assemble a complete entity payload from our data store, not just send the fields that changed. This means tracking which fields belong to which entity type and building the payload accordingly.
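Assembling a full-state payload, rather than a delta, can be sketched like this. The field list, store shape, and second field ID are assumptions; the point is that every field the entity type owns gets pulled from the store on every delivery:

```typescript
// Sketch of assembling a complete entity payload for an upsert-style API.
// The owned-field list and store shape are illustrative assumptions;
// we send the full current state, never just the fields that changed.
const shareClassFields = ["OFST005010", "OFST010110"]; // fields owned by this entity type (assumed)

function buildUpsertPayload(
  clientCode: string,
  store: Record<string, string> // canonical field ID -> current value
): { clientCode: string; properties: Record<string, string> } {
  const properties: Record<string, string> = {};
  for (const fieldId of shareClassFields) {
    if (fieldId in store) properties[fieldId] = store[fieldId]; // full state, not a diff
  }
  return { clientCode, properties };
}
```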
The adapter is the only part of the system that knows the word "Kurtosys." Everything upstream just sees a delivery target with an ID.
The adapter interface
Every adapter implements the same interface. Simplified, it looks like this:
- translate(records) - takes canonical records and returns destination-formatted payloads
- deliver(payloads) - sends the payloads to the destination, handles auth and retries
- verify(deliveryResult) - confirms the delivery was accepted and logs the outcome
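As a hedged sketch, the three-method contract might be expressed like this in TypeScript. The type names, generics, and result shape are assumptions, not the production interface:

```typescript
// Sketch of the three-method adapter contract. Names and payload shapes
// are illustrative; only the translate/deliver/verify sequence comes
// from the description above.
interface DeliveryResult {
  accepted: number;
  rejected: number;
}

interface DeliveryAdapter<R, P> {
  translate(records: R[]): P[];
  deliver(payloads: P[]): Promise<DeliveryResult>;
  verify(result: DeliveryResult): Promise<boolean>;
}

// The orchestrator sees only the contract, never the destination:
async function runDelivery<R, P>(
  adapter: DeliveryAdapter<R, P>,
  records: R[]
): Promise<boolean> {
  const payloads = adapter.translate(records);
  const result = await adapter.deliver(payloads);
  return adapter.verify(result);
}
```

Whether `translate` emits JSON for a REST API or rows for a CSV is invisible at this level, which is exactly the point.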
The pipeline orchestrator calls these methods in sequence. It doesn't know what translate does internally - whether it's reformatting JSON for a REST API or generating a CSV for an SFTP drop. That's the adapter's problem.
This means adding a new destination is a matter of writing a new adapter. The pipeline, the UI, the scheduling - all reused. We've onboarded new delivery targets in under a day because the boundary is clean.
Where it gets tricky
The adapter pattern sounds clean on a whiteboard. In reality, destinations have quirks that test the abstraction:
- Rate limits. Kurtosys has API rate limits. The adapter needs to batch and throttle. An SFTP destination doesn't care - you drop one file. The delivery semantics are fundamentally different.
- Partial failures. You send 500 share classes and 3 fail validation on the destination side. The adapter needs to report which 3 failed and why, without blocking the other 497.
- Data Dictionary dependencies. Some destinations require that property codes exist in their system before you can send data. The adapter might need to create metadata entries before delivering actual data.
- Idempotency. If a delivery fails halfway through and you retry, you need to ensure you don't create duplicates. Kurtosys handles this with upsert semantics, but not every destination does.
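The partial-failure case above can be sketched with a result type that separates successes from failures instead of aborting the batch. The item and error shapes are illustrative assumptions:

```typescript
// Sketch of partial-failure reporting: rejected items are collected with
// their reasons while the rest of the batch proceeds. The attempt()
// callback's string-or-null convention is an assumption for illustration.
interface PartialResult<T> {
  succeeded: T[];
  failed: { item: T; reason: string }[];
}

function partition<T>(
  items: T[],
  attempt: (item: T) => string | null // null = accepted, string = rejection reason
): PartialResult<T> {
  const result: PartialResult<T> = { succeeded: [], failed: [] };
  for (const item of items) {
    const reason = attempt(item);
    if (reason === null) result.succeeded.push(item);
    else result.failed.push({ item, reason });
  }
  return result;
}
```

With a shape like this, "3 of 500 failed" becomes a report the adapter can log and surface, rather than an exception that blocks the other 497.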
Each of these quirks lives inside the adapter. The pipeline just sees: translate, deliver, verify. Success or failure. Move on to the next target.
The payoff
Six months ago, a client asked us to deliver to a new platform we'd never integrated with. They sent us the API docs on a Monday. By Wednesday, the adapter was live and pushing data. The core pipeline didn't change. The UI didn't change. The validation rules didn't change.
That's the payoff of clean boundaries. The boring, old-school software engineering kind of payoff. No AI involved. Just interfaces, contracts, and the discipline to keep destination-specific logic out of the core.
Every time I'm tempted to add a quick if (destination === 'kurtosys') in the pipeline code, I remember the enterprise platform where we had 47 of those. Each one was a landmine. The adapter pattern exists so you never plant them in the first place.