Written by Technical Team | Last updated 06.03.2026 | 16 minute read
For many NHS trusts, the hardest part of digital transformation is not buying another application. It is making existing operational systems work together well enough to support bed management, elective recovery, discharge coordination, outpatient flow, theatre utilisation, diagnostics, patient tracking and performance management without forcing staff to chase updates across multiple screens and disconnected datasets. The NHS Federated Data Platform (FDP) changes that conversation because it is designed to sit above operational systems rather than replace them, bringing data together in a secure, governed environment where trusts can build, adopt and run operational and analytical products with a common structure and access model.
That distinction matters. A trust’s electronic patient record, patient administration system, waiting list tools, radiology systems, pathology platforms, workforce systems and local departmental applications remain the systems of record for their operational domains. The FDP becomes the integration and decision-support layer that turns fragmented operational signals into usable workflows, dashboards, alerts and shared views. In practice, that means the quality of an FDP deployment depends far less on the front-end visual layer than on the integration architecture underneath it. If the connections into operational systems are brittle, incomplete, delayed or poorly governed, the outputs will never be trusted by clinicians, operational teams or executives.
This is why trusts need to think about FDP integration as an enterprise architecture programme rather than a reporting project. The challenge is to create a repeatable pattern for ingesting data from many systems, standardising meaning across them, preserving lineage, applying the right privacy and access controls, and then exposing that information to products, processes and users in a way that supports action. Trusts that approach FDP integration in this structured way give themselves something much more valuable than a single use case: they create an operational data foundation that can support multiple pathways over time without rebuilding the plumbing for every new requirement.
The strongest way to understand NHS Federated Data Platform integration is to picture three layers working together. The first layer is the trust’s existing operational estate: EPR, PAS, scheduling, diagnostics, theatres, bed state, community systems, workforce tools, inventory platforms and other departmental applications. The second layer is the integration and data platform layer, where data is brought in, versioned, quality checked, standardised and governed. The third layer is the product layer, where operational applications, dashboards, coordination tools and workflow-support solutions use that curated data to help staff make better decisions.
Within this model, a trust should avoid the common mistake of connecting every use case directly to source systems in an ad hoc fashion. A direct-to-source shortcut can be tempting when teams want fast delivery, but it usually creates a new silo rather than a reusable capability. One dashboard pulls live ADT messages one way, another solution uses a nightly extract from the PAS, a third team builds a spreadsheet upload from a theatre system, and soon the trust has several competing versions of the truth. The real architectural value of FDP comes from creating a managed middle layer where data is reconciled once, defined once and then reused many times.
This middle layer should be treated as a product in its own right. It needs clear ownership, release discipline, monitoring, change control and technical standards. Trusts should define which systems are authoritative for which fields, how frequently each source is synchronised, how corrections and late-arriving data are handled, what survivorship rules apply when two systems disagree, and what the expected latency is for each operational scenario. A discharge command centre may need near-real-time movement and task status; elective planning may tolerate refresh cycles measured in hours. Architecture that ignores these distinctions usually fails in production because it applies the same integration pattern to every workload.
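One way to make these latency distinctions explicit is to record a simple freshness contract per workload, so that the integration pattern is chosen against a stated requirement rather than habit. The workload names and thresholds below are illustrative assumptions for the sketch, not FDP configuration:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class LatencyContract:
    """Expected freshness for one operational workload (illustrative)."""
    workload: str
    max_staleness: timedelta
    pattern: str  # preferred integration pattern for this workload

# Hypothetical contracts: a discharge command centre needs near-real-time
# movement data, while elective planning tolerates refresh cycles in hours.
CONTRACTS = [
    LatencyContract("discharge_command_centre", timedelta(minutes=5), "event-driven"),
    LatencyContract("bed_state_dashboard", timedelta(minutes=15), "event-driven"),
    LatencyContract("elective_planning", timedelta(hours=6), "batch"),
    LatencyContract("workforce_reporting", timedelta(hours=24), "batch"),
]

def is_fresh(contract: LatencyContract, age: timedelta) -> bool:
    """True if data of the given age still meets the workload's contract."""
    return age <= contract.max_staleness
```

Writing contracts down like this also gives observability something concrete to alert against when a feed starts running late.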
The local FDP instance also changes governance boundaries in an important way. Trusts need to design integration around local control of data, local responsibility for approved use, and local accountability for who can see what. That means architecture cannot be separated from information governance, identity, role design and auditability. In mature deployments, access decisions are not bolted on at the end; they shape the data model, the product design and even the way pipelines are built. A ward coordinator, a theatre manager, a discharge team and a system control centre may all need overlapping but different views of the same underlying patient journey. Good integration architecture makes those variations manageable without duplicating the whole data estate.
One of the most useful architectural mindsets for trusts is to treat FDP as a federated operational intelligence platform. “Federated” does not simply mean multiple organisations exist in the same broad ecosystem. It means the architecture is expected to support local instances, local decisions, common standards, shared patterns and selective reuse. That has direct implications for how trusts should design connectors, mappings, data contracts and application components. If a trust builds in a way that only works for one local product, it misses the wider opportunity. If it builds against a standardised and reusable architecture, it becomes much easier to adopt solutions from elsewhere, contribute local innovations into broader catalogues and avoid paying the integration cost again and again.
The canonical data model is one of the most important concepts in FDP integration, and also one of the most misunderstood. Some teams hear “canonical model” and assume it means a giant enterprise schema that every source system must be forced into on day one. That is usually the wrong approach. In practice, the canonical model is most useful when it acts as the shared business language between products, data pipelines and operational users. It provides the standard object definitions, identifiers, relationships and properties that make cross-system data usable across the platform, while still allowing the trust to preserve the detailed, source-specific structures it needs for traceability and specialist workflows.
For a trust connecting operational systems, this means integration should generally follow a layered mapping pattern. Raw data should land with minimal transformation, preserving source fidelity and the ability to reprocess. A conformed layer should then harmonise keys, timestamps, coding systems, event semantics and organisational context. Only after that should data be promoted into canonical structures used by products and reusable components. This layered design is what prevents a trust from hard-coding business logic into every ingestion job and then struggling to unwind it later when source systems change.
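The layered mapping pattern can be sketched as three small functions: raw landing that preserves source fidelity, per-source conformance rules, and promotion into a canonical object. The field names and source systems below are illustrative assumptions, not real FDP or supplier schemas:

```python
from datetime import datetime, timezone

def land_raw(source_record: dict, source_system: str) -> dict:
    """Raw layer: preserve source fidelity, add only provenance metadata."""
    return {
        "payload": dict(source_record),   # untouched source fields, reprocessable
        "source_system": source_system,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def conform(raw: dict) -> dict:
    """Conformed layer: harmonise keys and timestamps per source system."""
    p = raw["payload"]
    if raw["source_system"] == "pas":
        return {"nhs_number": p["NHSNo"], "admitted_at": p["AdmDtTm"]}
    if raw["source_system"] == "epr":
        return {"nhs_number": p["patient_id"], "admitted_at": p["admission_time"]}
    raise ValueError(f"no conformance rules for {raw['source_system']}")

def to_canonical(conformed: dict) -> dict:
    """Canonical layer: the stable object shape that products consume."""
    return {"type": "Admission", **conformed}
```

Because products only ever see the output of `to_canonical`, a renamed field in the PAS payload is absorbed inside `conform` rather than rippling into every dashboard.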
The practical value becomes obvious when you look at the kinds of objects that recur across trust operations. Patients, appointments, encounters, admissions, referrals, observations, procedures and ward stays are not niche concepts. They appear in multiple systems, with different identifiers, different states and different timestamps. A trust may have one identifier for a booked appointment in a PAS, another in a clinic system and a third in a patient communications platform. Without a canonical strategy, every downstream product has to decide for itself how those records relate. With a canonical strategy, the trust resolves that once and makes the result reusable.
There is also a deeper architectural benefit. The canonical model creates separation between source volatility and product stability. Source systems change more often than operational users want to hear about. Interfaces get modified, fields are renamed, local codes are added, supplier upgrades alter payloads, and teams inevitably ask for additional enrichment. If product logic is written directly against source-specific schemas, every change in the upstream estate becomes an expensive downstream rewrite. When products consume a stable canonical layer instead, most source-side changes can be absorbed in the integration and conformance layers without breaking the user-facing solution.
Trusts should, however, resist the temptation to over-model. The right approach is not to model every possible healthcare concept before delivering value. It is to standardise the operational entities and relationships that are needed for current and likely future workflows, then expand deliberately. This is especially important for operational use cases, where timing, status and accountability often matter more than encyclopaedic clinical detail. For patient flow, for example, the trust usually needs dependable representations of occupancy, ward movement, estimated discharge, task completion, referral progression and handoff status long before it needs every specialist attribute that exists in a source record.
A strong canonical strategy in FDP integration usually includes the following design habits:

- Land raw data with minimal transformation, preserving source fidelity and the ability to reprocess.
- Harmonise identifiers, timestamps, coding systems and event semantics in a conformed layer before promoting anything to canonical structures.
- Define canonical objects around the operational entities products actually need, not every possible healthcare concept.
- Make survivorship rules explicit, so it is clear which system wins when two sources disagree on a field.
- Preserve lineage from every canonical object back to its source records for traceability and audit.
- Expand the model deliberately, pathway by pathway, rather than attempting an enterprise-wide schema on day one.
Another critical point is that a canonical model is not only for analytics. In FDP, it also supports operational applications and workflows. That means the model must be understandable to engineers and operational subject matter experts alike. If bed managers, divisional analysts and clinical leads cannot recognise the meaning of a canonical object, the trust has probably built something too abstract. The most effective models balance technical normalisation with business recognisability. They make it easy to ask questions such as which patients are medically optimised but still awaiting next steps, which theatre sessions are underutilised relative to plan, which appointments are at risk because of unresolved pre-operative steps, or where diagnostic bottlenecks are creating downstream delay.
Once the target architecture and canonical approach are clear, trusts need to choose the right integration patterns for each source system. There is no single connector strategy that suits every operational application. The best trusts build a small set of standard patterns and apply them deliberately based on source characteristics, network position, latency needs, data volume, operational risk and ownership model.
The first pattern is direct connection for systems that can securely accept inbound connectivity from the platform or expose standard cloud or API interfaces. This is usually the cleanest option for modern SaaS applications, standards-based APIs and well-managed services where network and credential controls can be handled centrally. Direct connection works best when the source system is stable, its interface is well documented, and the trust wants a low-friction, supportable path into FDP without extra middleware complexity.
The second pattern is private network connectivity via a local agent or proxy. This is often the right answer for on-premise systems, legacy applications and environments that cannot accept direct inbound access from the platform. In these cases, the trust should think carefully about where the connectivity agent sits, what network zones it can reach, how it is monitored, and which specific systems it is authorised to proxy. This pattern is usually essential for departmental systems that remain buried inside the trust estate but still provide operationally important signals.
The third pattern is file- or batch-based ingestion. It is less glamorous than API integration, but in many trusts it remains highly relevant. Not every source system is capable of safe near-real-time exchange, and not every operational question requires it. Scheduled extracts, secure file drops and managed batch loads can be perfectly appropriate for workforce, finance, historical activity, static reference data and some planning scenarios. The key is not to reject batch on principle, but to use it intentionally where its latency profile matches the business need.
The fourth pattern is event-driven messaging. This is especially useful where operational workflows depend on state changes rather than whole-record refreshes. Admission, discharge, bed transfer, appointment change and referral progression events can often be captured more efficiently through messages than repeated polling. Event-driven design can reduce load on source systems and improve timeliness, but only if the trust invests in idempotency, replay handling, sequencing logic and resilience. A badly implemented event feed can create more confusion than a dependable scheduled load.
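The idempotency and sequencing concerns above can be sketched as a small event store that drops exact replays and ignores stale, out-of-order events. It assumes each event carries a unique ID and a per-patient sequence number, which is an assumption for the sketch rather than a statement about any particular messaging standard:

```python
class EventStore:
    """Minimal idempotent, order-aware consumer of ADT-style events."""

    def __init__(self):
        self.seen_ids = set()   # idempotency: drop replayed events
        self.last_seq = {}      # sequencing: per-patient high-water mark
        self.state = {}         # latest known status per patient

    def apply(self, event: dict) -> bool:
        """Apply one event; return True only if it changed state."""
        if event["event_id"] in self.seen_ids:
            return False        # exact replay: safe to ignore
        self.seen_ids.add(event["event_id"])
        patient = event["patient_id"]
        if event["seq"] <= self.last_seq.get(patient, -1):
            return False        # stale or out-of-order: state is already newer
        self.last_seq[patient] = event["seq"]
        self.state[patient] = event["status"]
        return True
```

Without this kind of guard, a replayed or delayed admission message can silently overwrite a newer discharge status, which is exactly the confusion the paragraph above warns about.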
The fifth pattern is controlled writeback or outbound integration. This is where architecture becomes more demanding, because the trust is no longer only reading from source systems. It is sending outputs, actions or updates back into them, or into adjacent workflow tools. Many operational use cases eventually need this step. A command centre that only displays problems but cannot trigger actions, update task states or pass outcomes into operational systems will always hit a limit. Trusts should treat writeback as a separate design decision with tighter governance, clearer accountability and more rigorous testing than read-only synchronisation.
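One way to give writeback the tighter governance described above is to gate every outbound update against an explicit allow-list and audit every attempt, permitted or not. The allow-list shape and audit structure here are assumptions for illustration, not FDP features:

```python
# Only (target system, field) pairs that have been through governance review
# may be written back. Everything else is refused but still audited.
ALLOWED_WRITEBACK = {("task_tracker", "task_status")}
AUDIT_LOG = []

def write_back(target: str, field: str, entity_id: str, value: str, actor: str) -> bool:
    """Attempt an outbound update; return True only if it is approved."""
    permitted = (target, field) in ALLOWED_WRITEBACK
    AUDIT_LOG.append({
        "target": target, "field": field, "entity": entity_id,
        "value": value, "actor": actor, "permitted": permitted,
    })
    return permitted
```

The design choice worth noting is that refused attempts are logged too, so governance reviews can see what teams are trying to do, not just what they are allowed to do.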
A practical way to think about pattern selection is to align it with source categories:

- Modern SaaS applications and standards-based APIs: direct connection, with network and credential controls handled centrally.
- On-premise and legacy systems that cannot accept inbound access: private network connectivity via a local agent or proxy.
- Workforce, finance, reference data and historical activity: scheduled file- or batch-based ingestion, matched deliberately to the latency the business need actually requires.
- Admissions, discharges, bed transfers, appointment changes and referral progression: event-driven messaging, with investment in idempotency, replay handling and sequencing.
- Products that must trigger actions or update task states in other systems: controlled writeback, governed and tested more rigorously than any read-only feed.
Trusts should also build integration around data quality expectations, not just transport mechanics. Too many programmes ask whether a system can connect, but not whether its data is good enough to support action. Before onboarding a source into FDP, the trust should define expected completeness, timeliness, uniqueness, coding consistency and reconciliation rules. For example, if an outpatient scheduling system allows multiple local interpretations of cancellation reasons, then the integration design must include a standardisation step before that data is used in operational products. If ward movement timestamps are often corrected after the event, then downstream products need logic that distinguishes provisional from settled states.
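These expectations can be expressed as executable checks that run before a source is promoted into products. The required fields, completeness logic and cancellation-reason mapping below are illustrative assumptions about one hypothetical outpatient feed:

```python
# Assumed local-to-standard mapping for free-text cancellation reasons.
CANCELLATION_MAP = {
    "pt cancelled": "PATIENT_CANCELLED",
    "patient cx": "PATIENT_CANCELLED",
    "hosp cancelled": "PROVIDER_CANCELLED",
}

def check_completeness(records, required_fields):
    """Fraction of records with every required field populated."""
    if not records:
        return 0.0
    ok = sum(all(r.get(f) for f in required_fields) for r in records)
    return ok / len(records)

def standardise_cancellation(reason: str) -> str:
    """Map locally coded cancellation reasons onto one standard code set.

    Anything unrecognised is flagged rather than guessed at, so the gap
    is visible to data quality owners instead of hidden in products.
    """
    return CANCELLATION_MAP.get(reason.strip().lower(), "UNMAPPED")
```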
Another valuable pattern is incremental domain onboarding. Rather than connecting every system at once, trusts should prioritise sources according to pathway value and reusability. A common sequencing model is to start with identity, activity and location data, then add scheduling and referral context, then enrich with diagnostics, theatre, discharge tasks, workforce and capacity signals. This delivers early operational value while still building towards a broader trust-wide architecture. It also reduces the risk of over-engineering the first release before teams have learned how local workflows really behave.
Trusts should not ignore semantic integration, either. Transporting data from one system to another is not the same as integrating it. Real integration means aligning meaning across systems: what counts as an active referral, what marks a completed appointment, how discharge readiness is represented, whether a ward stay starts at physical movement or administrative confirmation, and which procedure state is operationally meaningful. These are architectural decisions disguised as business rules. If they are left unresolved, every dashboard and workflow layer will reinvent them in inconsistent ways.
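Codifying these business rules once, in a shared module that every product imports, is what stops each dashboard reinventing them. The states and the ward-stay policy below are assumptions chosen to illustrate the pattern:

```python
# One shared definition of "active referral", instead of one per dashboard.
ACTIVE_REFERRAL_STATES = {"received", "triaged", "awaiting_appointment"}

def is_active_referral(referral: dict) -> bool:
    """Single, shared semantic rule applied identically by every product."""
    return referral["state"] in ACTIVE_REFERRAL_STATES

def ward_stay_start(movement_time, admin_confirm_time, policy="physical_movement"):
    """Make the 'when does a ward stay start' decision explicit.

    Whether a stay begins at physical movement or administrative
    confirmation becomes a named policy choice, not an accident of
    whichever timestamp a developer happened to pick.
    """
    return movement_time if policy == "physical_movement" else admin_confirm_time
```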
The most successful FDP integrations are not the ones with the flashiest user interface. They are the ones that continue to work after source upgrades, governance reviews, pathway redesigns and staff changes. That durability comes from operating model choices as much as from technical design. Trusts need a delivery and governance pattern that treats integration assets as long-lived products, not one-off project outputs.
At a minimum, every trust should establish clear ownership for source onboarding, canonical mapping, data quality rules, access controls, test strategy, release management and support. This does not mean centralising every decision in a remote architecture board. It means being explicit about who is accountable when data arrives late, when a source interface changes, when a product needs a new field, when role-based access has to be amended, or when two systems disagree. Without that clarity, operational teams lose trust quickly because no one can explain where a metric came from or why a workflow suddenly changed behaviour.
Information governance should be designed into the delivery lifecycle from the start. Local trusts remain responsible for lawful processing in their own instance, which means integration work should be accompanied by clear product purposes, documented data flows, local DPIA thinking, role design and transparency materials where needed. In architecture terms, this means each new connector should have a known purpose, a defined dataset scope, a named owner, an approved access pattern and an auditable route from source to product. Governance is not a brake on delivery when done well; it is what makes reuse and scaling possible because other teams can adopt a pattern with confidence.
Trusts should also invest in version control and release discipline for data pipelines, mappings and products. FDP’s development model supports branching, transforms, datasets and controlled promotion, and trusts should use those capabilities to avoid making integration changes directly in live workflows. A change to admission status logic, referral mapping or procedure classification can have immediate operational consequences. Strong release management means changes are developed in isolation, validated against realistic data, reviewed by technical and operational stakeholders, and then promoted through environments with clear rollback options.
Observability is another non-negotiable design pattern. Integration teams should monitor more than job success or failure. They need visibility into record counts, freshness thresholds, schema drift, duplicate rates, delayed events, rejected records, writeback outcomes and access anomalies. In an operational setting, a pipeline that completes technically but delivers incomplete bed-state updates or stale referral statuses is still a failed integration. Trusts that build meaningful operational observability can catch these issues before frontline teams discover them the hard way.
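A minimal sketch of that richer observability might check each feed run for volume, duplicates and staleness rather than only success or failure. The thresholds and record shape are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def check_feed(records, expected_min, max_age, now=None):
    """Return a list of issues for one feed run; an empty list means healthy.

    A run can complete 'successfully' and still fail these checks, which is
    exactly the failure mode that pure job-status monitoring misses.
    """
    now = now or datetime.now(timezone.utc)
    issues = []
    if len(records) < expected_min:
        issues.append(f"low_volume: {len(records)} < {expected_min}")
    ids = [r["id"] for r in records]
    if len(ids) != len(set(ids)):
        issues.append("duplicates_detected")
    if records:
        newest = max(r["updated_at"] for r in records)
        if now - newest > max_age:
            issues.append("stale_data")
    return issues
```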
A scalable trust operating model usually includes:

- Named owners for source onboarding, canonical mappings, data quality rules, access controls and ongoing support.
- Information governance built into the delivery lifecycle, with a documented purpose, dataset scope and approved access pattern for every connector.
- Version control and controlled promotion for pipelines, mappings and products, with validation against realistic data and clear rollback options.
- Operational observability covering freshness, completeness, schema drift, duplicates and writeback outcomes, not just job success or failure.
- Multidisciplinary delivery teams that put architects, data engineers, operational SMEs and IG colleagues around the same problem.
There is also a cultural dimension. FDP integration works best when trusts stop treating operational, analytical and engineering teams as separate tribes. A patient flow product is not purely operational because it relies on data engineering quality. It is not purely technical because its usefulness depends on how bed managers and ward teams actually work. It is not purely analytical because operational users need actionability, not retrospective commentary. The best delivery pattern is a multidisciplinary one: architects, data engineers, product leads, operational SMEs, IG colleagues and service owners working from the same shared definition of the problem.
Finally, trusts should design every integration with future reuse in mind. That is the architecture habit that aligns most strongly with the wider FDP vision. A connector to PAS should not be built only for one dashboard. A ward stay model should not be defined only for one site command centre. A referral object should not be shaped only for one waiting list product. When trusts create reusable data assets, stable canonical mappings and disciplined delivery pipelines, they place themselves in a far better position to benefit from shared solutions, contribute local innovations outward and scale operational improvements without repeatedly re-laying the foundations.
Is your team looking for help with NHS Federated Data Platform integration? Click the button below.
Get in touch