Written by Technical Team | Last updated 17.10.2025 | 20-minute read
Interoperability in the NHS is not a single product, interface or dataset—it is a capability that emerges when data models, clinical vocabularies, transport protocols, governance and operational workflows align. In practical terms, it means the blood pressure measured in a district nursing visit can be safely re-used by a hospital clinician, a GP, or a population health analyst without re-keying or ambiguity. The central challenge is that NHS organisations—acute, community, mental health, primary care, ambulance services and social care partners—operate different systems, at different levels of digital maturity, often procured years apart. Designing data models that traverse this landscape requires a disciplined approach to semantics, structure and change management that respects both the clinical meaning and the lived realities of care delivery.
A good starting point is to recognise that interoperability has layers. Syntactic interoperability ensures systems can technically exchange payloads; semantic interoperability ensures the recipient interprets the payload exactly as intended; and process interoperability ensures the data is timely, trusted and usable within the recipient’s workflows. Data modelling sits at the heart of all three layers. The shape of the data—and the codes chosen to define meaning—either makes downstream integration straightforward and safe, or turns every hand-off into a bespoke mapping exercise. For NHS programmes spanning integrated care systems (ICSs), getting the data model right early saves months of interface rework and years of operational friction.
Interoperability is also a socio-technical endeavour. Governance, privacy and clinical safety must be designed in, not bolted on. A model that is perfect on paper but impossible for clinicians to capture accurately at the point of care will fail in practice. Conversely, a model that mirrors local forms too closely without considering wider standards will fragment the ecosystem. The balancing act is to adopt national and international standards wherever possible, constrain them to UK clinical practice where necessary, and then implement them in ways that fit real clinical pathways across NHS providers.
The contemporary NHS interoperability stack leans heavily on a small set of mature, widely adopted standards. At the structural layer, HL7 FHIR is the dominant pattern for APIs and messages. FHIR’s resource-based approach—Patient, Practitioner, Observation, Condition, MedicationRequest and so on—provides a consistent, well-documented surface for representing clinical concepts. For the UK, the FHIR UK Core profiles and extensions reduce ambiguity by constraining choice: where the base specification allows multiple ways to represent the same thing, the UK Core points to a single, nationally consistent approach. The practical implication for data modelling is that local schemas should align to UK Core profiles as their first principle, introducing deviations only where there is a demonstrable clinical need.
Alongside the structural model sits the semantic model, and here SNOMED CT is the general-purpose clinical terminology of record. SNOMED CT enables precise, machine-interpretable meaning for problems, findings, procedures and observable entities. When you record “Type 2 diabetes mellitus” using a SNOMED CT concept, that meaning travels unambiguously between systems, enabling decisions, analytics and patient-facing services to behave consistently. For medicines, NHS systems typically rely on the NHS Dictionary of Medicines and Devices (dm+d) to standardise products and ingredients, and ICD-10 and OPCS-4 still play important roles for secondary uses such as coding and commissioning. A robust data model will define which vocabulary applies at each field, and how to maintain cross-maps where multiple vocabularies intersect in downstream flows.
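To make the structural and semantic layers concrete, here is a minimal sketch of a SNOMED CT-coded FHIR Condition represented as a plain Python dict. The UK Core profile URL and the synthetic NHS number are illustrative; real payloads should be validated against the published UK Core profiles and your own value sets.

```python
# Sketch of a UK Core-aligned FHIR Condition carrying a SNOMED CT code.
# The profile URL and patient identifier below are illustrative examples.
condition = {
    "resourceType": "Condition",
    "meta": {
        # UK Core profile assertion (check the current published canonical URL)
        "profile": ["https://fhir.hl7.org.uk/StructureDefinition/UKCore-Condition"]
    },
    "clinicalStatus": {
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/condition-clinical",
            "code": "active",
        }]
    },
    "code": {
        # SNOMED CT carries the machine-interpretable clinical meaning
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "44054006",
            "display": "Diabetes mellitus type 2",
        }],
        "text": "Type 2 diabetes mellitus",
    },
    "subject": {
        "identifier": {
            "system": "https://fhir.nhs.uk/Id/nhs-number",
            "value": "9434765919",  # synthetic test NHS number
        }
    },
}
```

Because the meaning lives in the `code.coding` entry rather than the free-text `text` field, any receiving system that understands SNOMED CT interprets the record identically.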
Information models published by the Professional Record Standards Body (PRSB) provide clinically validated, pathway-oriented guidance on what information should be recorded and shared for specific use cases—discharge summaries, outpatient letters, emergency care discharge, mental health crisis plans, social care assessments and more. These models are not mere lists; they spell out clinical intent, cardinalities and context, making them ideal source material for designing and validating FHIR profiles or openEHR templates. When you anchor your design to PRSB information models, you inherit the clinical consensus that underpins safe, useful interoperability across care settings.
It is tempting to think of standards as a constraint, but they are in fact an accelerant. Re-using existing profiles, value sets and examples means less time inventing and more time testing with clinicians. Standards also de-risk procurement: when providers ask suppliers to support UK Core profiles and PRSB-aligned payloads, the resulting solutions are far more portable. Moreover, adopting the same semantic choices across providers enables data quality initiatives and decision-support logic to be shared, rather than rewritten for each site. In short, standards transform interoperability work from bespoke projects into repeatable patterns.
Finally, do not overlook the role of national services and patterns that sit alongside your data model. The NHS Spine, Personal Demographics Service (PDS), Summary Care Record (SCR), National Record Locator and GP Connect capabilities provide reference points for identity, demographics and cross-care data access. A coherent model will integrate with these services sensibly—using NHS numbers as the primary identifier for patients, employing ODS codes for organisations and sites, and aligning messaging constructs with existing national messaging where appropriate. These anchors reduce the risk that your model drifts into local idiosyncrasy.
Core structural standards to adopt
- HL7 FHIR as the pattern for APIs and messages, with resources such as Patient, Practitioner, Observation, Condition and MedicationRequest
- FHIR UK Core profiles and extensions to constrain choice to a single, nationally consistent representation
- PRSB information models as the clinically validated source material for profiles and templates
- National anchors: NHS number for patient identity, ODS codes for organisations and sites, and alignment with Spine, PDS, SCR, National Record Locator and GP Connect
Core semantic standards to embed
- SNOMED CT as the general-purpose clinical terminology for problems, findings, procedures and observable entities
- dm+d for medicines and devices
- ICD-10 and OPCS-4 for secondary uses such as clinical coding and commissioning
- Maintained cross-maps wherever multiple vocabularies intersect in downstream flows
A canonical data model is a pragmatic way to manage heterogeneity across Trusts, GP systems and community services without forcing an immediate supplier change. Rather than every system mapping directly to every other system—a combinatorial explosion—you define a normalised, shared model in the middle. Each source maps into the canonical, and each consumer maps out from it, significantly reducing the integration burden. For an NHS ICS, the canonical model should be the smallest viable set of profiles and value sets that meet common user needs across the pathway, strongly aligned with UK Core and PRSB outputs to avoid inventing local semantics.
Designing the canonical model starts with a candid inventory of data producers and consumers. Catalogue which systems create authoritative data for key domains—diagnoses, medications, allergies, test results, care plans, observations, referrals—and which services consume them for direct care, ICS coordination or secondary uses. For each domain, identify the minimum safe dataset and the precision of coding required. For example, allergies require structured representation of the substance, reaction and severity, with clear provenance and dates. Laboratory results need numeric values, reference ranges, units and methods. These requirements translate into specific FHIR profiles, extensions and value sets.
The hardest part of a canonical model is almost never the central schema itself; it is the inbound and outbound mappings. Legacy source systems might encode a blood pressure as a free-text note, an EAV (entity-attribute-value) store or a proprietary code. Some systems double-record data—e.g., a problem list entry and a diagnosis on a discharge summary—leading to potential duplication. Effective mapping requires repeatable patterns: deterministic rules where possible, and clinically led heuristics where necessary. The mapping logic should be versioned, testable and traceable, with automated data quality checks that flag when a source system starts sending unexpected values or structures after an upgrade.
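As an illustration of a deterministic, versioned mapping rule, the sketch below parses a legacy free-text blood pressure into a canonical structure and rejects anything unexpected so that data quality monitoring can flag the source system. The version tag, field names and plausibility bounds are assumptions for illustration.

```python
import re

MAPPING_VERSION = "bp-freetext-v1.2"  # illustrative version tag for traceability

def map_freetext_bp(note: str) -> dict:
    """Deterministically map a free-text blood pressure (e.g. 'BP 120/80')
    into a canonical structure. Raises ValueError on unexpected input so
    that automated quality checks can flag the sending system."""
    m = re.search(r"(\d{2,3})\s*/\s*(\d{2,3})", note)
    if not m:
        raise ValueError(f"Unmappable blood pressure text: {note!r}")
    systolic, diastolic = int(m.group(1)), int(m.group(2))
    # Plausibility guard: bounds here are illustrative, not clinical policy
    if not (40 <= diastolic < systolic <= 300):
        raise ValueError(f"Implausible BP values: {systolic}/{diastolic}")
    return {
        "systolic_mmHg": systolic,
        "diastolic_mmHg": diastolic,
        "mapping_version": MAPPING_VERSION,  # which rule produced this record
        "source_text": note,                 # provenance back to the original
    }
```

Keeping the mapping version and source text on every output record is what makes the transformation testable and traceable after a supplier upgrade changes the input format.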
Another powerful pattern is to treat identity and provenance as first-class citizens in your model. Patient identity should rely on NHS numbers validated against PDS, augmented with local hospital numbers for operational convenience. Practitioner identity benefits from including GMC, NMC or HCPC numbers where available, and organisation/site identity should use ODS codes. Every resource should carry provenance metadata indicating the source system, event time, author and transformation steps. This provenance enables clinical safety checks, supports information governance audits, and helps analysts reason about timeliness and trustworthiness when linking records across care settings.
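Validating NHS numbers before accepting them into the model is a simple, high-value check. The sketch below implements the standard Modulus 11 check-digit algorithm (weights 10 down to 2 on the first nine digits); it verifies format only, and should sit alongside a PDS trace, not replace it.

```python
def nhs_number_valid(nhs_number: str) -> bool:
    """Validate an NHS number using the standard Modulus 11 check digit.
    Weights 10..2 apply to the first nine digits; the tenth is the check."""
    digits = [int(c) for c in nhs_number if c.isdigit()]
    if len(digits) != 10:
        return False
    total = sum(d * w for d, w in zip(digits[:9], range(10, 1, -1)))
    check = 11 - (total % 11)
    if check == 11:
        check = 0
    if check == 10:  # 10 is never a valid check digit
        return False
    return check == digits[9]
```

A number that passes this check can still be wrong for the patient in front of you; the check digit catches transcription errors, while PDS validation confirms identity.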
Finally, anticipate change. Clinical knowledge evolves, suppliers update systems, and national standards are refined. Your canonical model must be versioned, with clear rules for deprecating fields and introducing new ones. Backwards compatibility matters for consumers, but so does preventing semantic drift. Employ semantic versioning for profiles and value sets, publish release notes consumable by both technical and clinical audiences, and maintain a change advisory process with clinical safety input. In this way, the canonical model becomes a living asset that keeps pace with frontline care rather than a static document that rapidly falls out of date.
Interoperability designs increasingly benefit from event-driven thinking. Care is an evolving story, not a static snapshot, and event streams reflect that reality. Modelling key transitions—admission, transfer, discharge; referral sent, referral accepted; medication prescribed, administered, discontinued; observation recorded, threshold breached—allows subscribing systems to respond in near real time. In FHIR, events can be represented using MessageHeader with domain payloads, or more commonly through RESTful notifications and Subscription mechanisms that push changes as they occur. For NHS providers operating on tight budgets and variable connectivity, event-driven designs can reduce polling load, improve timeliness and enable targeted updates rather than large, batch-style exchanges.
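The publish-subscribe shape of this pattern can be sketched in a few lines. This is an in-process toy to show the shape only; a real deployment would use FHIR Subscriptions or a message broker, and the event type names here are assumptions.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process sketch of event-driven care transitions.
    Real systems would use FHIR Subscription resources or a broker."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        # e.g. a virtual ward service subscribing to discharge events
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Push the change to every subscriber as it occurs, instead of
        # each consumer polling for updates
        for handler in self._subscribers[event_type]:
            handler(payload)
```

The key design point survives the simplification: consumers express interest once, and the producer pushes targeted updates, rather than every consumer repeatedly pulling large batch extracts.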
APIs remain the principal integration surface for point-of-care systems, portals and apps. High-quality APIs exhibit predictable resource paths, consistent error handling and explicit versioning strategies. In NHS contexts, multi-tenant concerns matter: an ICS-level service may host data for multiple providers, each with its own legal responsibilities. Your API should encode tenancy boundaries cleanly, typically through ODS codes in scopes or request headers, and should align authorisation scopes with clinical roles to ensure the minimum necessary data is exposed for a given purpose. Rate limiting protects shared infrastructure without penalising critical clinical flows; for example, clinical alerting endpoints may require whitelisting to avoid throttling at exactly the wrong moment.
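A tenancy check of this kind can be sketched as follows. The header name, scope format and ODS codes are illustrative assumptions, not a national convention; the point is that the organisation boundary is checked explicitly on every request.

```python
def authorise_request(headers: dict, token_scopes: set[str]) -> bool:
    """Sketch of an ICS tenancy check: the ODS organisation code requested
    (header name is an assumption for illustration) must be covered by the
    caller's authorisation scopes."""
    ods_code = headers.get("X-ODS-Code")
    if not ods_code:
        return False  # no tenancy context: refuse rather than guess
    # Scope format "org:<ODS>:read" is a hypothetical convention
    return f"org:{ods_code}:read" in token_scopes
```

Encoding the tenancy boundary in scopes means an ICS-level service can host data for several providers while each request is still checked against one legal data controller.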
Data quality is the quiet determinant of success. A beautifully designed model will not save an ecosystem flooded with incomplete or incorrectly coded records. The best strategy merges automated and human feedback loops. Automated validators check structural conformity (does this resource match the profile?), semantic validity (is this SNOMED code permitted in this value set?) and business-rule logic (can a discharge date precede the admission date?). On top of this, clinical dashboards surface quality issues that matter to frontline teams: missing allergy status, free-text diagnoses without codes, observations without units, or maternity records with implausible gestational ages. When clinicians see data quality reflected in their own workflow and outcomes, they engage; when quality is framed as an IT-only metric, it languishes.
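The three layers of automated checking described above can be sketched as a single validator that accumulates issues rather than failing fast, so a dashboard can show everything wrong with a record at once. The field names and the value-set codes are placeholders for illustration.

```python
from datetime import date

# Illustrative value set: codes here are placeholders, not a real NHS set
ALLOWED_ALLERGY_CODES = {"91936005", "294505008"}

def validate_discharge(record: dict) -> list[str]:
    """Layered checks: structural, semantic, then business rules.
    Returns all issues found so quality dashboards can report them together."""
    issues = []
    # 1. Structural: does the record carry its mandatory fields?
    for field in ("admission_date", "discharge_date", "allergy_code"):
        if field not in record:
            issues.append(f"missing field: {field}")
    # 2. Semantic: is the code permitted in this value set?
    if record.get("allergy_code") not in ALLOWED_ALLERGY_CODES:
        issues.append("allergy_code not in permitted value set")
    # 3. Business rule: a discharge cannot precede the admission
    adm, dis = record.get("admission_date"), record.get("discharge_date")
    if adm and dis and dis < adm:
        issues.append("discharge_date precedes admission_date")
    return issues
```

Returning a list rather than raising on the first problem is what lets the same logic feed both a hard API-level reject and a softer clinical quality dashboard.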
Event streaming and analytics platforms offer complementary capabilities to your API estate. A streaming backbone—using, for instance, a robust publish-subscribe technology—can carry clinically significant notifications to downstream consumers such as care co-ordination hubs, virtual wards or population health tools. These streams should carry references or minimal payloads that allow authorised consumers to fetch the full record via API, maintaining a clear separation between signal and substance. For batch analytics and longitudinal views, periodic snapshots into a trusted analytics platform remain valuable, but should be treated as derivatives; the point-of-care truth stays with the source systems and the canonical model that normalises them.
Testing at scale is the final element of the technical foundation. Interoperability fails not in pilot environments, but under the stress of “real” life—when clinic volumes spike, when the ambulance service sends bursts of updates during a major incident, or when a supplier deploys a patch on a Friday evening. Build synthetic load tests that mimic these patterns. Include data with edge cases: patients with no fixed address, newborns yet to receive an NHS number, people with identical names and dates of birth, extremely long medication histories, and records where consent flags vary across contexts. Only when your data model and infrastructure behave predictably under such conditions can you claim resilience.
Interoperability across NHS organisations is inseparable from information governance and clinical safety. The model must embody a lawful basis for processing, transparent consent and objection mechanisms, and calibrated minimisation so that only data required for the purpose is shared. The golden rule is clarity: every field in your model should be defensible in clinical and legal terms—why it is needed, under what conditions, and who can see it. Access control should combine role-based and attribute-based approaches: a clinician’s role grants general privileges, while patient context, care relationship and consent indicators fine-tune what is actually accessible in a given session.
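The combination of role-based and attribute-based control can be sketched as a single decision function: the role grants a baseline, and attributes of the session (care relationship, consent, record sensitivity) narrow it. The role names and the sensitivity rule are assumptions for illustration, not policy.

```python
def can_view(user_role: str, has_care_relationship: bool,
             consent_ok: bool, record_is_sensitive: bool) -> bool:
    """Sketch of RBAC + ABAC: role grants general privilege; session
    attributes fine-tune what is actually accessible. Roles and rules
    here are illustrative placeholders."""
    # RBAC baseline: only recognised clinical roles get any access
    if user_role not in {"clinician", "gp"}:
        return False
    # ABAC refinement: legitimate relationship and consent are required
    if not has_care_relationship or not consent_ok:
        return False
    # Hypothetical rule: sensitive entries visible only to the treating clinician role
    if record_is_sensitive and user_role != "clinician":
        return False
    return True
```

Keeping the decision in one auditable function (or policy engine) also makes the "who could have seen what" question answerable during an IG review.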
Clinical safety management should be embedded from the outset. Treat your data model as a safety-related system: conduct hazard identification, assess the risks of mis-mapping or misinterpretation, and document mitigations. Where mappings are heuristic rather than deterministic, ensure the user interface in the consuming system renders the uncertainty appropriately—for example, requiring an explicit clinician confirmation before filing an inbound diagnosis to a problem list. Incorporate safety cases and test evidence into your change control so that when a field is re-coded, a value set updated or an event introduced, the impact on clinical risk is evaluated before deployment.
Operational governance completes the picture. An ICS needs a multi-disciplinary body to own the canonical model, the mappings and the release process. Representation should include clinical leaders from acute, primary and community care; information governance; digital and data architects; supplier partners; and citizen voices where patient-facing services are in scope. This group sets priorities, arbitrates disputes (for example, whether to mandate structured smoking status in a given pathway), and sponsors data quality improvement plans that cross organisational boundaries. Without clear ownership, even a technically sound model will fragment as local projects make pragmatic but diverging choices.
Security is not just about perimeter controls. Model design should plan for compartmentalisation, encryption in transit and at rest, robust audit trails and tamper-evident logging. Patient-level audit queries must be fast: clinicians and IG teams need to answer “who saw what, and when?” within minutes, not days. Where possible, incorporate pseudonymisation patterns for analytics, ensuring that population-level insights can be generated without unnecessary exposure of identifiable data. For shared care records and coordination platforms, session-based access with break-glass patterns—recording reason codes and notifying Caldicott Guardians where appropriate—protects confidentiality while enabling safe care during emergencies.
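A common pseudonymisation pattern is a keyed hash of the NHS number: stable within one analytics context (so records still link), but not reversible without the key. This is a sketch only; key management, key rotation and re-identification governance are the hard parts and are out of scope here.

```python
import hashlib
import hmac

def pseudonymise(nhs_number: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an NHS number using HMAC-SHA256.
    The same key always yields the same pseudonym (enabling linkage);
    without the key the mapping cannot be reversed or recomputed."""
    return hmac.new(secret_key, nhs_number.encode(), hashlib.sha256).hexdigest()
```

Using different keys per analytics context prevents pseudonyms from being joined across projects that have no lawful basis to link their data.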
Governance and safety practices to bake in
- A clear lawful basis for processing, transparent consent and objection mechanisms, and data minimisation for every field
- Role-based plus attribute-based access control, tuned by care relationship, patient context and consent indicators
- Clinical safety management from the outset: hazard identification, documented mitigations, and safety cases in change control
- A multi-disciplinary ICS body that owns the canonical model, the mappings and the release process
Privacy and trust measures that build confidence
- Compartmentalisation, encryption in transit and at rest, and tamper-evident audit logging
- Fast patient-level audit, so "who saw what, and when?" is answered in minutes, not days
- Pseudonymisation patterns for analytics, so population-level insight does not expose identifiable data unnecessarily
- Break-glass access with reason codes and Caldicott Guardian notification for emergencies
Even with strong standards and governance, teams still need pragmatic patterns to convert intent into working systems. One effective approach is “profile-first design”. Start by drafting FHIR profiles for the core resources in your pathway—Observation for vital signs, Condition for long-term conditions, MedicationStatement or MedicationRequest for current medications, Encounter and EpisodeOfCare for context. Use examples grounded in real clinical notes and letters, not synthetic data. Iterate with clinicians until the profile and example align with how they record and interpret the information day-to-day. This avoids the common trap where profiles look elegant but are functionally misaligned with clinical reality.
Another pattern is “minimal viable exchange” (MVE). Rather than aiming to cover every possible field, prioritise the 10–15 elements that deliver disproportionate value in the target pathway. For a virtual ward, that might be patient demographics, care contacts, active problems, current medications, allergies, escalation plan, latest relevant observations and thresholds. Ship this narrow slice end-to-end with strong quality and governance, then layer in additional fields as adoption grows. MVEs build trust, create momentum and provide live feedback that improves the model more than months of committee debate ever could.
For cross-system analytics, the “semantic lakehouse” model is helpful. Store incoming FHIR resources in their native form for traceability and re-ingest; project them into a curated, analytics-friendly schema for performance; and maintain semantic links (patient identifiers, code systems, provenance) so analysts can trace every metric back to source. This pattern allows the same canonical semantics to power operational dashboards, risk stratification and service planning without forking the meaning of fields across pipelines. Crucially, publish semantic definitions and data dictionaries alongside the model so that analysts and clinicians reason about metrics consistently.
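The projection step of that pattern can be sketched as a function that flattens a FHIR Observation into an analytics row while carrying the semantic links forward. The SNOMED code and source URL in the test are illustrative; real pipelines would also handle component observations and absent fields more defensively.

```python
def project_observation(resource: dict) -> dict:
    """Project a FHIR Observation into a flat, analytics-friendly row
    while preserving semantic links: patient identifier, code system,
    code and source provenance all survive the flattening."""
    coding = resource["code"]["coding"][0]
    qty = resource.get("valueQuantity", {})
    return {
        "patient_id": resource["subject"]["identifier"]["value"],
        "code_system": coding["system"],   # keeps the terminology traceable
        "code": coding["code"],
        "value": qty.get("value"),
        "unit": qty.get("unit"),
        "effective": resource.get("effectiveDateTime"),
        "source": resource.get("meta", {}).get("source"),  # provenance link
    }
```

Because the row retains the code system and source alongside the value, any metric computed downstream can be traced back to the originating resource and terminology.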
Provenance-rich event logs deserve special care. Develop a consistent event envelope that carries who-what-where-when-why across all domains. Include the original source identifiers, the mapping version used, and a checksum of the input payload. This attention to detail pays off when investigating discrepancies between systems—why a medication appears in one view but not another, or why an allergy was suppressed. With high-quality event logs, your integration support team can answer these questions quickly, preventing erosion of trust in shared records.
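A consistent event envelope of this kind might look like the sketch below: who/what/where/when plus the mapping version and a checksum over the canonicalised payload, so two systems can later prove they processed the same input. Field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_envelope(event_type: str, payload: dict,
                   source_system: str, mapping_version: str) -> dict:
    """Build a provenance-rich event envelope. The payload is serialised
    with sorted keys so the checksum is stable regardless of dict order,
    making cross-system discrepancy investigations tractable."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return {
        "event_type": event_type,
        "source_system": source_system,        # where (originating system)
        "mapping_version": mapping_version,    # which transformation produced it
        "occurred_at": datetime.now(timezone.utc).isoformat(),  # when
        "payload_sha256": hashlib.sha256(canonical.encode()).hexdigest(),
        "payload": payload,
    }
```

When a medication appears in one view but not another, comparing checksums and mapping versions narrows the investigation to either the input data or a specific mapping release.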
Data models only succeed when they serve the flow of care. Start with concrete pathways—falls prevention, end-of-life care, maternity, mental health crisis response, long-term condition management—and trace the information needs across settings. For each hand-off, define who needs what information, how quickly, at what fidelity, and in what context. A maternity pathway, for example, spans GP referral, booking, scans, screening, antenatal visits, birth, postnatal care and health visiting. Each step has specific information hand-offs: screening results, risk factors, birth plans, safeguarding notes, neonatal observations. Modelling this pathway end-to-end surfaces the shared denominators (e.g., allergies and medications) and the specialised structures (e.g., parity, gravidity, estimated due date calculations).
Design for the user interface as much as the API. If community nurses must open three different screens to capture structured observations, structured data entry will deteriorate and the model will starve. Work with suppliers to streamline capture of structured fields critical to interoperability, embedding SNOMED pickers and unit guards into clinical workflows. Where free text is unavoidable, invest in natural language processing pipelines that extract key signals (with confidence scores and human validation), rather than pretending free text does not exist. The goal is not to eliminate narrative—clinicians need it—but to ensure that the canonical data needed for safe hand-offs is reliably captured alongside it.
Think longitudinally. Many clinical decisions depend on trend, not point-in-time values. Design your Observation profiles to carry method, body site, device and position when relevant; capture the context (resting vs ambulatory blood pressure), and enforce units using UCUM so that trend charts do not silently mix incompatible measurements. For problem lists, represent status and verification properly; for medications, differentiate between prescribed, dispensed and administered records, and record reasons for change or discontinuation. These details allow downstream systems to make safe inferences and present meaningful longitudinal views to clinicians.
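A simple unit guard illustrates the UCUM point: normalise every pressure reading to one canonical unit before it reaches a trend chart, and reject units you do not recognise rather than guessing. The conversion factor from kPa is approximate, and the supported-unit table is deliberately tiny for illustration.

```python
# Canonical unit guard: convert supported pressure units to mm[Hg]
# (the UCUM code for millimetres of mercury) before charting trends.
TO_MMHG = {
    "mm[Hg]": 1.0,     # already canonical
    "kPa": 7.50062,    # approximate factor: 1 kPa ≈ 7.50062 mmHg
}

def to_canonical_mmhg(value: float, ucum_unit: str) -> float:
    """Normalise a pressure reading to mm[Hg]; refuse unknown units so
    trend charts never silently mix incompatible measurements."""
    if ucum_unit not in TO_MMHG:
        raise ValueError(f"Unsupported pressure unit: {ucum_unit}")
    return round(value * TO_MMHG[ucum_unit], 1)
```

Failing loudly on an unknown unit is the safety-critical choice: a chart that silently plots kPa next to mmHg looks plausible and misleads clinicians.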
Finally, respect the distinctiveness of mental health, children’s services and social care. Data models for these domains must carefully handle sensitive concepts, legal restrictions and multidisciplinary contributions. Consider data segmentation by sensitivity labels, so that particularly sensitive entries require explicit user action to reveal and are excluded from certain default views. Provide mechanisms to record patient preferences and sharing restrictions at a granular level, and test these behaviours with service users and clinicians to avoid unintended consequences.
Interoperability is a team sport. Most NHS providers rely on a mix of electronic patient record (EPR) platforms, departmental systems and niche tools. Success depends on constructive supplier engagement anchored in clear, testable requirements. Publish conformance packs that define your canonical profiles, example payloads, API expectations, error codes and performance SLAs. Provide self-service validation tools—profile validators, terminology checkers and synthetic data sets—so suppliers can test before coming to joint assurance. This reduces the back-and-forth of manual reviews and accelerates delivery.
Run formal conformance testing with traceable outcomes. For each use case, define objective criteria: payloads accepted and returned, error handling when mandatory fields are missing, response times, subscription behaviours, and how gracefully systems handle version upgrades. Automate as much of this as possible; manual testing does not scale and is easily skewed by test data quirks. Where suppliers demonstrate conformance, record and publish the evidence internally so other projects can leverage it. Where they do not, use the findings to negotiate remediation roadmaps rather than re-litigating requirements.
Be pragmatic about legacy systems. Some cannot emit FHIR or SNOMED today. For these, deploy edge adapters that translate proprietary formats into your canonical model. But set a trajectory: adapters are a bridge, not a permanent destination. Include roadmap commitments in contracts, with incentives for native support of UK Core and PRSB-aligned exchanges. Meanwhile, protect patient safety by ensuring adapters are maintained, versioned and included in your clinical safety cases; an ungoverned adapter is as risky as an unvetted clinical device.
Consider user-centred certification within your ICS. Beyond pure technical conformance, evaluate how interoperable features show up in clinicians' workflows. Does the GP system actually file inbound documents into the right templates? Are allergies de-duplicated sensibly? Can clinicians see provenance and act on it? These pragmatic tests often reveal issues that pure payload validation misses. Closing this loop with suppliers and clinical teams leads to solutions that delight rather than frustrate.
Interoperability exists to improve care, not to satisfy architectural elegance. Define outcome measures linked to the pathways your model serves—time from referral to triage, avoidance of duplicate tests, reduction in harm events related to incomplete allergy data, patient satisfaction with care co-ordination, staff time saved capturing duplicate information. Tie these outcomes to your release plan: when a new profile goes live or a mapping improves, track the expected change. This data is the best argument for continued investment and the surest way to keep diverse stakeholders aligned.
Build a culture of continuous learning. Publish incident reviews where data model or mapping issues contributed to near misses, along with the fixes. Celebrate data quality improvements and show clinicians where their structured documentation has directly improved care transitions or reduced re-work. Encourage analysts and developers to contribute back improvements to value sets and example libraries. Over time, this open, collaborative posture becomes a competitive advantage for your ICS: new projects start faster and deliver more safely because the groundwork is solid.
Sustainability also means talent and tooling. Document your profiles, mappings and governance in accessible repositories. Train clinical informaticians who can bridge between frontline practice and data modelling, and develop engineers who understand both FHIR intricacies and NHS operational realities. Invest in terminology services that handle SNOMED CT and dm+d properly—synonyms, subsumption, post-coordination where appropriate—and integrate those services into both data entry and validation pathways. The result is a self-reinforcing ecosystem where models, systems and people evolve together.
Finally, plan for the next horizon. Patient-generated health data from home monitoring and wearables, genomic data entering routine care, and AI-assisted documentation will all put new demands on your data model. Resist the urge to bolt on ad-hoc fields. Instead, extend the canonical model deliberately, re-use emerging profiles where credible, and pilot with clear clinical safety oversight. When the future arrives, it will integrate cleanly with what you have already built.