Written by Technical Team | Last updated 20.03.2026 | 16-minute read
Digital health interoperability is often described as a technical challenge, but in clinical practice it is first and foremost a safety challenge. When information moves between systems, the central question is not simply whether the message was transmitted, mapped or rendered. The real question is whether the right clinician, in the right context, receives the right information at the right time in a form that supports safe care. That is why interoperability must sit squarely inside the clinical safety case, rather than being treated as a separate architectural or procurement workstream.
Within the NHS and wider health and care ecosystem in England, DCB0129 and DCB0160 provide the foundation for managing these risks. DCB0129 focuses on the manufacturer’s responsibilities in the development of health IT systems, while DCB0160 focuses on the responsibilities of organisations deploying and using those systems. Together, they create a shared safety model: suppliers must design for safe use, and deploying organisations must assure safety within their own operational environment. Interoperability sits at the meeting point between those two responsibilities, because hazards rarely arise from software logic alone. They emerge from the interaction between products, data standards, workflows, local policies, user assumptions and organisational controls.
This matters because modern care pathways are increasingly dependent on digital hand-offs. Primary care, community services, urgent care, virtual wards, diagnostics, prescribing, referral management, patient engagement tools and clinical decision support all rely on data exchange. A product may be clinically safe in isolation and still become unsafe when deployed into a connected environment where data is incomplete, delayed, misfiled, duplicated, transformed incorrectly or displayed without adequate context. Clinical safety cases that fail to address interoperability in depth can therefore look complete on paper while leaving significant residual risk in live use.
A robust approach requires more than listing interface risks in a hazard log. It requires organisations to understand how interoperability changes the hazard landscape, how DCB0129 and DCB0160 divide and connect responsibilities, and how controls should be designed across technology, workflow, governance and assurance. In practice, the strongest safety cases are those that treat interoperability not as a feature, but as a socio-technical system with failure modes that must be anticipated, tested, mitigated and monitored over time.
Interoperability increases clinical utility because it can reduce transcription, improve continuity, support decision-making and make records more accessible across care settings. Yet every new interface also creates new pathways for harm. When data is exchanged between systems with different models, assumptions and user interfaces, clinical meaning can shift. A field may map successfully at a technical level while still being misunderstood by the receiving user. A message may arrive on time but be attached to the wrong workflow queue. A record may appear complete while hiding the absence of critical historical or contextual data. In all of these cases, the interoperability layer becomes part of the clinical safety argument.
This is one reason safety incidents in digital health are often not caused by a single catastrophic software failure. They are more likely to arise from small mismatches across systems. An allergy may be coded differently in the source and destination. A discharge notification may arrive after a medication change has already been actioned. A referral may be accepted by an integration engine but never become visible to the receiving team in a meaningful operational queue. A clinician may assume that a structured feed represents the full record when it only represents a filtered subset. These are classic interoperability hazards because the risk lies in the gap between what the technology does and what the clinical user believes it does.
The safety significance of interoperability is amplified by the pace and scale of digital transformation. Organisations are now expected to connect multiple products, often from different suppliers, across fragmented pathways. Procurement models can encourage modularity, but modularity also fragments accountability unless roles are defined clearly. The clinical safety case must therefore answer difficult questions that go beyond interface testing: who owns the end-to-end safety argument, how is clinical intent preserved across systems, what assumptions are made about source data quality, and how will local services know when a safety control has failed?
Interoperability also changes the nature of latent risk. A defect in a stand-alone application may affect a single user or workflow. A defect in a shared interface, terminology mapping or event-routing logic can propagate widely and silently. It may influence multiple settings, multiple clinicians and multiple patient cohorts before anyone notices. This propagation effect is precisely why interoperability hazards deserve explicit prominence in both DCB0129 and DCB0160 documentation. They are not peripheral implementation details; they are system-level safety risks.
DCB0129 requires manufacturers to apply clinical risk management throughout the lifecycle of a health IT system where there is a potential for patient harm. In interoperability terms, that means the supplier cannot limit its safety analysis to the internal logic of its own application. It must examine foreseeable risks associated with importing, exporting, transforming, displaying and acting upon shared data. The manufacturer’s clinical safety documentation should therefore explain what data the product expects, what constraints apply, what safety assumptions underpin its use, what edge cases are known, and what controls have been built to reduce the risk of harm. That documentation should not merely assert compliance. It should help deploying organisations understand the safe operating envelope of the product.
DCB0160 shifts the focus to the organisation implementing and using the system. Even where a supplier has completed a thorough DCB0129 process, the deploying organisation must still determine whether the product is safe in its own local environment. Interoperability is often where this local context becomes decisive. Local directory structures, role profiles, patient identification processes, business continuity arrangements, workflow ownership, escalation routes, message volumes and operational staffing patterns can all alter the real-world risk. A safe supplier design can become unsafe in a deployment where incoming results are not monitored, reconciliation is weak, terminology is used inconsistently or responsibilities for acting on shared information are unclear.
The most effective interpretation of DCB0129 and DCB0160 is therefore not adversarial or transactional. It is collaborative. The manufacturer should provide a credible clinical safety case, a current hazard log, evidence of mitigations, known limitations, and clear implementation guidance. The deploying organisation should interrogate that material, validate it against local workflows, identify deployment-specific hazards, and produce its own safety case that explains why the technology is safe in context. Interoperability risks should be among the first things reviewed, because they often sit at the supplier-customer boundary where assumptions are easiest to miss.
A mature compliance approach also recognises that interoperability hazards frequently span multiple organisations. A referral platform may depend on coding and routing performed by one supplier, operational triage performed by another organisation, and onward actions by a receiving clinical service. In such pathways, the strongest DCB0160 work is not limited to the host trust, practice or provider. It actively clarifies inter-organisational responsibilities, service-level dependencies and escalation mechanisms. The safety case should show how control is maintained when messages fail, when downstream systems are unavailable, when patient identity confidence is reduced, or when clinical acknowledgements are delayed.
For both standards, the presence of a Clinical Safety Officer is essential, but governance alone is not enough. Interoperability safety evidence should be visible in design artefacts, test scripts, interface specifications, release procedures, training materials and operational monitoring plans. If a hazard control exists only in the narrative of a safety case and not in everyday technical and operational practice, the control is weak. Good compliance means the safety case is reflected in the actual design and use of the service.
Interoperability hazard analysis must be specific, clinically grounded and realistic. Generic statements such as “interface failure may result in harm” do not provide useful assurance. A strong hazard log should describe credible chains of events, foreseeable causes, clinical consequences, existing controls, residual risk and ownership. It should also make clear whether the hazard belongs primarily to the supplier, the deployer, or both.
The most common interoperability hazards usually cluster around a handful of themes: patient identity and matching; timing, sequencing and delay of messages; terminology and coding mismatches between source and destination; incomplete or filtered data presented as if complete; loss of context, status or prominence in the receiving display; and unclear workflow routing and ownership of inbound information.
Each of these themes can produce multiple hazard records, and each record should be framed around patient harm rather than IT failure. For example, the hazard is not simply that a diagnostic result message is delayed. The hazard is that a time-critical result is not reviewed or acted upon, leading to delayed diagnosis, deterioration or avoidable escalation of care. That distinction matters because it forces the safety team to think in clinical pathways, not only technical incidents.
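The harm-centred framing described above can be sketched as a simple data structure: a hazard record that leads with the clinical consequence, traces a credible causal chain, and names an owner. The field names and the example record are illustrative, not drawn from the standards:

```python
from dataclasses import dataclass

@dataclass
class InteropHazard:
    """Illustrative hazard-log record framed around patient harm."""
    hazard_id: str
    clinical_harm: str          # harm to the patient, not the IT failure
    cause_chain: list[str]      # credible chain of events leading to harm
    existing_controls: list[str]
    residual_risk: str          # e.g. "low", "medium", "high"
    owner: str                  # "supplier", "deployer", or "shared"

# Example: the delayed-result hazard discussed above, expressed as harm.
delayed_result = InteropHazard(
    hazard_id="HAZ-014",
    clinical_harm="Time-critical result not reviewed, leading to delayed diagnosis",
    cause_chain=[
        "Result message delayed at integration engine",
        "No alert raised on ageing unacknowledged results",
        "Receiving team assumes the absence of a result means the test is pending",
    ],
    existing_controls=["Acknowledgement tracking", "Daily exception-queue review"],
    residual_risk="medium",
    owner="shared",
)
```

Writing the `clinical_harm` field first, before any technical cause, is one practical way to keep the log anchored in pathways rather than incidents.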
A high-quality interoperability hazard log also distinguishes between source-system hazards, transfer hazards and receiving-system hazards. A problem can originate in the sending system, emerge during transport or transformation, or arise when the receiving application renders the information ambiguously. This layered view is especially valuable for multi-supplier environments because it reduces the temptation to assign blame simplistically. In practice, many serious hazards depend on several small weaknesses aligning across layers.
Consider the example of medication data flowing into a shared care or urgent care setting. The source record may contain historical and current items with different statuses. The interface may pass all entries correctly, but the receiving system may display them in a compact list without enough prominence for status, dates or provenance. The clinician may then interpret an inactive medicine as current, or fail to see that the list is only partial. The hazard is not caused solely by data quality, or solely by the interface, or solely by the display. It is created by the combined design of the interoperable pathway. The hazard log should capture that combined reality.
Equally important is the treatment of assumptions. Many interoperability hazards arise because users infer more from the data than the system can safely support. If a system shows an imported allergy list without clearly signalling whether it is complete, curated, reconciled or date-limited, clinicians may over-trust it. If a dashboard aggregates data from several sources but hides refresh times or source confidence, users may act on stale information. Safety cases should therefore document not just failure scenarios, but also assumption hazards: situations where the system functions as designed but still encourages unsafe interpretation.
Residual risk must also be credible. Interoperability hazards are rarely eliminated entirely, so the safety case should explain why the remaining risk is acceptable and how it is being actively managed. Controls may reduce likelihood, reduce severity, improve detectability or create defence in depth, but they rarely remove the need for vigilance. An honest, well-maintained hazard log is more valuable than a polished but unrealistic one.
Mitigating interoperability risk requires a layered approach. Technical controls are important, but they are never sufficient on their own because many hazards depend on workflow, user cognition and governance. The safest deployments combine strong design principles, disciplined operational controls and ongoing surveillance after go-live.
At the technical level, identity management is one of the most critical defences. Systems exchanging clinical data should use robust patient matching approaches, minimise ambiguity, and make confidence and provenance visible where relevant. When identity cannot be confirmed safely, the workflow should degrade gracefully rather than forcing risky action. Similarly, data contracts should be explicit about message content, optionality, coding, timing, error handling and acknowledgement logic. Safety improves when interfaces are designed to fail visibly rather than silently. A rejected message, prominent exception queue or interrupted workflow may be inconvenient, but it is often safer than a hidden partial failure.
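A minimal sketch of the "fail visibly" principle described above: an inbound message that cannot be matched confidently is routed to a monitored exception queue rather than filed silently or forced into the record. The confidence threshold and message fields are assumptions for illustration:

```python
def route_inbound(message: dict, match_confidence: float,
                  workqueue: list, exception_queue: list,
                  threshold: float = 0.95) -> str:
    """Route a message to the clinical workqueue only when patient
    matching is confident; otherwise fail visibly into an exception
    queue that a named team is responsible for reviewing."""
    if match_confidence >= threshold:
        workqueue.append(message)
        return "filed"
    # Degrade gracefully: surface the failure instead of guessing.
    exception_queue.append({
        "message": message,
        "reason": f"low match confidence ({match_confidence:.2f})",
    })
    return "exception"

wq, eq = [], []
route_inbound({"type": "discharge_summary"}, 0.99, wq, eq)   # filed
route_inbound({"type": "lab_result"}, 0.40, wq, eq)          # exception
```

The inconvenience of an interrupted workflow is the point: the exception queue makes the partial failure visible and assignable, instead of hiding it.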
Display design is another major mitigation area that is often underestimated. Imported information should not be rendered as though it were native, current and complete unless that claim is justified. Source system, author, timestamp, verification status, and any known limitations should be intelligible to the clinical user. Where data is summarised or filtered, the system should make this obvious. Good interface design helps clinicians understand what the information means, how much confidence to place in it, and what further checking may be needed before acting.
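A sketch of how imported data might carry its provenance into the display layer rather than being rendered as though native. The field names and output format are illustrative assumptions:

```python
def render_imported_item(item: dict) -> str:
    """Format an imported record so that source system, timestamp and
    verification status are visible to the clinical user, with an
    explicit marker when the view is a filtered subset."""
    line = (f"{item['description']} "
            f"[source: {item['source_system']}, "
            f"recorded: {item['recorded_at']}, "
            f"status: {item['verification_status']}]")
    if item.get("filtered_view"):
        line += " (partial view: filtered subset of source record)"
    return line

rendered = render_imported_item({
    "description": "Penicillin allergy",
    "source_system": "GP system",
    "recorded_at": "2021-05-04",
    "verification_status": "unverified",
    "filtered_view": True,
})
print(rendered)
```

The design choice being illustrated is that completeness and confidence claims are made explicitly in the display, so the clinician is never left to infer them.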
Workflow controls are equally important. Every significant inbound data flow should have a defined owner, a review process, an escalation route and a recovery process for backlog or failure. It is not enough for a supplier to show that a message arrives in a technical inbox if no local team has clear responsibility for acting on it within clinically acceptable timeframes. DCB0160 safety cases are strongest when they translate interoperability risks into operational accountabilities: who checks exception queues, who reconciles incomplete tasks, who investigates duplicates, who confirms receipt of urgent items, and how these activities are audited.
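The operational accountabilities above can be backed by simple surveillance, for example a check that flags any inbound queue whose oldest unactioned item exceeds a clinically agreed age. The four-hour threshold and queue names here are illustrative assumptions:

```python
from datetime import datetime, timedelta

def backlog_alerts(queues: dict[str, list[datetime]],
                   max_age: timedelta, now: datetime) -> list[str]:
    """Return the names of inbound queues whose oldest unactioned
    item is older than the clinically agreed maximum age."""
    breached = []
    for name, arrival_times in queues.items():
        if arrival_times and now - min(arrival_times) > max_age:
            breached.append(name)
    return breached

now = datetime(2026, 3, 20, 9, 0)
queues = {
    "urgent_results": [now - timedelta(hours=6)],
    "admin_updates": [now - timedelta(minutes=30)],
}
# Suppose urgent diagnostic items must be reviewed within 4 hours.
print(backlog_alerts(queues, timedelta(hours=4), now))  # → ['urgent_results']
```

A check like this only works as a safety control if its alerts have a named owner and an escalation route, which is exactly what the DCB0160 safety case should document.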
Training and usability controls also deserve serious attention. Interoperable systems can create subtle cognitive traps, especially where clinicians move between products with different interaction patterns. Training should therefore focus not only on button clicks, but on clinical interpretation: what the shared data includes, what it excludes, where it comes from, how current it is, and what to do when something looks inconsistent. This kind of training is far more effective when built around realistic scenarios and near misses rather than abstract system descriptions.
Strong mitigation plans commonly include the following elements: robust patient matching with graceful degradation when identity cannot be confirmed; explicit data contracts covering content, coding, timing, error handling and acknowledgements; interfaces designed to fail visibly through rejected messages and monitored exception queues; display of source, timestamp and verification status for imported information; named operational ownership of every significant inbound flow; scenario-based training on the clinical interpretation of shared data; and ongoing monitoring with defined thresholds, audit and escalation routes.
Perhaps the most powerful mitigation principle is defence in depth. No single control should be trusted to carry the entire safety burden. If safe action depends entirely on perfect data mapping, that is fragile. If it depends entirely on clinician vigilance, that is also fragile. Better practice layers controls: structured validation, visible provenance, meaningful alerts, human review steps, downtime procedures, audit trails and governance oversight. Each layer compensates for the inevitable weakness of the others.
Another essential point is that mitigation should match clinical criticality. Not every interoperability issue deserves the same degree of control. A delay in a low-risk administrative update may require modest safeguards, while a failure affecting urgent diagnostic communication, medicines information or safeguarding data may require much stronger design and monitoring. Safety teams should therefore prioritise hazards according to likely clinical consequence, detectability and pathway dependence, rather than attempting to treat every interface defect as equal.
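One simple way to express this prioritisation is a score combining clinical consequence, detectability and pathway dependence. The scales and multiplicative weighting below are illustrative assumptions, not a scoring scheme prescribed by the standards:

```python
def hazard_priority(consequence: int, detectability: int, dependence: int) -> int:
    """Rank interoperability hazards for mitigation effort.

    consequence:   1 (minor) to 5 (severe clinical harm)
    detectability: 1 (failure is obvious) to 5 (failure is silent)
    dependence:    1 (pathway has alternatives) to 5 (pathway relies
                   entirely on this data flow)
    """
    return consequence * detectability * dependence

# A silent failure in urgent diagnostic messaging outranks a visible
# delay to a low-risk administrative update.
urgent_results = hazard_priority(consequence=5, detectability=4, dependence=5)  # 100
admin_update = hazard_priority(consequence=2, detectability=2, dependence=1)    # 4
```

Whatever scheme is used, the point of the score is to direct design and monitoring effort, not to replace clinical judgement about acceptability.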
A defensible clinical safety case for interoperability is not simply a collection of mandatory documents. It is a coherent argument that explains why the system is acceptably safe for its intended use in a defined environment. That argument should be intelligible to clinical, technical and governance stakeholders alike. It must connect intended benefits, known hazards, implemented controls, residual risks, operating assumptions and assurance activities into one credible narrative.
For manufacturers, the safety case should show that interoperability was considered from the earliest stages of design. That includes intended clinical use, foreseeable misuse, data dependencies, coding assumptions, known interface limitations, expected external system behaviour and safe degradation modes. Where the product depends on third-party services, the documentation should make that dependency explicit. Where risks are transferred to deploying organisations, the transfer should be clear, justified and accompanied by practical implementation guidance rather than vague caveats.
For deploying organisations, the DCB0160 safety case should explain how the product behaves within the local pathway, not merely within the supplier’s generic model. This means documenting local workflows, local role ownership, operational hours, exception management, manual fallback, downstream dependencies and the real constraints of the clinical setting. A safety case for a virtual ward, for example, should look materially different from one for a GP integration, even if the same core platform is used. Context changes risk, and a credible safety case must show that context has been understood.
A particularly important feature of interoperability safety cases is traceability. Hazards should map to controls, controls should map to design or operational artefacts, and residual risks should map to approval decisions and review plans. When a control depends on training, there should be evidence of training design and delivery. When a control depends on monitoring, there should be metrics, thresholds and ownership. When a control depends on local policy, that policy should exist, be current and align with how the service actually operates. Without traceability, the safety case becomes rhetorical rather than evidential.
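Traceability of this kind can even be checked mechanically: every hazard should map to at least one control, and every cited control to at least one evidence artefact. A minimal sketch, using made-up identifiers:

```python
def traceability_gaps(hazards: dict[str, list[str]],
                      evidence: dict[str, list[str]]) -> list[str]:
    """Return hazards that lack controls and controls that lack
    evidence artefacts, so gaps surface before sign-off."""
    gaps = []
    for hazard_id, controls in hazards.items():
        if not controls:
            gaps.append(f"{hazard_id}: no controls")
        for control in controls:
            if not evidence.get(control):
                gaps.append(f"{control}: no evidence artefact")
    return gaps

hazards = {
    "HAZ-014": ["CTRL-ack-tracking"],
    "HAZ-021": [],  # hazard recorded but never mitigated
}
evidence = {"CTRL-ack-tracking": ["test-script-17", "monitoring-dashboard"]}
print(traceability_gaps(hazards, evidence))  # → ['HAZ-021: no controls']
```

A check like this does not prove the controls work; it only proves the safety case's chain of evidence is unbroken, which is the precondition for meaningful review.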
The best safety cases also stay alive after deployment. Interoperable systems evolve continuously through software releases, pathway redesign, new integration partners, revised coding standards and changing service demand. Each of these can invalidate earlier assumptions. Clinical safety review should therefore be built into change control, release management and incident learning. Hazard logs should be updated when near misses occur, when users report confusion, when queues back up, when data quality anomalies emerge, or when integration rules change. A safety case that is not revised in response to operational reality will quickly drift out of date.
There is also a strategic dimension to this work. As health systems pursue more connected, person-centred care, interoperability will increasingly be associated with automation, decision support and population-level orchestration. That means future clinical safety cases will need to move beyond basic message exchange assurance and address compound risks: algorithmic over-reliance on shared data, automated tasking based on incomplete records, cross-setting escalation logic, and the interaction between human teams and digital workflow engines. Organisations that build strong DCB0129 and DCB0160 foundations now will be much better prepared for those next-generation risks.
Ultimately, digital health interoperability should be seen as a patient safety discipline as much as an architecture discipline. DCB0129 and DCB0160 offer a structured way to manage that reality, but compliance should not be reduced to document production. The real objective is safer care in a connected environment. That requires manufacturers to design honestly for real-world complexity, deploying organisations to examine local pathways rigorously, and both parties to treat hazards as dynamic and shared. When that happens, the clinical safety case becomes more than a governance requirement. It becomes a practical mechanism for preventing harm as information moves across the modern health and care landscape.