Written by Technical Team | Last updated 17.10.2025 | 13 minute read
Interoperability and integration are often used in the same breath, but they solve different problems in healthcare. Integration is the act of connecting two or more systems so data can move from A to B for a defined purpose. It’s the wiring—interfaces, adaptors, transformations and workflows—that makes a particular exchange work today. Interoperability, by contrast, is a set of enduring capabilities that allow many systems to understand, trust and reuse each other’s data safely over time. Where integration is a project, interoperability is a property of an ecosystem.
The distinction shows up in how products behave in the wild. An integration might map a hospital’s lab results feed into a care-coordination application by transforming each HL7 v2 message and pushing it into the app’s model. If the hospital changes a field or adds a new test code, that bespoke interface often breaks. Interoperability aims to reduce that fragility by using common information models, shared code systems and predictable APIs—so that new data elements can be discovered and handled without rewriting the entire pipeline.
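To make the fragility concrete, here is a minimal sketch (with an invented message and hard-coded field positions) of the kind of positional HL7 v2 parsing that bespoke interfaces often rely on. The code assumes the test code always sits in OBX-3 and the value in OBX-5, so a site that reorders, renames or extends those fields silently breaks the mapping:

```python
def parse_obx_value(segment: str) -> dict:
    """Pull the local test code, value and units out of a pipe-delimited OBX segment.

    Brittle by design: the field positions are hard-coded, so any change
    to the site's message structure breaks this interface.
    """
    fields = segment.split("|")
    return {
        "local_code": fields[3].split("^")[0],  # OBX-3: observation identifier
        "value": fields[5],                     # OBX-5: observation value
        "units": fields[6],                     # OBX-6: units
    }

# A hypothetical numeric glucose result from one site's lab feed.
obx = "OBX|1|NM|GLU^Glucose^LOCAL|1|5.4|mmol/L|3.9-5.8|N"
print(parse_obx_value(obx))  # {'local_code': 'GLU', 'value': '5.4', 'units': 'mmol/L'}
```

An interoperable pipeline would instead resolve `GLU` against a shared terminology at the boundary, so the downstream model never depends on one site's local codes.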
In healthcare, the stakes are unusually high because data is safety-critical, privacy-sensitive and distributed across many actors. An interoperable system is one where a discharge summary from a hospital can be reliably ingested by a GP system; where consent signals travel with the record; where medications are encoded in a standard dictionary; and where applications authenticate securely to request just the data they need. Integration alone can be brittle and expensive; interoperability adds semantic context, governance and scale.
Interoperability gives innovators a compounding advantage: each new connection becomes easier than the last. When your application speaks a widely adopted API and uses shared terminologies (for example SNOMED CT for problems and procedures, LOINC for lab tests, dm+d or RxNorm for medicines), you spend less time building one-off mappings and more time shipping features. That shortens the sales cycle because your implementation risk is lower, and it reduces the total cost of ownership for your customers.
It also improves clinical safety and outcomes. When a remote monitoring platform records an observation, the meaning of that measurement must persist beyond your own database. A blood pressure reading encoded with a recognised code, units and context can be accurately merged into the longitudinal patient record and be interpreted by decision support downstream. Without that semantic clarity, data can be misread or dropped entirely. Interoperability is therefore a patient-safety feature, not only a technical nicety.
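As a hedged sketch of what that semantic clarity looks like in practice, the function below builds a FHIR R4-style Observation for a home blood pressure reading. The patient identifier is hypothetical; the LOINC codes (85354-9 for the panel, 8480-6 systolic, 8462-4 diastolic) and the UCUM unit `mm[Hg]` are the commonly used ones, but a real implementation should follow the profile its customers mandate:

```python
def blood_pressure_observation(patient_id: str, systolic: int, diastolic: int,
                               effective: str) -> dict:
    """Represent a BP reading so code, units and timing context travel with it."""
    def component(loinc: str, display: str, value: int) -> dict:
        return {
            "code": {"coding": [{"system": "http://loinc.org",
                                 "code": loinc, "display": display}]},
            "valueQuantity": {"value": value, "unit": "mmHg",
                              "system": "http://unitsofmeasure.org",
                              "code": "mm[Hg]"},
        }

    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9",
                             "display": "Blood pressure panel"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": effective,  # timing context travels with the value
        "component": [
            component("8480-6", "Systolic blood pressure", systolic),
            component("8462-4", "Diastolic blood pressure", diastolic),
        ],
    }

obs = blood_pressure_observation("example-123", 142, 91, "2025-10-17T08:30:00Z")
print(obs["component"][0]["valueQuantity"])
```

Because every measurement carries its code system, units and timestamp, a downstream decision-support engine can merge it into the longitudinal record without guessing.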
From a commercial perspective, interoperability protects you against vendor lock-in and market fragmentation. Many health systems run a patchwork of electronic health record (EHR) vendors and a long tail of departmental systems. If your product relies on a specific vendor’s private interface, you will find expansions constrained by that vendor’s roadmap and licensing. If, instead, you build on open standards—such as the HL7® FHIR® family, IHE profiles, SMART on FHIR authorisation, openEHR archetypes where appropriate—you can serve multiple markets without re-architecting core flows.
Lastly, policy and procurement increasingly reward interoperable solutions. Whether you are selling to the NHS in the UK, an integrated care board, a private hospital group or insurers, buyers look for evidence that you align with published standards, can demonstrate data minimisation, and handle consent and audit correctly. Interoperability becomes part of your compliance story, touching UK GDPR, the Data Protection Act 2018, and clinical risk management obligations (such as DCB0129/0160 in England). Being able to show standard endpoints, clear information governance and traceable mappings earns trust early.
It would be a mistake to treat integration as the “bad guy”. In reality, every interoperable ecosystem contains many integrations. The key is to choose patterns that maximise reuse and resilience. Classic healthcare integrations include HL7 v2 feeds for admissions, discharges and transfers (ADT), orders and results (ORM/ORU), DICOM for imaging and modality worklists, CSV extracts for legacy migrations, and increasingly FHIR REST APIs for resources such as Patient, Encounter, Observation, MedicationRequest and DocumentReference. Each of these solves a concrete problem and will remain in play for years, particularly where capital equipment or older clinical systems are involved.
Consider the scenario of a digital therapeutics company that needs to receive a patient roster from a GP practice, send back weekly outcomes, and surface alerts to a clinician’s dashboard. A pragmatic approach might combine an initial bulk data extract to seed the cohort, a FHIR Subscription or polling to keep the roster fresh, and a results interface that posts structured Observations back to the EHR. Where the EHR cannot accept Observations directly, a lightweight integration engine can transform and route the payload into the available channel—perhaps as a PDF via a documents workflow—while still preserving a standards-based representation within your platform. In this way, integration bridges gaps without losing the interoperable core.
The pitfalls start when an integration encodes your product’s domain logic. Hard-coded mappings between local fields and your internal model, custom enumerations with no external equivalents, and untyped JSON blobs tied to the quirks of a single site will slow you down. Upgrades become risky because you are never sure what else depends on that brittle mapping. You can mitigate this by adopting canonical data structures at your boundaries and isolating site-specific logic into declarative configuration and translation layers. That pattern also makes it easier to test: conformance suites, example payloads and validation rules can be run in CI/CD before anything hits production.
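The "isolate site-specific logic into declarative configuration" pattern can be sketched as follows. The site names and local codes are invented; the translation layer refuses unmapped codes rather than guessing, which is exactly the failure mode a conformance suite can exercise in CI:

```python
# Per-site mapping tables live in configuration (in practice, versioned files
# in source control), not in application code.
SITE_MAPPINGS = {
    "st-elsewhere": {
        "GLU": ("http://loinc.org", "2339-0"),  # glucose, mass/volume in blood
        "K":   ("http://loinc.org", "2823-3"),  # potassium in serum/plasma
    },
}

def to_canonical(site: str, local_code: str) -> dict:
    """Translate a site's local code into the canonical terminology, or fail loudly."""
    try:
        system, code = SITE_MAPPINGS[site][local_code]
    except KeyError:
        raise ValueError(f"No approved mapping for {local_code!r} at site {site!r}")
    return {"system": system, "code": code, "local_code": local_code}

print(to_canonical("st-elsewhere", "GLU"))
```

Adding a new site then means adding a configuration entry and re-running the test suite, not editing pipeline code.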
Security and consent deserve particular attention. An integration that bypasses consent controls or relies on shared service accounts can work in a pilot but will not scale safely. A more interoperable approach uses standards-based authorisation (for example, OAuth 2.0 and OpenID Connect), scopes that reflect clinical roles, and an auditable consent model that travels with the record. The operational benefit is significant: rotating secrets, revoking access and proving least-privilege become routine tasks rather than emergency rewrites.
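A minimal sketch of scope-based authorisation in the SMART on FHIR v1 style: the access token carries scopes such as `system/Observation.read`, and every request is checked against them before any data is released. The token handling is deliberately simplified; a real service would validate a signed JWT from its authorisation server:

```python
def scope_allows(granted_scopes: set[str], resource: str, action: str) -> bool:
    """Return True if the granted scopes permit `action` on `resource`."""
    # Accept an exact match, a resource-level wildcard, or a full wildcard.
    wanted = {f"system/{resource}.{action}", f"system/{resource}.*", "system/*.*"}
    return bool(granted_scopes & wanted)

# A token scoped to just the data this backend service needs.
token_scopes = {"system/Observation.read", "system/Patient.read"}
print(scope_allows(token_scopes, "Observation", "read"))          # True
print(scope_allows(token_scopes, "MedicationRequest", "read"))    # False
```

Because access is expressed as standard scopes rather than shared service accounts, revoking or narrowing a partner's access is a configuration change, not a rewrite.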
The last pitfall is performance. Healthcare data sets are large and often slow to query in operational systems. Naïvely polling an EHR for every patient every hour will upset both the EHR team and your cloud bill. Better patterns include event-driven feeds, incremental queries using updated-since parameters, Bulk FHIR for population-level extracts, and caching with appropriate expiry. Integration that respects the operational limits of partners earns goodwill and keeps your SLOs intact.
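The incremental-query pattern can be sketched in a few lines: rather than re-fetching every patient's data each cycle, ask the FHIR server only for resources updated since the last successful sync using the standard `_lastUpdated` search parameter. The base URL is hypothetical and no network call is made here:

```python
from urllib.parse import urlencode

def incremental_search_url(base: str, resource: str, last_sync_iso: str,
                           page_size: int = 100) -> str:
    """Build a FHIR search URL for resources changed since the last sync."""
    params = {
        "_lastUpdated": f"gt{last_sync_iso}",  # "gt" prefix = strictly after
        "_count": page_size,                   # bounded pages, not one huge pull
    }
    return f"{base}/{resource}?{urlencode(params)}"

url = incremental_search_url("https://ehr.example.org/fhir", "Observation",
                             "2025-10-16T00:00:00Z")
print(url)
```

On each run, persist the server's timestamp from the response and use it as the next `last_sync_iso`, so the window never skips or re-reads records.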
A product roadmap that takes interoperability seriously starts with a crisp definition of the “source of truth” for each concept. Decide early which systems own identity, demographics, clinical narrative, imaging, scheduling and metrics. Where your product creates new data (for instance, algorithm outputs or patient-reported outcomes), model it formally rather than passively dumping it into a free-text note. Even if you must deliver a PDF for now, keep the structured data alongside it so you can promote it later.
Standards selection is a strategic decision. HL7 FHIR is a natural choice for most transactional healthcare workflows because it offers a consistent resource model, REST semantics, search parameters and versioning. Within FHIR, prefer mature resources and stick close to established implementation guides. Where long-term semantic modelling is central—such as clinical content in complex pathways—openEHR archetypes and templates can provide a robust foundation, with FHIR used as the exchange layer. For imaging, DICOM remains the lingua franca; for device feeds, IEEE 11073 device standards and FHIR Device/DeviceMetric can complement vendor protocols. The point is not to chase acronyms but to choose standards that your customers already use and that your engineers can implement well.
APIs should be boring and well-documented. Publish a versioned OpenAPI specification for your endpoints, with clear examples and error contracts. Treat validation as a feature: if a caller sends a blood glucose in the wrong units or without a timing context, fail fast with actionable feedback. At the same time, be pragmatic about optionality. Healthcare data is messy, and over-strict schemas can reject real-world records. Aim for “strict on what you send, liberal on what you accept”, and transform inbound data into your canonical model with explicit mappings and audit.
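Treating validation as a feature might look like the sketch below: a glucose payload missing units or a timing context is rejected with messages that tell the caller exactly what to fix. The accepted units and error wording are illustrative choices, not a published specification:

```python
ACCEPTED_GLUCOSE_UNITS = {"mmol/L", "mg/dL"}

def validate_glucose(payload: dict) -> list[str]:
    """Fail fast with actionable feedback; an empty list means the payload is accepted."""
    errors = []
    if payload.get("unit") not in ACCEPTED_GLUCOSE_UNITS:
        errors.append(f"unit must be one of {sorted(ACCEPTED_GLUCOSE_UNITS)}, "
                      f"got {payload.get('unit')!r}")
    if not payload.get("effectiveDateTime"):
        errors.append("effectiveDateTime is required: a glucose value without "
                      "timing context cannot be interpreted safely")
    return errors

print(validate_glucose({"value": 5.4}))  # two actionable errors
print(validate_glucose({"value": 5.4, "unit": "mmol/L",
                        "effectiveDateTime": "2025-10-17T08:30:00Z"}))  # []
```

The same rules can run on your own outbound payloads in CI, which is one half of "strict on what you send, liberal on what you accept".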
Data governance often gets relegated to the appendix of a proposal, but it is the backbone of trust. Define how you capture, store, process and share data, how you manage retention, and how you respond to data subject rights. In the UK, that means demonstrating alignment with UK GDPR, the Data Protection Act 2018 and relevant NHS information standards. Clinically, conduct hazard analyses for new features, maintain safety cases, and ensure your user interfaces present data provenance so clinicians can judge reliability. Interoperability depends on this scaffolding: without clear governance, partners will be reluctant to connect.
To turn all of this into a roadmap that investors and customers can believe in, tie milestones to genuine interoperability outcomes. Instead of “Finish EHR integration Q2”, commit to “Support FHIR R4 read/write for Patient, Observation and DocumentReference; pass conformance tests; roll out to two pilot sites; publish API docs; complete clinical safety case update.” These statements are measurable, and they anchor interoperability in evidence rather than aspiration.
Establish your interoperability foundation with these practical moves:

- Pick standards your customers already use (for most transactional workflows, FHIR R4), and stay close to mature resources and published implementation guides.
- Publish a versioned OpenAPI specification for your endpoints, with clear examples and error contracts.
- Keep site-specific mappings out of code: store them as declarative configuration in source control, with sign-off and automated tests.
- Use standards-based authorisation (OAuth 2.0 and OpenID Connect) with scopes that reflect clinical roles.
- Run conformance suites, example payloads and validation rules in CI/CD before anything reaches production.
- Maintain your clinical safety cases and information governance documentation alongside the technical work, not after it.
Choosing when to integrate and when to invest in interoperability is not a binary call; it’s a series of trade-offs across time horizons. A sensible framework starts with your near-term business goal. If you must prove value in a single ward within six weeks, a narrow integration that delivers a clean, clinician-friendly workflow may be the right call, even if the data lands as a PDF. The important thing is to avoid painting yourself into a corner. Capture structured data alongside the PDF, keep mappings in configuration, and ensure the clinician can trace provenance. Then, as you move from pilot to scale, you can promote those data to standard resources and expand your endpoints.
Cost of change is a second axis. Integrations often look cheaper at the start because they piggyback on whatever interface is available, but every site-specific tweak adds compound interest. Interoperability does carry an upfront cost—choosing and implementing standards, building conformance harnesses, and proving security—but it pays back as you add customers and features. If your go-to-market involves many small providers or multiple regions, that payback arrives sooner.
Risk posture is your third lens. Where data touches diagnosis, prescribing or urgent care, your margin for ambiguity is thin. Interoperability reduces semantic risk by anchoring meaning in shared code systems and consistent models. It also improves operational resilience: if one system changes, your endpoints and validation guardrails help you detect and adapt quickly. Integration without those guardrails can hide silent failures until a clinician spots something is wrong.
Finally, consider the user experience. Clinicians and patients do not care which acronyms you use; they care that the right data is in the right place at the right time. Interoperability enables that seamlessness across settings: a medication entered in pharmacy shows in the ward round list with the same identity; a patient’s wearable data resolves to a recognised vital sign with units; a consent recorded in primary care carries into community services. Integration can glue the pieces together; interoperability makes the glue disappear.
Use this quick diagnostic to decide your next move:

- Do you need to prove value at a single site within weeks? A narrow integration is fine, provided you capture structured data alongside any PDFs and keep mappings in configuration.
- Does your go-to-market span many small providers or multiple regions? Invest in standards now; the upfront cost pays back with each new customer.
- Does your data touch diagnosis, prescribing or urgent care? Anchor meaning in shared code systems and add validation guardrails before you scale.
- Is one vendor’s private interface shaping your roadmap? Plan a migration to open standards before the lock-in compounds.
A practical way to stop the interoperability-versus-integration debate from stalling progress is to define a small, visible slice of your product and deliver it end-to-end with good hygiene. Pick a single outcome that matters to users—say, surfacing abnormal home blood pressure readings in the hospital’s EHR inbox with a clear audit trail—then apply the discipline described above. Build the ingestion using a standard resource (Observation), code each measurement with a recognised code and units, implement OAuth scopes that limit access to just the relevant patient cohort, and publish your conformance tests so the hospital’s integration team can validate quickly. If the hospital requires a PDF for now, generate it too, but keep the structured Observation as the canonical record within your platform.
Parallel to that slice, invest in a reference environment that external partners can use without your engineers on a call. Provide sample payloads, test users, and endpoints that mimic production behaviour. Add human-readable docs that explain the core resources, expected search parameters, error handling and versioning. Include a set of negative tests—invalid codes, missing units, out-of-range timestamps—so partners can harden their calls before going live. This saves everyone time and demonstrates professionalism.
Next, treat mappings and transformations as first-class artefacts. Document how local codes at each site map to standard terminologies, who signed them off, and when they were last reviewed. Store mappings in source control, generate reports from them, and test them automatically. When a site changes its local lab catalogue, you will spot the impact early rather than during a weekend on-call. Interoperability thrives on such boring excellence.
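The "boring excellence" of mappings as first-class artefacts can be sketched as a mapping file that carries sign-off metadata plus an automated check, of the kind you would run in CI, that flags any code in a fresh site catalogue with no approved mapping. The file shape, names and dates here are illustrative:

```python
# In practice this would be a versioned file in source control, not a literal.
MAPPING = {
    "reviewed_by": "clinical-safety-officer",
    "reviewed_on": "2025-09-01",
    "entries": {"GLU": "2339-0", "K": "2823-3"},  # local code -> LOINC
}

def unmapped_codes(site_catalogue: list[str], mapping: dict) -> list[str]:
    """Return local codes in the site's current catalogue that have no approved mapping."""
    return sorted(set(site_catalogue) - set(mapping["entries"]))

# The site adds a new assay; the check catches it before a weekend on-call does.
print(unmapped_codes(["GLU", "K", "HBA1C"], MAPPING))  # ['HBA1C']
```

Reports generated from the same file give informatics teams an auditable answer to "who mapped this, and when was it last reviewed?".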
You will also need to decide how to manage consent and purpose. In many workflows, consent is implicit through direct care, but for research, secondary use or patient-facing apps, explicit consent and transparent privacy notices are essential. Design your data flows so that consent state travels with the record and is checked at the point of access. Provide patients and clinicians with clear views of who accessed what and when, and give them a route to challenge or revoke. Doing this well pays dividends far beyond compliance: it builds the user trust that all successful digital health products depend on.
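Checking consent state at the point of access, rather than trusting upstream filtering, might be sketched like this. The record shape and the purpose vocabulary (`direct-care`, `research`) are invented for illustration; a production system would resolve these against its consent model and log every decision for audit:

```python
def access_allowed(record: dict, purpose: str) -> bool:
    """Release a record only if its consent state covers the requested purpose."""
    consent = record.get("consent", {})
    return (consent.get("status") == "active"
            and purpose in consent.get("purposes", []))

record = {"patient": "example-123",
          "consent": {"status": "active", "purposes": ["direct-care"]}}
print(access_allowed(record, "direct-care"))  # True
print(access_allowed(record, "research"))     # False
```

Because the check runs wherever data is released, revoking consent takes effect immediately across every flow, rather than depending on each integration remembering to filter.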
Finally, remember that interoperability is as much about people and process as it is about code. Run show-and-tells with the clinical and informatics teams who will rely on your interfaces. Share your roadmap, invite feedback on your resource models, and listen when a ward clerk explains how a small change in your payload will save them ten minutes per patient. This is not just stakeholder management; it is requirements discovery. By taking those workflows seriously, you will make better technical decisions and win allies who champion your product.
Is your team looking for help with digital health interoperability? Click the button below.
Get in touch