Written by Technical Team | Last updated 17.10.2025 | 21-minute read
Digital health is now the front door to care for millions of people. From booking vaccinations and repeat prescriptions to remote monitoring, triage and long-term condition management, the experience patients and clinicians have is increasingly mediated by software. In that context, accessibility is not simply a compliance exercise; it is a clinical safety issue, a question of equity, and a marker of organisational maturity. When a person cannot use your digital service, they are at risk of missed care, delayed diagnoses, medication errors, and avoidable stress. When a clinician cannot use it, the system risks workarounds, reduced data quality and burnout.
This article sets out how to meet the Web Content Accessibility Guidelines (WCAG) alongside the NHS Service Standard and the NHS Design System, translating policy into practical design and delivery. It draws on lived realities from product teams in the UK health system: the rhythms of discovery and alpha, the constraints of legacy EPRs, the ethical scrutiny of research with vulnerable users, and the fact that every service must work in less-than-ideal conditions—on small screens, with variable connectivity, under stress.
Accessibility is often framed as a legal obligation. In the UK, public sector websites and mobile apps must meet accessibility requirements, and most organisations align their targets with WCAG at the AA level. That obligation matters, but focusing only on it can produce a “checklist culture” that misses the clinical and emotional realities of health journeys. Accessibility in digital health is ultimately about safety, trust and dignity. Safety, because inaccessible flows can cause errors in medicines management or consent; trust, because people must feel confident they are being seen and heard; and dignity, because health contexts are intimate, and requiring someone to struggle through a poorly designed interface can be demeaning at a vulnerable moment.
Clinical relevance comes into sharp relief when you consider cognitive load. Many users will encounter your service while unwell, anxious, or caring for someone else. They may be in pain. They may be operating a screen reader, using a switch device, or navigating with a keyboard as a result of fine motor limitations. Others will have temporary or situational impairments: a clinician using a dim workstation on a night shift, a patient in a bright A&E waiting area, a parent holding a baby while completing an online questionnaire one-handed. Accessibility is what ensures your service remains robust across all of these contexts.
There is also an operational case. Accessible design reduces support calls, improves completion rates, and lowers the cost of change by catching issues early. Clear content, consistent components, semantic markup and inclusive research lead to fewer edge cases and a simpler architecture. When your user interface and content are rigorous, your analytics become more trustworthy because behaviour is less “noisy” and more representative. That, in turn, strengthens your ability to iterate safely and quickly—critical in a domain where policy and clinical practice can change.
For suppliers, accessibility is a differentiator. Commissioners and trusts increasingly assess products not just on features but on how safely and equitably those features can be used by diverse populations. Strong accessibility practices, evidenced with meaningful artefacts—like research insights with disabled users, conformance statements mapped to user journeys, and test coverage across assistive technologies—can be decisive in procurement. In other words, accessible products are easier to buy, easier to deploy, and easier to keep.
WCAG provides testable success criteria organised around four principles: perceivable, operable, understandable and robust. In digital health, the value lies in translating those criteria into repeatable patterns that survive the complexity of clinical journeys. Rather than treat WCAG as a set of atomised rules, anchor them to your core tasks: onboarding, identity verification, consent, symptom capture, results viewing, messaging, prescribing, task lists and handovers. This reframing lets teams reason about actual risks and outcomes, not just boxes to tick.
Start with semantics and structure. Use proper headings, lists, landmarks and labels so that assistive technologies can provide a coherent map of the page or the view. In a triage flow, for instance, ensure each step has a clear heading that describes the action (“Tell us about your symptoms”) and that in-page progress is announced to screen readers. Input components must include programmatic labels; placeholder text is not a label. Validation messages need to associate with the relevant fields and be announced when they appear. This is where WCAG’s focus on name, role and state becomes concrete: if the role of a component is non-standard or mis-declared, assistive technology cannot convey it accurately, and a user may misunderstand a critical step.
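One way to make the field-level association concrete is to generate it rather than hand-wire it. The sketch below (our own helper names; the markup shape is illustrative, not a prescribed NHS pattern) renders a labelled input whose hint and error are linked through `aria-describedby`, so assistive technology announces them with the field:

```typescript
// Render a labelled form field whose hint and error text are
// programmatically associated via aria-describedby.
// Names (renderField, Field) are illustrative, not from any framework.
interface Field {
  id: string;
  label: string;
  hint?: string;
  error?: string;
}

function renderField(f: Field): string {
  // Collect the ids of whatever descriptive text exists for this field.
  const describedBy = [
    f.hint ? `${f.id}-hint` : null,
    f.error ? `${f.id}-error` : null,
  ]
    .filter(Boolean)
    .join(" ");

  return [
    // A real <label> with for=, never a placeholder standing in for one.
    `<label for="${f.id}">${f.label}</label>`,
    f.hint ? `<span id="${f.id}-hint">${f.hint}</span>` : "",
    f.error ? `<span id="${f.id}-error">Error: ${f.error}</span>` : "",
    `<input id="${f.id}" type="text"${
      describedBy ? ` aria-describedby="${describedBy}"` : ""
    }>`,
  ]
    .filter(Boolean)
    .join("\n");
}
```

Because the association is generated, a field can never ship with an error message that assistive technology cannot find.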
Colour and contrast take on added importance in clinical contexts where your interface competes with environmental variables—harsh lighting, privacy screens, and stress. Meeting contrast ratios for text and interactive elements is the baseline; designing for dark mode, high contrast mode and forced colours is the reality of supporting users in hospitals and at home. Icons should come with text or accessible names that disambiguate their meaning. “Flag” or “warning” without context can be ambiguous; “Allergy recorded: penicillin (severe)” is not only accessible but clinically safer.
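The contrast ratios mentioned above are defined precisely in WCAG, so they can be checked in code rather than by eye. This sketch implements the WCAG 2.x sRGB relative-luminance formula and the contrast ratio derived from it (the helper names are ours; the constants come from the WCAG definitions):

```typescript
// WCAG 2.x relative luminance and contrast ratio for sRGB colours.
type Rgb = [number, number, number];

function relativeLuminance([r, g, b]: Rgb): number {
  // Linearise each 0-255 channel per the WCAG sRGB formula.
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(a: Rgb, b: Rgb): number {
  const [lighter, darker] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x,
  );
  // Ratio ranges from 1:1 (identical) to 21:1 (black on white).
  return (lighter + 0.05) / (darker + 0.05);
}

// AA thresholds: 4.5:1 for body text, 3:1 for large text and UI components.
const meetsAaBodyText = (a: Rgb, b: Rgb) => contrastRatio(a, b) >= 4.5;
```

A check like this belongs in design-token tests, so a palette change that quietly drops a pairing below 4.5:1 fails the build instead of reaching a ward.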
Keyboard operation is non-negotiable. Many clinical systems are used at speed with keyboards due to efficiency and hygiene. Every interactive element must be reachable and operable via keyboard with a visible focus indicator. Focus order should follow the visual and logical structure—especially in forms and dashboards where modal dialogues or expandable panels are common. When a modal opens, focus should move into it, and when it closes, return to a logical point. If your interface traps focus, you force users to reload or abandon the task, which is both frustrating and risky during time-critical work.
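The wrap-around behaviour of a well-behaved focus trap is simple enough to isolate as pure logic and unit test, independently of the DOM plumbing that applies it. A minimal sketch (function name is ours), assuming the modal's focusable elements are held in document order:

```typescript
// Pure wrap-around logic for a modal focus trap: given the index of the
// currently focused element among `count` focusable elements, return the
// index that should receive focus after Tab (or Shift+Tab).
function nextFocusIndex(
  count: number,
  current: number,
  shiftKey: boolean,
): number {
  if (count === 0) return -1; // nothing focusable: caller should not trap
  const step = shiftKey ? -1 : 1;
  // Adding `count` before the modulo keeps the result non-negative,
  // so Shift+Tab from the first element wraps to the last.
  return (current + step + count) % count;
}
```

The DOM layer then listens for Tab keydown inside the open dialogue, calls this, and focuses the element at the returned index; on close, it restores focus to the triggering control.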
Error handling requires special care in health services where mistakes can have consequences. Error messages must be specific, respectful and helpful. Avoid generic statements like “Something went wrong.” Instead, provide an accessible error summary that appears before the form, lists the issues with direct links, and is announced to assistive technologies. Inline messages should explain how to fix the problem, not just that one exists. For example, “Enter your NHS number in the format 123 456 7890” is far more useful than “Invalid NHS number.” Aligning validation with how humans actually enter data—accepting spaces, hyphens and different cases—removes friction without compromising integrity.
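Accepting spaces and hyphens while still validating rigorously is straightforward to express in code. The sketch below normalises an NHS number and applies the published Modulus 11 check-digit algorithm (the function name is ours; the weights and check-digit rules are from the NHS number specification, so verify against it before relying on this):

```typescript
// Normalise an NHS number as humans type it, then validate the
// Modulus 11 check digit. Returns the bare 10 digits, or null if invalid.
function normaliseNhsNumber(input: string): string | null {
  // Be lenient about presentation: strip spaces and hyphens.
  const digits = input.replace(/[\s-]/g, "");
  if (!/^\d{10}$/.test(digits)) return null;

  // Weight the first nine digits 10 down to 2, per the NHS number spec.
  const weights = [10, 9, 8, 7, 6, 5, 4, 3, 2];
  const sum = weights.reduce((acc, w, i) => acc + w * Number(digits[i]), 0);

  let check = 11 - (sum % 11);
  if (check === 11) check = 0;
  if (check === 10) return null; // 10 is never a valid check digit

  return check === Number(digits[9]) ? digits : null;
}
```

The validation message can then stay focused on format ("Enter your NHS number in the format 123 456 7890") because formatting variation is no longer a failure mode.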
To ground these ideas in day-to-day work, map common WCAG success criteria onto the components and flows you actually ship, so each criterion is tied to a concrete journey, such as symptom capture or results viewing, rather than remaining an abstract rule.
Patterns matter because they scale. When you embed these behaviours into your design system, your teams inherit accessibility by default rather than reinventing it for every feature. That is how you sustain quality across releases and vendors.
The NHS Service Standard sets the bar for building and running healthcare services, covering discovery, design, technology, data, and operational considerations. Accessibility is threaded throughout: conduct inclusive research; design with data; make sure everyone can use the service; and operate a service that can be sustained. The NHS Design System complements this with production-ready components, patterns and guidance that already embody inclusive principles. Using them is not only efficient but reduces risk by building on tested, familiar interactions.
In practice, aligning with the Service Standard means treating accessibility as a workstream across discovery, alpha, beta and live. In discovery, you are learning whether the service could meet user needs at all. That includes understanding the accessibility barriers people face today: inaccessible letters, noisy waiting rooms, locked-down devices, or low digital confidence. In alpha, you are exploring solutions; this is the moment to test multiple patterns with assistive technology users and to validate whether your core tasks remain usable under constraints. In private and public beta, you harden the patterns, extend test coverage across devices and assistive tech, and build the operational muscles to fix issues quickly. In live service, you continue to gather evidence, track regressions, and maintain the habit of shipping accessible changes.
The NHS Design System provides components like buttons, radios, checkboxes, date inputs, notification banners, task lists and pagination, with guidance on content, keyboard behaviour and error handling. Adopting these components gives you a head start because they are designed for clarity and accessibility. But adoption is not a guarantee. Teams must still integrate components correctly, maintain semantic structure around them, and resist the temptation to “fork” components with custom behaviour that breaks accessibility. Where you do introduce bespoke elements—say, a complex results viewer or a clinician handover board—treat them as first-class components: define the name, role and state, ensure keyboard support, and document usage in your local design system.
The Service Standard also expects transparency. Public sector services should publish an accessibility statement that honestly reflects the current level of conformance, known limitations, and a route for users to get support. In health, do not bury this statement; link it from your footer, make it readable, and keep it current as you fix issues. This is not a legal nicety; it is part of building trust, especially for people who have been failed by digital services before. If you know a particular component does not work with a specific screen reader on a certain browser, say so and provide alternatives where possible.
Integrations and device variability are realities in health. Your service may sit within a trust intranet, launch from a clinical portal, or be framed inside another vendor’s product. Some environments restrict browsers, block scripts, or enforce unusual zoom settings. Mobile users may be on older devices with reduced processing power. Meeting the standard in practice means testing where your users actually are. That might mean setting up test rigs with virtual desktops that match trust configurations, testing with forced colours and high contrast modes, and exercising your service on low-end Android phones used by field workers or carers. Accessibility is not just a lab exercise; it is a field sport.
Inclusive research is the foundation of accessible digital health services. It ensures you are solving real problems for real people, not designing for an imagined “average” user. Crucially, it must respect ethical and safeguarding considerations while still being practical and timely for delivery teams. In health, research often happens under pressure; clinics and wards are busy, participants may be fatigued, and appointments cannot be delayed. Thoughtful planning, clear protocols and humane methods make inclusive research both possible and safe.
Start with purposeful recruitment. If your service will be used by people with low vision, cognitive differences, limited literacy, or who rely on screen readers, then those people must be in your research from the outset and throughout iterations. The same is true for clinicians with dyslexia or RSI, pharmacists working in noisy dispensaries, and community nurses using devices outdoors. Recruitment is not about quotas; it is about ensuring that the needs and behaviours that most stress your design are represented, so your patterns are resilient.
Consent and safeguarding are paramount. Information sheets and consent processes should be accessible and available in multiple formats, with opportunities for questions and breaks. Be explicit about what will be recorded, how data will be used, and how to withdraw. In moderated sessions, particularly with vulnerable users, pace the activities and allow for silence. Many accessibility issues surface only when people have time to think aloud while they navigate.
Mixed-method approaches work well in health settings. Diary studies can surface the realities of medication management or symptom tracking over time; contextual inquiry reveals the environmental constraints that lab testing cannot capture; unmoderated tests can scale quick checks of content variants; and moderated sessions with assistive technology users provide depth that analytics alone can’t. Consider pairing qualitative evidence with telemetry that respects privacy: completion rates, error frequencies, and the time people spend on critical steps can indicate where cognitive load is too high.
To embed inclusion into day-to-day practice, institutionalise a small set of repeatable habits and artefacts, and revisit them each iteration so that inclusive research is routine rather than exceptional.
Co-design goes a step further by involving participants in creating solutions. Workshops where patients, carers and clinicians sketch flows, rearrange components on screens, and rewrite content can unlock perspectives that designers and product managers miss. In digital health, co-design also helps negotiate trade-offs between safety checks and usability. For instance, patients might prefer fewer steps during symptom capture, but clinicians may need structured data for triage; together, you can prototype patterns that collect the right information while minimising burden, such as progressive disclosure and smart defaults.
The emotional dimension of health services means language matters. Content designers should test microcopy carefully—not just for comprehension but for tone. Words like “fail”, “invalid” or “error” can be discouraging or even shaming in a health context. Encourage content that explains what will happen next, offers reassurance, and provides routes to human support when needed. Accessibility is as much about how something is said as it is about where focus goes after a modal closes.
Accessible digital health services are the product of ongoing governance rather than heroic one-off efforts. Governance, in this sense, is not bureaucratic overhead but the system by which teams make and keep promises to patients, clinicians and commissioners. It clarifies who is accountable, how evidence is gathered and used, and how accessible quality is protected as your service evolves. Without it, accessibility regresses as soon as deadlines bite or staff turn over.
Start with ownership. Every product or service should have a named accessibility owner, usually the product manager, who ensures accessibility is considered in roadmaps, sprint goals and the definition of done. Designers, engineers and testers carry their craft responsibilities, but that owner is accountable for balancing priorities and ensuring trade-offs are made consciously. The right kind of governance creates a cadence: accessibility considerations appear in product triage; design critiques include an accessibility lens; code reviews check for semantic correctness; and release checklists include assistive technology smoke tests.
Tooling amplifies the team’s attention. Automated checks, linters and CI pipelines can catch common issues early: non-unique IDs, missing form labels, insufficient colour contrast, ARIA misuse, inaccessible names on controls. They are necessary but not sufficient. Layer them with manual testing against critical user journeys using real assistive technologies—screen readers, voice control, magnifiers and keyboard-only navigation—on the browsers and devices your users actually have. Record short videos of expected behaviour for complex components to make manual testing faster and more consistent. Where your service integrates with clinical systems or sits behind single sign-on, build test paths that mirror those contexts so accessibility isn’t broken by integration points.
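To illustrate the kind of check a CI linter performs (and its limits), here is a deliberately minimal static scan for `<input>` elements that have neither a matching `<label for>` nor an `aria-label`. It is a sketch of the idea only; real tooling such as axe-core parses the DOM properly and covers far more:

```typescript
// Toy static check: flag <input> tags in an HTML string that lack an
// accessible name (no matching <label for>, no aria-label/aria-labelledby).
// Illustrative only; production checks should use a real accessibility engine.
function findUnlabelledInputs(html: string): string[] {
  // Collect every id referenced by a <label for="...">.
  const labelledIds = new Set(
    [...html.matchAll(/<label[^>]*\bfor="([^"]+)"/g)].map((m) => m[1]),
  );

  const problems: string[] = [];
  for (const m of html.matchAll(/<input\b[^>]*>/g)) {
    const tag = m[0];
    if (/\baria-label(ledby)?="/.test(tag)) continue; // has an ARIA name
    const id = tag.match(/\bid="([^"]+)"/)?.[1];
    if (!id || !labelledIds.has(id)) problems.push(tag);
  }
  return problems;
}
```

What even a thorough version of this cannot tell you is whether the label makes sense, whether focus order matches the visual flow, or whether a screen reader announces an update at the right moment; that is what the manual journey testing above is for.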
Measure what matters. Instead of counting the number of issues raised by an automated tool, track metrics that reflect user risk and operational reality. Examples include the proportion of critical journeys with complete assistive technology coverage, the time to remediate accessibility regressions after release, the percentage of content patterns authored with plain language principles, and user-reported satisfaction from people using assistive technologies. Pair these with qualitative artefacts: annotated screenshots that show name/role/state mappings, videos of keyboard flows, and excerpts from research that highlight where cognitive load was reduced. These artefacts become living documentation for onboarding and audits.
Finally, plan for continuity. Accessibility knowledge can be fragile when it lives in individuals. Create a home for your local design system that extends the NHS Design System with your bespoke components, and treat accessibility notes as first-class documentation. Include guidance on when to use a component, how to label it, and how it behaves with keyboard and screen readers. Provide code examples and content guidelines side by side. When teams ship features, require them to add or amend component docs if they changed behaviour. Continuous assurance also means scheduling periodic audits that focus less on pass/fail and more on risk-based testing of end-to-end journeys. In health, the journey boundaries—identity verification, consent, clinical decision points, and handover—are where missteps can have the greatest impact.
Health services often share a set of core flows. Designing these with accessibility at the centre reduces risk and sets a strong foundation for everything else.
Many health services rely on multi-factor authentication, NHS login, or trust-managed identity providers. Ensure that authentication steps are operable via keyboard, that one-time passcodes can be entered flexibly (with or without spaces), and that error messages clearly explain what to do if codes expire or devices are unavailable. Provide clear alternatives for users who cannot use a mobile device for codes, and consider rate-limiting patterns that do not block people with cognitive or motor impairments who may need extra attempts.
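Flexible passcode entry is another place where a tiny amount of code removes a disproportionate amount of friction. A sketch (function name ours; assumes alphanumeric codes, so adjust to your issuer's format) that compares codes leniently:

```typescript
// Compare a user-entered one-time passcode leniently: ignore spaces and
// hyphens and differences of case, then compare strictly.
// Assumes alphanumeric codes; tighten or relax for your identity provider.
function otpMatches(entered: string, expected: string): boolean {
  const clean = (s: string) => s.replace(/[\s-]/g, "").toUpperCase();
  return clean(entered) === clean(expected);
}
```

Pasting "123 456" from an SMS, or typing it with a hyphen, then succeeds on the first attempt, which also takes pressure off whatever rate-limiting policy sits behind it.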
Consent flows should be understandable without legal training. Use plain language that explains what data will be collected, who will see it, and how to withdraw consent. Support screen readers by ensuring that expandable sections (for more detail) are programmatically associated with their controls. For time-sensitive care, adopt patterns that allow partial consent with clear prompts to revisit decisions later, reducing abandonment without compromising autonomy.
Complex forms are common in triage and long-term condition monitoring. Break long forms into steps with clear, descriptive headings and a progress indicator that communicates position to assistive technologies. Offer examples, hints and units next to fields. For questions that might trigger safeguarding concerns, provide sensitive copy and a clear path to human support. Use conditional questions judiciously and ensure that revealed content follows immediately in the DOM so users using screen readers or keyboard navigation perceive the change.
When presenting lab results or messages, prioritise clarity and context. Use tables only when they communicate relationships; otherwise, favour lists with strong headings. Provide readable ranges and plain-English explanations for clinical terms where appropriate. Ensure that notifications about new results or messages respect user preferences and are accessible. For example, email notifications should include informative subject lines and content that makes sense when read aloud, and in-app banners should be focusable and dismissible.
Clinician dashboards often display dynamic queues of tasks. Make filters, sorting and pagination accessible via keyboard, with clear headings and live region announcements that are polite and do not steal focus unexpectedly. Ensure that bulk actions are labelled and can be operated without a mouse. Provide visible and programmatic indicators of priority and status that do not rely solely on colour.
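One practical way to keep live-region announcements polite on a busy dashboard is to coalesce rapid updates before they reach the `aria-live` element. The sketch below (class name ours) holds the queueing logic as pure, testable code; the consuming DOM code would write `flush()`'s result into an `aria-live="polite"` region on a short timer:

```typescript
// Coalescing queue for polite live-region announcements: rapid updates are
// merged so a screen reader hears one concise summary, not a flood.
class AnnouncementQueue {
  private pending: string[] = [];

  push(message: string): void {
    // Drop consecutive duplicates ("3 tasks", "3 tasks") to avoid repetition.
    if (this.pending[this.pending.length - 1] !== message) {
      this.pending.push(message);
    }
  }

  // Return everything queued as one sentence-separated string and reset.
  flush(): string {
    const text = this.pending.join(". ");
    this.pending = [];
    return text;
  }
}
```

Pairing this with `aria-live="polite"` (rather than `assertive`) means the announcement waits for the screen reader to finish speaking and never steals focus.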
Plain language is sometimes dismissed as a stylistic preference. In health, it is a safety mechanism. People make better decisions when they understand what is being asked and what will happen next. Content designers should craft microcopy, labels, hints and help content that are concise and purposeful. Avoid creating cognitive cliffs where a single misunderstood word causes abandonment. Favour short sentences, concrete verbs, and active voice. When technical terms are unavoidable, provide definitions or examples nearby.
Good content also means good information architecture. Do not bury key actions under ambiguous headings. Group related tasks logically and use descriptive link text that makes sense out of context. For instance, “View blood test results for April” is better than “View results”. Ensure that content adapts gracefully to larger text sizes and reflows without loss of functionality. If your service is available in multiple languages, plan localisation alongside accessibility: different languages may expand text by 20–30 per cent, and some scripts require careful consideration of line length and spacing for readability.
Error messages and confirmations deserve the same care. Replace “There was a problem” with specific, kind guidance that focuses on the task: “We couldn’t check your details, possibly due to a slow connection. Try again, or save and return later.” Confirmation pages should do more than celebrate success; they should provide next steps and ways to get help if something doesn’t look right. In all cases, write for screen readers as you write for sighted users—avoid duplicating information unnecessarily, and ensure that visually hidden text adds value rather than noise.
Engineering decisions underpin accessible experiences. On the front-end, choose frameworks and libraries that support semantic HTML and accessible patterns, rather than libraries that prioritise aesthetics over behaviour. Enforce linting rules and pre-commit hooks that catch basic issues. Prefer native HTML elements for controls—buttons, links, inputs—over custom components, and when custom components are necessary, implement them according to the ARIA Authoring Practices with careful attention to keyboard interaction models.
On the back-end, design APIs that support accessible front-ends: provide consistent validation messages, meaningful error codes, and predictable response shapes. Performance matters to accessibility. Slow interfaces increase cognitive load and lead to timeouts for users who need extra time to complete tasks. Optimise for fast first meaningful paint, minimise cumulative layout shift, and avoid long tasks that block the main thread—especially on lower-spec devices common among carers and community staff. Provide fallbacks for JavaScript-heavy interactions so that critical tasks remain usable if scripts fail or are restricted.
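"Predictable response shapes" can be pinned down with a shared type. The sketch below (all names illustrative, not from any NHS API specification) defines one error envelope that every endpoint returns, so the front-end can render a single accessible error-summary component from any failed response:

```typescript
// One shared validation-error envelope for every endpoint, so the
// front-end's error summary component never needs per-endpoint parsing.
// Shape and names are illustrative, not a prescribed NHS API standard.
interface ApiFieldError {
  field: string; // id of the form field the error belongs to
  code: string; // stable machine-readable code, e.g. "format"
  message: string; // human-readable, task-focused guidance
}

interface ApiErrorResponse {
  status: number;
  errors: ApiFieldError[];
}

function validationError(errors: ApiFieldError[]): ApiErrorResponse {
  return { status: 422, errors };
}
```

Because `field` carries the form field's id and `message` is already written for humans, the response maps directly onto an error summary whose links jump to the offending inputs.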
Testing should reflect the reality of your stack. Unit tests can verify ARIA attributes and focus behaviour for components; integration tests can simulate keyboard navigation through flows; end-to-end tests can assert the presence of accessible names and roles for critical controls. However, do not assume automated tests capture everything. Manual checks with screen readers, voice control and magnification remain essential, particularly for custom controls and dynamic updates.
Digital health ecosystems are rarely built by a single team. Trusts and ICBs procure and integrate multiple systems—EPRs, LIS, e-prescribing, appointment systems, portals and apps. Accessibility must therefore be part of procurement and contract management. Specifications should require conformance to WCAG at the agreed level, but also insist on evidence: accessibility statements, results of independent audits, descriptions of assistive technology coverage, and plans for remediation. Importantly, contracts should make accessibility a living obligation with service levels, not a one-off deliverable.
When integrating systems, insist on accessible SSO flows and embedded components. A vendor’s standalone product may be accessible, but its embedded version inside your portal may not inherit the necessary semantics or focus management. During pilots and rollouts, include accessibility checks in entry and exit criteria and in “go/no-go” gates for release. Work with vendors to build shared roadmaps for accessibility improvements, and recognise that progress is faster when teams collaborate on design and code, not just documents.
Meeting WCAG and NHS standards is the starting line, not the finish. To sustain accessibility across the life of a digital health service, treat it as a product capability that compounds over time. Document what works, codify patterns, and make accessible defaults the easy path. Invest in inclusive research and keep your panels warm. Publish honest accessibility statements and update them as you improve. Integrate accessibility checks into your pipelines and rituals so that regressions are caught close to the work. When you procure or integrate, hold partners to the same standard, and collaborate on improvements rather than attempting to police them from the sidelines.
Most importantly, keep the people who rely on your service at the centre of your decisions. Accessibility is not a checklist; it is a commitment to design for the full range of human ability and circumstance. In digital health, that commitment is the difference between a product that merely exists and a service that genuinely delivers care.
Is your team looking for help with digital health design? Click the button below.
Get in touch