Rapid, mass adoption of digital health technology

“Fools rush in where angels fear to tread” – thoughts on rapid, mass adoption of digital health technology with law firm Bevan Brittan LLP.

This year has been an extraordinary year – a true annus horribilis for many, as the COVID-19 pandemic has ravaged populations, tanked economies, and hamstrung personal liberties. People have had to endure loss, hardship, and restrictions not seen since the Second World War and its aftermath. Yet, in one respect, 2020 could also be described as an annus mirabilis. For just as the poet Dryden, in his ‘year of miracles’, was able to see beyond the great tragedies of 1665-66 – the Great Plague and the Fire of London – to focus on the military successes of England’s naval war against Holland, so too should it be possible for us to look beyond the unquestionable awfulness of the pandemic to focus on some of the minor wonders that have taken place this year.

Not least of these has been the truly remarkable speed with which healthcare systems have adopted and disseminated large-scale digital transformation. While the revolutionary promise of digital health has been evangelised for a number of years now, particularly in the fields of primary and outpatient care, wide-scale uptake has generally been slow, arduous, and resisted by what, at times, has felt like nothing short of the forces of twenty-first-century Ludditism.

In the UK, for all the ambitions of the Wachter report, the placing of ‘digital first’ at the heart of the NHS Long Term Plan, and the creation of NHSX, wholescale digitisation of healthcare would have seemed like castles in the sky to anyone trying to get an appointment with their GP on a Monday morning (the tyranny of the 8am telephone race to get through to the switchboard) or being asked to present at their pharmacist with a hard copy of a repeat prescription.

However, that was all B.C. (before COVID), as we now say. For the pandemic changed all this in a matter of weeks. Necessity, of course, is the mother of invention, and when people are literally risking their health, if not their lives, simply by being in close proximity to one another, the benefits of remotely delivered digital healthcare seem obvious: patients, healthcare workers, and the broader community are all afforded a degree of protection that cannot easily be achieved when care is delivered via an in-person model. Hence such staggering statistics as:

  • A drop in face-to-face GP appointments from 80% in 2019 to just 7-8% in mid-April 2020, with 100% remote triage [1]
  • A 111% rise in registrations to the NHS App in March 2020 [2]
  • A 97% rise in the number of repeat prescriptions requested via the NHS App in March 2020 [3]

There can be no doubting the transformative impact of the pandemic and the rush to digitisation it has incited. But as with all rushes – gold, oil, tech – there will be winners and losers, successes and mistakes.

What I want to do in this article is consider some of the potential problems associated with rapid, mass adoption of digital healthcare technology. I should say here that, far from being a naysayer of new healthcare technology, I consider myself an early adopter in terms of my own personal consumption habits, as well as someone who in my professional life works closely with digital health companies, advising them on many aspects of their operations, in particular clinical risk and patient safety issues. So, my observations below come from the standpoint of an enthusiast, but one who has a cautious scepticism about unbridled digitisation of healthcare and the impact this might have on broader society.

The limitations of remote care

Another revealing statistic to emerge from the pandemic is the reported 42% drop in A&E attendances in May 2020 as compared with May 2019 [4]. There are likely a multitude of reasons for this, including less trauma (because of, for example, significantly reduced vehicle use and less contact sport being played), as well as less non-COVID illness and disease circulating in the community because of reduced societal contact. However, there is undoubtedly a cohort of people who will have been too frightened to attend A&E and who may be finding other ways of getting treated without going to hospital, for example via video consultations with their GP.

Here we encounter perhaps one of the most obvious limitations of digital healthcare: the digital GP cannot palpate, percuss, or auscultate, much less take blood pressure, pulse, or oximetry readings. It is true that much of medicine relies on the history of symptoms and on what is observable to the doctor the instant a patient enters the consultation room (whether physical or virtual), but the digital doctor is doubly reliant on, and perhaps doubly hampered by, the visual and the verbal. Objective signs of illness are much more difficult to assess in a remote setting. Most current guidance [5] suggests that remote consultations are suitable only for people who do not need a physical examination, and most independent remote primary care platforms specifically provide that they are not suitable for use in medical emergencies such as chest pain, severe shortness of breath, suspected stroke, or severe bleeding. However, there are bound to be cases where it is not obvious that a patient is suffering a medical emergency, and the limitations of a digital consultation might mean that this is not picked up as easily as it would be when a straightforward test or examination could be performed in person. Reassurance might be given, but that reassurance may prove unwarranted – sometimes fatally so. Of course, as technology advances and remote consultations become increasingly supported by biometric data that patients can upload from wearables, the data available to support diagnoses will improve; but we are not there yet, and in the meantime the potential for incorrect or missed diagnoses remains.

Remote consultations also present challenges with respect to confidentiality. Put shortly, when consulting with patients remotely, it is difficult to know who else is in the room [6]. While digital doctors can ask patients whether they are alone, they cannot guarantee that the answer given is truthful. Consider, for example, the female patient asked to recount her prior obstetric history in front of a new, coercively controlling partner who may be sitting just out of shot of the webcam. Such situations could cause untold harm, and may never even be revealed to the digital doctor.

Digital exclusion

Eleven million people (20% of the UK population) lack basic digital skills or do not use digital technology at all, and many of them are likely to be older, less educated, and in poorer health than the rest of the population [7]. Any rapid move to a digitised health system is necessarily going to leave some people behind, whether because they do not have the necessary skills, lack access to IT infrastructure, or simply do not have the confidence to navigate the online world. There is a real risk of a two-tier system here. The very groups who are less likely to engage with digital healthcare are also those more likely to suffer chronic health problems such as diabetes, cardiopulmonary disease, hypertension, and obesity, as well as poor mental health. For such groups, health inequalities and increased isolation may well be exacerbated by the rush to digital as a visit to the GP becomes ever more complicated, if not impossible. For many elderly patients, the weekly visit to the GP practice or the pharmacy is not just a medical necessity but a social one, without which people can feel incredibly isolated. It is vital that innovative strategies are put in place to mitigate these problems.

Commercialising data

At the heart of all digital health technology is data, and while data-driven health tech can undoubtedly promise huge benefits to patients, healthcare professionals, and society as a whole, it also raises vital questions about security, trust, and equity. While patient data has been described in the UK as a ‘unique source of value for the nation’ [8], the commercialisation of such data is a controversial topic. Opportunities for revenue generation must be balanced against the risks of cyber threats (such as WannaCry), state surveillance (as has been alleged during the development of the coronavirus vaccine [9]), and privacy breaches leading to harm to individuals or institutions [10]. In research [11] which included discussion with patient advocacy groups, citizens’ juries, and a nationally representative survey of over 2,000 people, the key findings included that:

  • All partnerships between the NHS and third parties that include access to data held by the NHS must aim to improve health and care for everyone
  • NHS bodies need support and guidance to negotiate fair terms for agreements with third parties
  • Public accountability, good governance, and transparency are crucial to maintaining public confidence
  • The public should have a say in how NHS data is used

The message is clear. While patients may be prepared to accept the use and commercialisation of anonymised health data for the benefit of broader society, they will be less accepting, and less forgiving, of data use that lacks transparency and is intended to advance corporate interests and shareholder returns. Another red-line concern for patients is the potential for their data to be used by insurance companies [12] or in situations where it might result in other forms of data-driven discrimination, to which I turn next.

Data-driven discrimination

Some might assume that the algorithms and machine learning that underpin many digital health solutions would be inherently more objective and less likely to be tarnished by grubby human prejudice. Not so. Broadly speaking, algorithms are no more than encoded procedures or instructions; it is data that is the foundation of everything, and data can discriminate just as well as people can – because, for example, it is incomplete, poorly selected, unrepresentative, outdated, or just plain wrong [13]. And algorithms can even perpetuate those biases.

A learning algorithm is designed to identify patterns in training data, but if the training data already incorporates and reflects existing social biases, the algorithm is likely to learn and sustain them. An example can be found in AI solutions designed to reduce ‘no shows’ or ‘DNAs’. Inputs into such algorithms can include personal characteristics such as ethnicity, socio-economic class, religion, and body mass index, as well as previous ‘no show’ history. But if a particular patient falls into a group that is statistically more likely not to show up to medical appointments, is flagged as such by the AI, and the clinic into which the patient is booked is then predictively overbooked to maximise clinic use, there is real potential for harm: if both the originally scheduled patient and the overbooked patient attend (i.e. the AI prediction is wrong), clinic time has to be stretched to accommodate two patients in a slot meant for one [14], as the sketch below illustrates. To compound the problem, those who have previously experienced stigma and discrimination in healthcare settings are more likely to distrust new digital health services [15].
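To make that dynamic concrete, here is a minimal, hypothetical sketch in Python. It is not taken from any real scheduling system: the group labels, no-show rates, and overbooking threshold are all invented for illustration. The ‘model’ does nothing more than echo group-level history, which is precisely how bias gets baked in: every attending patient from the higher-rate group is booked into a shared slot, while patients from the lower-rate group never are.

```python
# Illustrative sketch only: hypothetical groups, rates, and threshold.
import random

random.seed(42)

# Hypothetical historical no-show rates by (anonymised) patient group.
# Group "B"'s higher recorded rate may itself reflect access barriers
# (transport, work patterns, caring duties) rather than patient choice.
HISTORICAL_NO_SHOW_RATE = {"A": 0.10, "B": 0.35}
OVERBOOK_THRESHOLD = 0.30  # slots with predicted risk above this get double-booked


def predicted_no_show_risk(group: str) -> float:
    """A deliberately naive 'model' that simply echoes group-level history."""
    return HISTORICAL_NO_SHOW_RATE[group]


def simulate_clinic(n_slots: int = 10_000) -> dict:
    """Count, per group, how often an attending patient ends up sharing a slot."""
    squeezed = {"A": 0, "B": 0}
    for _ in range(n_slots):
        group = random.choice(["A", "B"])
        overbooked = predicted_no_show_risk(group) > OVERBOOK_THRESHOLD
        attends = random.random() > HISTORICAL_NO_SHOW_RATE[group]
        if overbooked and attends:
            # The prediction was wrong: two patients now share one
            # appointment, so this patient's consultation is squeezed.
            squeezed[group] += 1
    return squeezed


print(simulate_clinic())  # roughly {'A': 0, 'B': ~3250}: the burden falls entirely on group B
```

Note that even if the historical rates are perfectly accurate, the cost of every wrong prediction lands on the group the data already disadvantages – and an appointment spoiled in this way may itself feed back into the very ‘no show’ history the model learns from.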

Legal liability

A thorny issue for digital health is what happens when a decision taken or informed by AI causes patient harm. This is an issue I have long been concerned about [16]. Ultimately, where should responsibility for unintended harm lie? Should fault attach to those who provide or curate the data sets on which the AI relies; those who build and code the AI; those who validate it; those who operate it; or the clinicians whose decisions are supported by it?

The English law of liability has developed over centuries and has had to contend with every conceivable technological development along the way, whether that be railways, air travel, organ transplantation, or the latest endovascular stent grafts. It is often said that the great beauty and elegance of the common law is its fluidity and its ability to adapt to the mores and problems of each age. So, in principle, the law of liability should also be able to cope with emerging digital health technologies. However, new health technologies undoubtedly represent a step change. Because of their complexity, opacity, ongoing self-learning, intelligent adaptability, and autonomy, it can sometimes seem almost impossible to determine why a harm has occurred and who should be held responsible for it. There are also questions about the scale and replicability of harm. If an individual doctor interacting with an individual patient makes a negligent diagnosis or recommends the incorrect treatment, the harm (albeit sometimes catastrophic) will be limited to one individual. The same cannot be said of an algorithm, whose errors can be replicated across every patient it touches. It is little wonder, then, that some have questioned the ability of existing legal frameworks to get to grips with these issues [17].

Conclusion

Digital health is transforming the way healthcare is delivered and consumed, and COVID-19 has accentuated some of the benefits and possibilities of innovation in this field. Yet these new media for delivering healthcare raise novel issues, risks, and challenges that cut to the very heart of who we are as people and societies, and we would be wise not to lose sight of the limitations of digital care and of the need to bring everyone along on this journey. For without trust and buy-in from those most likely to be digitally excluded or to face digital discrimination, digital health risks becoming yet another instrument of division and distrust – and there has been far too much of that lately. So, think back to those early days of the pandemic when we literally applauded the ‘angels’ of society from our doorsteps and balconies each week. Surely we owe those ‘angels’ more than to be errant fools who rush in. None of the problems discussed in this article is insurmountable – and many innovators and policy makers are already tackling them – but they do require thought, caution, and open eyes.

References

  1. P. Lynch and D. Wainwright, ‘Coronavirus: how GPs have stopped seeing most patients in person’, 11 April 2020: https://www.bbc.co.uk/news/uk-england-52216222
  2. NHS Digital: https://digital.nhs.uk/coronavirus/nhs-digital-tech-analytics
  3. Ibid.
  4. J. Davies, ‘NHS performance summary: April – May 2020’, 11 June 2020, Nuffield Trust news release.
  5. See, for example, NHS England and NHS Improvement, ‘Clinical guide for the management of remote consultations and remote working in secondary care during the coronavirus pandemic’, 27 March 2020.
  6. P. Whittaker, ‘Remote consultations may result in a diagnosis, but they can come at a personal cost to the patient’, 3 July 2019, New Statesman.
  7. NHS Digital, ‘Digital inclusion guide for health and social care’, revised version July 2019.
  8. House of Lords Select Committee on AI, ‘AI in the UK: ready, willing and able?’, 16 April 2018.
  9. C. Fox and L. Kelion, ‘Coronavirus: Russian spies target COVID-19 vaccine research’, 16 July 2020: https://www.bbc.co.uk/news/technology-53429506
  10. BBC News, ‘Google DeepMind NHS app test broke privacy law’, 3 July 2017: https://www.bbc.co.uk/news/technology-40483202
  11. Understanding Patient Data & Ada Lovelace Institute, ‘Foundations of fairness: where next for NHS health data partnerships?’, March 2020.
  12. J. Baggenal and A. Naylor, ‘Harnessing the value of NHS patient data’, The Lancet, 16 November 2018.
  13. J. Niklas and S.P. Gangadharan, ‘Data-driven discrimination: a new challenge for civil society’, 10 July 2018: https://blogs.lse.ac.uk/impactofsocialsciences/2018/07/10/data-driven-discrimination-a-new-challenge-for-civil-society
  14. S. Murray et al, ‘Discrimination by AI in a commercial EHR – a case study’, Health Affairs, 31 January 2020: https://www.healthaffairs.org/do/10.1377/hblog20200128.626576/full/
  15. C. Newman et al, ‘Understanding trust in digital health among communities affected by BBVs and STIs in Australia’, UNSW Centre for Social Research in Health, 2020.
  16. D. Morris, ‘When health tech goes wrong: who pays for patient harm in the world of health apps?’, Digital Health Legal, April 2018.
  17. D. Morris, ‘Who’s to Blame? Digital healthcare and issues about liability when things go awry’, Bevan Brittan LLP, 17 April 2020: http://www.bevanbrittan.com/insights/articles/2020/whos-to-blame-digital-healthcare-and-issues-about-liability-when-things-go-awry/

Dan Morris
Partner and Digital Health Lead
Bevan Brittan LLP

Special Report Contact Details
Contact: Dan Morris
Organisation: Bevan Brittan LLP
