In his newsletter today (The Top 10 Dangers of Digital Health), the medical futurist, Bertalan Meskó, raises some very topical questions. As a huge advocate of the benefits of digital health, I am aware of most of these dangers but tend to downplay them, as I generally believe that in this domain the good outweighs the bad. However, as I read his article, I realised that it was written very much from the perspective of a clinician and, to some extent, of a healthcare organisation. The patient perspective was included, but not from a patient safety angle. Many of the issues he raises have significant patient safety implications, which I’d like to explore in this blog.
Where an AI tool quickly adapts to reflect its environment and the context in which it operates, it may “reinforce those harmful biases such as discriminating based on one’s ethnicity and/or gender”. Such biases will further exacerbate existing health inequalities and place certain patients at a disadvantage. It is important that the ground rules for these AI tools include firm parameters that prioritise patient safety – a bit like Asimov’s Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
The idea that hackers might target people’s implantable cardiac devices was popularised in a 2012 episode of the US television drama ‘Homeland’, in which terrorists hacked a fictional vice president’s pacemaker and killed him. It is not just VIPs (or VPs) who need to worry about this. Potentially anyone with an implanted device could have it hacked and be held to ransom. Medical device manufacturers should take far more care over the security that they build into their devices, to protect patients from malicious attacks. Frankly, when large healthcare organisations procure these types of devices, this is one of the key areas on which they should be interrogating their potential suppliers.
This is a difficult one, because if we want digital systems to really understand us and provide advice or treatment personalised to us, then those digital tools must have access to our confidential medical data. However, privacy is still a high priority for most patients, and they (rightly) want to know what is happening to their data: who is using it, how long it is being held, and whether it is being passed on to third parties without their explicit consent. People often forget whom they have given access to their data, and for what purpose, and sometimes stop using a digital tool without realising that all of their data is still being held (and possibly still collected via an active API) by the tool’s supplier. It would be helpful if our mobile phones and PCs could highlight which services still hold our data, for what purpose, and whether they are still collecting it.
Data that is used for purposes other than those intended by the patient is potentially a safety risk to that patient and should be treated as such.
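As a rough illustration of the kind of “highlight” feature wished for above, here is a minimal sketch, under entirely hypothetical app names and data fields, of how a device could flag services a person has stopped using but which still hold (and may still collect) their data:

```python
from datetime import date, timedelta

# Hypothetical examples of data-sharing grants a phone might track.
grants = [
    {"app": "FitTrackr", "purpose": "activity tracking",
     "last_used": date(2020, 3, 1), "api_active": True},
    {"app": "GP Portal", "purpose": "appointment booking",
     "last_used": date(2021, 5, 20), "api_active": True},
]

def stale_grants(grants, today, idle_days=180):
    """Return apps the user has not opened for `idle_days` but whose
    data connection is still active, i.e. data may still be collected."""
    cutoff = today - timedelta(days=idle_days)
    return [g["app"] for g in grants
            if g["last_used"] < cutoff and g["api_active"]]

print(stale_grants(grants, today=date(2021, 6, 1)))  # ['FitTrackr']
```

This is only a sketch of the idea, not a real operating-system feature; the point is that the raw information (grant, purpose, last use, active connection) already exists on our devices and could be surfaced to patients.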
Yes, this is awful for the hospital, and yes, it may cost them money; however, let’s not forget whose data has been stolen: the patients’! Are they sufficiently alerted to the breach, told what is happening, and given ways to mitigate any personal impact? In an ideal world they are, but in reality the hospital is probably in panic mode, and communicating transparently with patients is low down its priority list. As the Medical Futurist says: “The average patient should demand more security over their data” – but how do they do this? What can a single patient do to ensure that the hospitals that have stewardship over their data (not ownership, in my opinion) make it as secure as possible?
This brings me back to an idea that my sadly departed friend, Michael Seres, had many years ago. Each hospital exec team (not the Board) should include a Chief Patient Officer, whose job is to push for patient interests in operational matters (which is why they shouldn’t be a non-exec member of the Board). That is the person who should hold the organisation to account over the security of its patients’ data.
Dr Google has been an issue for some years, and people’s off-the-shelf devices that monitor their vital signs are not necessarily medical grade; nor do their users generally have the skill to interpret their outputs. However, doctors should embrace patients who are keen to manage their own chronic conditions and support them in doing so. This ‘shared accountability’ has to be the model for improved population health, and doctors not willing to work with their patients shouldn’t have any.
A bit exotic, this one, and certainly not a near-term risk compared with the other issues described in the newsletter. However, in a world that is still dealing with a pandemic, and reliant on vaccines to restore some normality to our everyday lives, the security of (for example) the vaccine supply chain is critical.
What if a batch were intentionally sabotaged or its efficacy reduced in some way? Just as medical products (especially implants) should be made as safe and secure as possible, the same is true of the medicines we rely on.
The newsletter focuses on issues around how staff use the AI, but PLEASE… test this with patients first! Safety in use is critical, and only feedback involving patients will help developers optimise these digital tools to be as safe as possible.
This is a very personal issue for me. Why should my doctor have to send me for tests when I can give him/her perfectly reasonable data that I have gathered myself from a device that is CE marked and approved by the FDA/MHRA etc.? Electronic Medical Record vendors are incredibly reluctant to allow anyone other than the authorised doctor to enter anything into a patient’s record, and there are some good reasons for this. However, I’ve long thought that there could be an annexe to the record that is patient-controlled, where the patient can enter a new address, add data from their own blood pressure device, or list the over-the-counter drugs or remedies they are taking. That way, doctors would have an up-to-date, (hopefully) reliable set of data for a more informed discussion with their patient, and it could shorten the time between consultation and referral/treatment.
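To make the annexe idea concrete, here is a minimal, hypothetical sketch – not any real EMR vendor’s API – of a record that keeps clinician-entered data and patient-entered data in separate, separately writable sections:

```python
from dataclasses import dataclass, field

# Hypothetical roles; a real system would use proper authentication.
CLINICIAN = "clinician"
PATIENT = "patient"

@dataclass
class MedicalRecord:
    core: list = field(default_factory=list)    # clinician-entered: diagnoses, results
    annexe: list = field(default_factory=list)  # patient-entered: address, home readings, OTC drugs

    def add_entry(self, author_role: str, entry: dict) -> None:
        """Route each entry to the only section its author may write to."""
        if author_role == CLINICIAN:
            self.core.append(entry)
        elif author_role == PATIENT:
            self.annexe.append(entry)
        else:
            raise ValueError(f"unknown role: {author_role}")

record = MedicalRecord()
record.add_entry(PATIENT, {"type": "blood_pressure", "systolic": 128, "diastolic": 82})
record.add_entry(PATIENT, {"type": "otc_medication", "name": "ibuprofen"})
record.add_entry(CLINICIAN, {"type": "diagnosis", "code": "I10"})
```

The design point is the separation of write permissions: the clinician reads both sections when reviewing the patient, but each party can only write to its own, so patient-supplied data never masquerades as clinically verified data.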
I’m less worried by this in principle; however, I am interested to know how the data generated will be used and how it will be secured. If it is only used by the hospital to optimise patient flow, or to remotely detect symptoms that are then used to help patients either directly or indirectly, then fine. If it is shared with others for more sinister purposes, then I would be concerned.
This is less relevant to the UK, where only 11% of us have private health insurance. Again, this boils down to who collects data on patients, for what purposes, whether explicit consent is gained from the patient to share their data, and how those third parties may use it.
There are both negative and positive connotations to the gathering of a person’s health data by their health insurance company, but given that they already ask for access to all GP and secondary care records, having access to health wearable data (as Vitality Health already does) is not a big step.
I still believe that the benefits of digital health outweigh the risks, but the risks outlined above are not inconsequential. Many of the negative aspects are predicated on poor management and control of patient data. One way this should be mitigated is to have one or more patient representatives at an exec (not non-exec) level to hold healthcare organisations to account over this important aspect of care provision.