Artificial intelligence? Honestly any kind might be an improvement.

AI, like social media and every other invention before it, will be another tool in the arsenal of health care administrators and providers that can be deployed for good or ill. In the end it will depend on who is using the tool, what they are motivated to achieve, and what their relative power is in the system. Change is coming and the data holders will want to manage that change.

Put simply, AI neural networks can quickly deal with large data sets, detect patterns and make decisions. Instead of being programmed on an algorithmic basis, they mimic the learning of a brain and, just like humans, often cannot easily reveal how or why they make a particular decision. Just like a chef, musician or anaesthetist, the black box just “knows” what will work, from experience.
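
For the technically curious, here is a toy sketch of that difference (Python, entirely made-up data, not any real clinical model): the network is shown examples and labels, never the rule itself, and learns to reproduce the pattern without being able to explain it.

```python
# Toy sketch only: the network is never given the rule, just examples.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Made-up "test results": two measurements per patient.
X = rng.normal(size=(500, 2))
# The hidden rule the network is never told about.
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# It reproduces the pattern, but its learned weights do not read like an
# algorithm a human could follow: the "black box".
print("training accuracy:", model.score(X, y))
```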

Being able to collect and deal with large data sets means that doctors who work in diagnostics and investigations can use these tools to improve efficiency and automate tasks such as reading scans or pathology slides. Of course doctors and researchers can only teach the AI what we do today, but as ideas change and evolve, diagnostic decisions need to be modernised constantly. The AI can teach itself, but closing the loop to decide whether it is correct will require human guidance, as it can be spectacularly wrong wrong wrong. Our brains are still so much better than AI that you know why I used the word three times, while my computer thinks it is a grammatical error.
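
As a purely hypothetical illustration of what “closing the loop” might look like, a simple triage rule could route low-confidence or discordant AI reports back to a human. The function name and thresholds below are invented for the example, not taken from any real system.

```python
# Hypothetical triage rule: names and thresholds are invented for illustration.
def triage_prediction(model_label, model_confidence,
                      clinician_label=None, confidence_threshold=0.9):
    """Decide whether an AI report on a scan or slide can be accepted."""
    if model_confidence < confidence_threshold:
        return "human review"   # the model is unsure: never auto-report
    if clinician_label is not None and clinician_label != model_label:
        return "human review"   # disagreement with the clinician: flag for audit
    return "accept"

# A confident prediction that contradicts the radiologist still gets reviewed.
print(triage_prediction("normal", 0.97, clinician_label="abnormal"))
```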

One of the reasons that pathology and radiology have come first is that the data is easy to obtain. It is easier to analyse blood chemistry than patient answers to directed questioning. However, the Electronic Health Record is now being data mined in many places, so, for example, there is AI that can look at historical records and predict which mental health patients were more prone to suicide attempts in the year after admission to hospital. There are moves to record all consultations – more data which in the past was too massive to be utilised but now can be analysed efficiently. For example, an AI could decide whether consent conversations were comprehensive in clinics and telehealth, and even match that to whether there were signs of engagement in facial movements.
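
Schematically, that sort of EHR data mining amounts to training a classifier on historical records. The sketch below uses invented features and an invented outcome label (a generic “adverse outcome within a year of discharge”), and is nothing like a validated clinical tool.

```python
# Schematic only: invented features, invented outcome, no clinical meaning.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000

# Made-up fields pulled from historical records.
prior_admissions = rng.integers(0, 5, n)
age = rng.integers(18, 90, n)
follow_up_arranged = rng.integers(0, 2, n)
X = np.column_stack([prior_admissions, age, follow_up_arranged])

# Made-up label: an adverse outcome within a year of discharge.
risk = 0.10 + 0.05 * prior_admissions - 0.04 * follow_up_arranged
y = (rng.random(n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model ranks past cases by risk; humans decide what is done with that ranking.
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```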

Futurist predictions often explain amazing new technology like this but then take a leap off the diving board of speculation into “what this could mean”. Because that idea is associated with something shiny like AI, we tend to accept it when we should be sceptical. Blade Runner famously depicted the dystopian future of the year 2019. There were hover cars – still waiting – but people had to pull over in the rain to use a phone box – no smartphones.

What is stunningly predictable, though, is human behaviour in organisations, and what should give us pause are the industrial and workplace implications of having an administration that can not only collect but also analyse large data on every aspect of its employees and patients.

Every moment of how we spend our time, every conversation, everyone who has interacted with a patient, what they said and what that patient’s outcomes were could all be readily accessible for reporting. Patterns will be detectable, but the people running the system may not be interested so much in clinical care as in funding and control. Where we already have poor culture in hospitals, AI will not improve it; it will be used by humans to continue to behave badly on a larger scale. That is the course of human history. Where insurers are interested in improving margins through managed care, they will use AI to suit that agenda.

Doctors need to be ready not just for the clinical workforce changes that AI might usher in, but for the cultural problems of bullying and political manipulation being amplified many-fold.

Industrial organisation is the only effective pushback against such abuse by companies and governments with big data, and the only ones who will be standing up for the scientific method and for patients will be their physicians.

Footnote: a BMJ article in 2018 is an excellent primer on the areas of medicine that AI is already changing and where it already routinely exceeds human performance.

And an example of AI being wrong wrong wrong.
