How intelligent machines could do a better job of diagnosing patients than humans
Until now, medicine has been a prestigious and often extremely lucrative career choice. But in the near future, will we need as many doctors as we have now? Are we going to see significant medical unemployment in the coming decade?
Dr Saxon Smith, president of the Australian Medical Association NSW branch, said in a report late last year that the most common concerns he hears from doctors-in-training and medical students are: “what is the future of medicine?” and “will I have a job?” The answers, he said, continue to elude him.
As Australian, British and American universities continue to graduate increasing numbers of medical students, the obvious question is: where will these new doctors work in the future?
Will there be an expanded role for medical professionals due to our ageing populations? Or is pressure to reduce costs while improving outcomes likely to force the adoption of new technology, which will then likely erode the number of roles currently performed by doctors?
Driving down the costs
Governments, patients and doctors around the world all know that healthcare costs will need to fall if we are to treat more people. Some propose making patients pay more but, however the bills are paid, it is clear that the cost of care has to come down.
The use of medical robots to assist human surgeons is becoming more widespread but, so far, they are being used to try to improve patient outcomes rather than to reduce the cost of surgery. Cost savings may come later, when the robotic technology matures.
It is in medical diagnostics that many people see the potential for significant cost reduction, and improved accuracy, through the use of technology instead of human doctors.
It is already common for blood tests and genetic testing (genomics) to be carried out automatically and very cost-effectively by machines, which analyse the specimen and automatically produce a report.
The tests range from something as simple as a haemoglobin level (blood count), through tests used in diabetes such as insulin and glucose levels, to far more complicated analyses of a person’s genetic makeup.
A good example is Thyrocare Technologies Ltd in Mumbai, India, where more than 100,000 diagnostic tests from around the country are done every evening, and the reports delivered within 24 hours of blood being taken from a patient.
Machines vs humans
If machines can read blood tests, what else can they do? Though many doctors will not like this thought, any test that requires pattern recognition will ultimately be done better by a machine than a human.
Many diseases need a pathological diagnosis, where a doctor looks at a sample of blood or tissue to establish the exact disease: a blood test to diagnose an infection, a skin biopsy to determine whether a lesion is cancerous, or a tissue sample taken by a surgeon seeking a diagnosis.
All of these examples, and indeed all pathological diagnoses, rest on a doctor using pattern recognition to reach the diagnosis.
Artificial intelligence techniques, in particular deep neural networks (a type of machine learning), can be used to train these diagnostic machines. Machines learn fast, and we are not talking about a single machine but a network of machines linked globally via the internet, using their pooled data to keep improving.
It will not happen overnight, and it will take some time for the machines to learn, but once trained they will only continue to get better. In time, an appropriately trained machine will be better at pattern recognition than any human could ever be.
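To make the idea concrete, the sketch below shows roughly what “training a pattern recogniser” looks like in code. It is a minimal illustration only: it uses the PyTorch library, randomly generated stand-in data rather than real pathology images, and a toy network whose size, labels and training settings are assumptions made for the example, not a description of any system mentioned in this article.

```python
import torch
import torch.nn as nn

# Stand-in data: 64 fake image patches (3 channels, 64x64 pixels), each with a
# label of 0 ("benign") or 1 ("malignant"). A real system would train on large
# numbers of expert-labelled slides or scans.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 2, (64,))

# A small convolutional neural network: the "pattern recogniser".
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two output classes
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass over the labelled examples nudges the network's weights so that
# its predictions agree more closely with the expert labels.
for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The point is not this particular network but where the “experience” ends up: in the model’s weights, which can be copied, pooled across institutions and refined indefinitely, unlike the experience of an individual doctor.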
Pathology is now a matter of multimillion-dollar laboratories relying on economies of scale. It takes around 15 years from leaving high school for a pathologist to be trained to practise independently, and probably another 15 years for that pathologist to be as good as they will ever be.
Some years after that, they will retire and all that knowledge and experience is lost. Surely, it would be better if that knowledge could be captured and used by future generations? A robotic pathologist would be able to do just that.
Radiology, X-rays and beyond
Radiological tests account for more than A$2 billion of the annual Medicare spend. A 2013 report estimated that 33.6 million radiological investigations would be performed in Australia in 2014-15. A radiologist has to study every one of these and write a report.
Radiologists are already reading, on average, more than seven times as many studies per day as they did five years ago. These reports, like those written by pathologists, are based on pattern recognition.
Currently, many radiological tests performed in Australia are being read by radiologists in other countries, such as the UK. Rather than having an expert in Australia get out of bed at 3am to read a brain scan of an injured patient, the image can be digitally sent to a doctor in any appropriate time zone and be reported on almost instantly.
What if machines were taught to read X-rays, working at first with, and ultimately instead of, human radiologists? Would we still need human radiologists? Probably. Improved imaging, such as MRI and CT scanning, will allow radiologists to perform some procedures that surgeons now undertake.
The field of interventional radiology is rapidly expanding. Here, radiologists diagnose and treat conditions such as bleeding blood vessels using minimally invasive techniques, passing wires through larger vessels to reach the point of bleeding.
So radiologists may end up doing procedures that are currently performed by vascular and cardiac surgeons, and the increased use of robot-assisted surgery makes this more likely than not.
Diagnosing skin conditions
There is a lot more to diagnosing a skin lesion, rash or growth than simply looking at it. But much of the diagnosis is based on the dermatologist recognising the lesion (again, pattern recognition).
If the diagnosis remains unclear then some tissue (a biopsy) is sent to the laboratory for a pathological diagnosis. We have already established that a machine can read the latter. The same principle applies to the recognition of the skin lesion.
Once a lesion has been seen and learnt, the machine will be able to recognise it again. Mobile phones with high-quality cameras will be able to link to a global database that, like any other learning system, will continue to improve.
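As an illustration of the phone-to-database idea, the following sketch shows how a single photo might be checked against an already-trained classifier. The file names, the label set and the model format are hypothetical, chosen only for the example; it assumes the PyTorch, torchvision and Pillow libraries and a model saved from a training run like the one sketched earlier.

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical inputs: a previously trained lesion classifier (saved with
# torch.save(model, ...)) and a photo taken on a phone camera.
model = torch.load("lesion_classifier.pt", weights_only=False)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((64, 64)),   # match the size the model was trained on
    transforms.ToTensor(),
])

photo = Image.open("lesion_photo.jpg").convert("RGB")
batch = preprocess(photo).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probabilities = torch.softmax(model(batch), dim=1)[0]

classes = ["benign", "malignant"]  # illustrative label set only
for name, p in zip(classes, probabilities):
    print(f"{name}: {p.item():.3f}")
```

In a deployed service the model would sit behind a web API rather than on the phone itself, so that every uploaded image and its eventual expert-confirmed diagnosis could feed back into further training.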
It’s not if, but when
These changes will not happen overnight, but they are inevitable. Though many doctors will see these changes as a threat, the chance for global good is unprecedented.
An X-ray taken in equatorial Africa could be read with the same reliability as one taken in an Australian centre of excellence. An image of an infectious rash could be uploaded from a phone and a diagnosis returned instantly. Many lives would be saved, and healthcare could be delivered to the world’s poor at minimal cost and, in many cases, free of charge.
For this to become a reality, it will take experts to work with machines and help them learn. Initially, the machines may be asked to do more straightforward tests but gradually they will be taught, just as humans learn most things in life.
The medical profession should grasp these opportunities for change, and our future young doctors should think carefully about where the medical jobs of the future will lie. It is almost certain that the medical employment landscape in 15 years’ time will not look like the one we see today.
Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology; Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology, and Jonathan Roberts, Professor in Robotics, Queensland University of Technology
This article was originally published on The Conversation. Read the original article.