Medical Machines and AIs to Help Doctors
When human experts are baffled, they turn to AI for help.
Months of ineffective cancer treatment can lead doctors to question their diagnosis and seek deeper analysis from an AI, like IBM's Watson, to shed light on what they are missing. In the case of a 60-year-old woman in Tokyo, Watson analyzed her data and determined that she had a rare secondary form of leukemia. Her treatment was changed, and she was eventually out of the hospital.
An AI can crunch data like no human, spotting in minutes what a person might take weeks to find or miss entirely. In some cases, AI actually saves lives just by being an AI and doing what it does. An AI can help diagnose, suggest treatments, and even predict changes in a patient's health based on past data.
AIs offer speed, yes, but also precision. Medical error is the 3rd leading cause of death in the U.S., and a large portion of it stems from misdiagnosis. The amount of health knowledge to go through is enormous, and it keeps growing. Our collective knowledge has exceeded any single doctor's capacity to absorb it all. Where doctors once advised in training IBM's Watson AI, the roles are now reversed.
Doctors now imagine a future where an AI can accurately analyze health records and calculate the risk of a person developing a condition such as MS, whether it's 0.5% or 5%. The machine makes a recommendation, then the human gets involved. It's a complex task, as there are many layers to the human health spectrum, but by building each diagnostic as a block on its own, one question at a time, eventually each layer of human diagnostics can be done accurately and quickly.
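As a rough idea of how such a percentage might come out of a machine, here is a minimal logistic-regression-style sketch. The features, weights, and risk formula are all made up for illustration; a real system would learn its weights from millions of health records.

```python
import math

def risk_of_condition(features, weights, bias):
    """Logistic-regression-style risk score: maps a weighted sum
    of patient features to a probability between 0 and 1."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient features: [age / 100, family-history flag, symptom count / 10]
patient = [0.45, 1.0, 0.2]
# Made-up weights standing in for ones learned from health records
weights = [1.2, 0.8, 1.5]
bias = -4.0

risk = risk_of_condition(patient, weights, bias)
print(f"Estimated risk: {risk:.1%}")
```

The point is only that the output is a graded probability, not a yes/no verdict, which is what lets the human doctor step in afterward and weigh the recommendation.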
AIs are now being trained to learn from image slides of cancerous lung tissue. The computer can distinguish between cell sizes, shapes and textures, and can also tell from the samples who lived only a few months and who went on to live for many years.
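A toy sketch of that kind of classification, assuming the slides have already been reduced to a few numeric features per sample (cell size, shape irregularity, texture). This is a hypothetical nearest-neighbour illustration, not the actual published method:

```python
def classify_sample(sample, labeled_samples):
    """Predict a survival group for a tissue sample by finding the
    most similar labeled sample (1-nearest-neighbour on features)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(labeled_samples, key=lambda s: distance(sample, s[0]))
    return nearest[1]

# Hypothetical feature vectors: (mean cell size, shape irregularity, texture score)
training = [
    ((12.0, 0.8, 0.9), "short survival"),
    ((11.5, 0.7, 0.8), "short survival"),
    ((7.0, 0.2, 0.3), "long survival"),
    ((6.5, 0.3, 0.2), "long survival"),
]

print(classify_sample((11.0, 0.6, 0.7), training))
```

Real systems use deep networks over raw pixels rather than hand-picked features, but the principle is the same: measurable patterns in the tissue correlate with outcomes.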
Another AI development team, in Los Angeles, has new algorithms to detect seizures, predict the progression of kidney or heart disease, and pick out pregnancy anomalies by listening to infant heartbeats.
Microsoft is trying out an algorithm that can predict if you have pancreatic cancer based on the web searches you perform. Google's DeepMind has used massive anonymized datasets to spot eye disease early on.
Before any of these changes to the medical establishment get adopted, there need to be more studies and reassurances that they can be relied upon for predictions and diagnostics that improve overall health outcomes. Some fear the AI doctor may backfire, with automation bias leading doctors to overdiagnose and overtest patients.
Then there is the question of how to integrate AIs into existing medical practice so that everything works as smoothly as before, if not better. Some envision the AI being directly linked to all medical data, so that insights into any patient can be obtained at any time.
Google Glass already has an app called Isabel to help with diagnosis. But the process is cumbersome, as doctors need to input the data manually to use it. AI diagnostics will only be viable in the medical industry once using them no longer adds time pressure to a doctor's workflow.
Another aspect of adoption is ego. Doctors may not want to admit they can be wrong. If you don't think you can ever be wrong, you won't want to seek the help of some machine, since letting a machine trump your medical training can feel beneath you and embarrassing.
In the future, it's possible that doctors will be like a captain on a ship: directing operations, handling the most important tasks themselves, and delegating routine daily tasks to AI assistants. At first, though, they will need to cede some control of their environment to let the AI in and learn.
Doctors won't be replaced by AIs. With AIs assisting, humans can do more of what we do best (advanced surgery, patient care, etc.), while the AI does what it does best: crunching numbers and compiling data to arrive at complex decisions.
If you appreciate and value the content, please consider:
Upvoting, Sharing and Reblogging below.
Author: Kris Nelson / @krnel
Contact: [email protected]
Date: 2016-11-13, 7:07am EST
Take it easy.
One of the main problems with big data and AI is that we often do not know about the quality of the sources. In other cases, we need to trust the results without knowing how they were reached.
So, Watson helped in one documented case. Great. I would like to see the data to check whether this is more than good PR for IBM.
Watson may be a big issue in the US for another reason: the US health system is one of the most inefficient systems I know of, and I have studied it a bit. However, the combination of cartels and massive red tape may foster the use of AI.
We should, however, keep in mind that IT could long since have taken over simple jobs like cleaning. Simple stuff. Like toilets. And it did not. Having the algorithms is one thing; getting the hardware and interfaces right is a completely different one.
We will need AI in tomorrow's hospitals. Definitely. But it may - at least during an initial phase - focus on extremely complex cases and orphan diseases (very rare diseases few doctors actually know in detail). And - sorry for stepping on some medics' toes - the rest is like toilet cleaning: it's simply faster and more cost-efficient to apply manual labour.
In both cases, the degree of standardization is limited.
Plus: have the first patient die from an AI-based diagnosis and people may want to reconsider trusting AI completely.
AI support for medical decisions will gain some popularity, but since it is likely to add costs rather than make anything cheaper in most cases, AI may not go skyrocketing in hospitals straight away.
"Microsoft is trying out an algorithm that can predict if you have pancreatic cancer based on the web searches you perform."
Even MS and Google cannot possibly know whether you may be suffering from pancreatic cancer yourself. They can merely relate to specific search profiles, along with FB data and a couple of other things - searches which may not necessarily have been made by the patient.
That would be like assuming that people who run or click around on terrorist websites are terrorists. A lot of them may be. Others may be journalists. Or they are called John Smith and need to turn up the screen brightness to the max because they will not take off their dark sunglasses while posting.
I agree, it's not accurate. Someone can search for the symptoms of a disease without actually having it. It's an indicator at most. Good luck to them selling it lol.