Hacker News

One potential problem with this is the question of liability: who is responsible for diagnostic accuracy? In this case, for a "Lab on a Chip" device providing a patient directly with diagnostic information without the vetting of a human doctor, liability would sit with the company.

IBM's Watson at the MD Anderson Cancer Center did not work out very well for them. In other words, using AI in the realm of medical diagnostics is very difficult.



What about the treatment side? Once you have a diagnosis, could we use AI to review the patient's medical record, compare outcomes of past patients with the same diagnosis and similar histories, and suggest adjustments of personalized treatments to optimize outcomes?
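As a rough illustration of the "compare outcomes of past patients with similar histories" idea, here is a minimal sketch in Python. It represents each past patient as a small feature vector, finds the current patient's nearest neighbors, and ranks treatments by average outcome among those neighbors. All field names, features, and numbers are invented for illustration; a real system would need normalized features, far richer histories, and clinical validation.

```python
# Hypothetical sketch: nearest-neighbor comparison of past patient outcomes.
# The features (age, BMI, comorbidity count), treatments, and outcome scores
# below are made up purely for illustration.
from collections import defaultdict
import math

past_patients = [
    # (age, bmi, comorbidity_count), treatment, outcome score (higher = better)
    ((54, 31.0, 2), "drug_a", 0.62),
    ((58, 29.5, 3), "drug_a", 0.55),
    ((52, 30.2, 2), "drug_b", 0.78),
    ((60, 33.1, 3), "drug_b", 0.71),
    ((25, 22.0, 0), "drug_a", 0.90),
]

def distance(a, b):
    # Euclidean distance; a real system would normalize each feature first
    return math.dist(a, b)

def suggest(current, k=4):
    # Take the k past patients most similar to the current one
    neighbors = sorted(past_patients, key=lambda p: distance(current, p[0]))[:k]
    outcomes = defaultdict(list)
    for _, treatment, score in neighbors:
        outcomes[treatment].append(score)
    # Rank treatments by mean outcome among those similar patients
    return sorted(((sum(v) / len(v), t) for t, v in outcomes.items()),
                  reverse=True)

print(suggest((55, 30.0, 2)))
# → [(0.745, 'drug_b'), (0.585, 'drug_a')]
```

Among the four patients most similar to the (hypothetical) 55-year-old, drug_b averaged better outcomes, so it ranks first. This is exactly the part that collides with the standard-of-care concern: a ranking like this personalizes, but it also deviates from what every other doctor would prescribe.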

Overall, of course, you're right. Liability is the problem with my suggestion. Doctors prescribe to treat, but they also prescribe to meet the legally mandated standard of care and to minimize second-guessing later. Treating each patient as a unique snowflake, or at least as part of a thinner-sliced group, helps with the first goal but directly undercuts the second. Such an approach would probably need to originate outside the U.S.


Fair points.

Extracting data from the EMR is very difficult because EMR systems were originally intended only as a place to store data; they were not designed to output data back to a user.


I agree that the current excitement over medical AI seems wildly optimistic, but the problems with Watson at MD Anderson were due more to gross mismanagement on the MD Anderson side. https://arstechnica.com/science/2017/02/ibms-watson-proves-u...


This seems like an easy question to answer. The liability lies with the user until they seek the professional advice of a doctor.

Say a user decides to self-diagnose and harms themselves (through inaction, self-medication, or whatever). What difference does it make whether they diagnosed themselves by browsing symptoms on Wikipedia or by using an advanced diagnosis AI on their phone?


It's plausible that providing an advanced diagnosis AI on the phone (unlike purely passive information, e.g. Wikipedia) falls under existing laws that regulate medical services, and would make the provider liable for violating those regulations even if no one has been harmed yet.


Watson at MDACC isn't a good example; the technology wasn't the (main) problem there.



