We may once have categorized melanoma simply as a type of skin cancer. But that is starting to sound as outdated as calling pneumonia, bronchitis, and hay fever a "cough." Personalized medicine promises to give oncologists a more sophisticated understanding of any given cancer, for example as one of many possible mutations. If properly combined, compared, and analyzed, scanned records could indicate which combination of chemotherapy, radioimmunotherapy, surgery, and radiation therapy has the best results for that particular cancer subtype. This is the aspiration at the heart of "learning health care systems," which are designed to optimize medical interventions by comparing the results of natural variations in treatment.
For those who dream of a "Super Watson" moving from conquering Jeopardy! to running hospitals, each of these advances may seem like a step toward machine-implemented cookbook medicine. And who knows what awaits us in a hundred years? In our lifetimes, what will matter is how all of these data flows are integrated, how much effort is devoted to that goal, how participants are treated, and who has access to the results. These are all hard questions, but no one should doubt that juggling all the data will require skilled and careful human intervention, along with plenty of good legal advice, given the complex rules on privacy protection and human-subjects research.
To dig deeper into radiology: imaging of body tissues is progressing rapidly. We have seen the advance from X-rays and ultrasound toward nuclear imaging and radiomics. Scientists and engineers are developing ever more ways to report on what is going on inside the body. There are already ingestible pill-cams; imagine much smaller, injectable versions of the same. The resulting data streams are far richer than what came before. Incorporating them into a judgment about how to adjust, or entirely change, a treatment regimen will require creative, non-routine thinking. As radiologist James Thrall argued,
The data in our . . . information system databases are "dumb" data. [They are] usually accessed one image or one fact at a time, and it is up to the individual user to integrate the data and extract conceptual or operational value from them. The goal for the next twenty years will be to transform dumb data from large and disparate data sources into knowledge, and to use the ability to rapidly mobilize and analyze data to improve the efficiency of our work processes.
Richer lab results, new and better forms of imaging, genetic analysis, and other sources will need to be incorporated into a cohesive picture of a patient's disease state. In Simon Head's thoughtful distinction, optimizing medical responses to new volumes and varieties of data will be a matter of practice, not a predetermined process. Diagnostic and interventional radiologists will need to take on difficult cases as exercises in judgment, not as simple sorting exercises.
Considering all the data streams now available, one might assume that a rational health policy would deepen and broaden the professional training of radiologists. Yet in the United States, the field seems to be moving toward commoditization instead. Ironically, radiologists themselves bear a great deal of responsibility here; to avoid night shifts, they began contracting with remote "nighthawk" services to review scans. This, in turn, led to "dayhawking" and to pressure on cost-conscious health care systems to find the cheapest radiological expertise available, even though best medical practice recommends closer consultation between radiologists and other members of the health care team. Government reimbursement policies have also done too little to promote advances in radiological AI.
Imaging specialists who encounter these new data streams will have many judgment calls to make. At present, robust private and social insurance covers widespread access to radiologists who can attempt to meet these challenges. But can we imagine a world in which people are lured into cheaper insurance plans offering "last year's drugs at last year's price"? Absolutely. And one can just as easily imagine that this second tier (or third, fourth, or fifth tier) of medical care would be the first to include purely automated diagnoses.
Those in the top tier may be happy to see the resulting drop in health care costs generally; they are often the ones paying the taxes needed to cover the uninsured. But no patient is an island in a learning health care system. Just as ever-cheaper patterns of drug production have left the United States with persistent shortages of sterile injectables, excluding a significant portion of the population from high-tech care will make it harder for those who do have access to such care to learn whether it is worth trying. A learning health system can make extraordinary discoveries, if a comprehensive data set can feed observational research on cutting-edge clinical innovations. The fewer people who have access to such innovations, the less opportunity we have to learn how well they work and how they can be improved.

Tiering may solve the cost crisis of the moment, but it delays future medical advances for all. Thus there is a high road to advances in medical AI, which emphasizes better access for all to ever-improving care, and a low-cost road, which focuses simply on cheaper reproduction of what we already have. Doctors, hospital administrators, and investors will take the high road, the low road, or something in between. Their decisions, in turn, are shaped by a shifting legislative and political landscape.
Consider, for example, the tensions between tradition and innovation in malpractice law. When something goes wrong, doctors are judged by a standard of care that largely refers to what other doctors are doing at the time. The threat of malpractice liability thus frightens some doctors into conformity and traditionalism. On the other hand, the threat of litigation can also accelerate the transition to significantly better practices. No doctor today could responsibly be content with simply palpating a large tumor to determine whether it is malignant or benign. Samples usually need to be taken, pathologists consulted, and expert tissue analysis performed. If AI diagnostic methods become sufficiently advanced, it will be malpractice not to use them as well.
On the other hand, advanced automation may never succeed if third-party payers, whether governments or insurers, refuse to pay for it. Insurers often try to limit the range of care their plans cover. Patient advocacy groups fight for mandated benefits. Budget cutters resist, and when they succeed, health systems have little choice but to forgo expensive new technologies.
Other regulatory regimes matter as well. Medical boards set the minimum acceptable level of practice for physicians. In the United States, the Centers for Medicare and Medicaid Services help set the terms of graduate medical education through funding. Well funded, training programs can build collaborations with bioengineers, computer scientists, and statisticians. Poorly funded, they will continue to produce too many doctors who lack the statistical knowledge needed to do their current jobs well, let alone to critically appraise new AI-based technologies.
The law is not just another set of hurdles to be cleared before engineers can be unleashed to cure the ailments of humankind. A main reason that employment in health care has actually grown as a sector over the past decade is the existence of legal mandates giving large portions of the population guaranteed purchasing power, regardless of their wages or wealth. At best, these legal mandates also steer the development of a health care system toward continuous innovation and improvement.