Medical image datasets are an important clinical resource. Effectively referencing a patient's images against similar images and related case histories can inform treatment and improve outcomes. However, labeling disease features and identifying relationships among images in a large image database has resisted automation; it is a task that must be performed by highly trained clinicians who can recognize and label the medically meaningful image features. Given clinicians' limited time relative to the size of these datasets, such analysis, annotation, and cross-referencing is rarely completed for large collections.
Researchers at the National Institutes of Health Clinical Center (NIH-CC) have developed a technology that uses deep learning to automatically detect disease features and annotate clinical findings in chest X-ray images. A training set of image annotations is mined for disease names to train convolutional neural networks (CNNs). Recurrent neural networks (RNNs) are then trained on top of the deep CNN features to describe the context of each detected disease. Feedback from the trained CNN/RNN pair can then be used to infer joint image/text contexts for composite image labeling (location, severity, and affected organs). Automating the search for and extraction of meaningful medical information from images could transform the usefulness of medical image databases, making them searchable and usable for detection and diagnostic applications in cloud-based services.
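The CNN/RNN pairing described above can be sketched in miniature: a convolutional encoder reduces the image to a fixed-size feature vector, and a recurrent decoder emits a sequence of annotation-token scores (e.g. disease, location, severity) conditioned on those features. All shapes, weights, and the toy vocabulary below are illustrative assumptions, not the NIH-CC models.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16   # toy annotation vocabulary size (hypothetical)
HIDDEN = 8   # recurrent state size (hypothetical)

def relu(x):
    return np.maximum(x, 0.0)

def conv_encode(img, kernel):
    """Naive valid 2-D convolution followed by global average pooling,
    yielding one scalar feature per kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return relu(out).mean()

def rnn_decode(features, steps, Wh, Wx, Wo):
    """Simple recurrent decoder: the image features are fed at every
    step, and a score over the toy vocabulary is emitted per step."""
    h = np.zeros(HIDDEN)
    scores = []
    for _ in range(steps):
        h = np.tanh(Wh @ h + Wx @ features)
        scores.append(Wo @ h)          # (VOCAB,) logits for this step
    return np.stack(scores)            # (steps, VOCAB)

# Toy grayscale "chest X-ray" and a bank of random convolution kernels.
xray = rng.standard_normal((32, 32))
kernels = rng.standard_normal((4, 3, 3))
features = np.array([conv_encode(xray, k) for k in kernels])  # (4,)

Wh = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
Wx = rng.standard_normal((HIDDEN, features.size)) * 0.1
Wo = rng.standard_normal((VOCAB, HIDDEN)) * 0.1

logits = rnn_decode(features, steps=3, Wh=Wh, Wx=Wx, Wo=Wo)
print(logits.shape)   # one row of vocabulary scores per decoding step
```

In a real system the encoder would be a deep pretrained CNN and the decoder an LSTM trained on mined report text; the sketch only shows how the image encoding conditions the sequential annotation output.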
- Automated disease detection
- Automatic annotation of context
- Computer-assisted diagnostics
- Medical image dataset mining
- Cloud-based services