Detecting and Labeling Diseases in Chest X-Rays with Deep Learning

Researchers from the National Institutes of Health in Bethesda, Maryland are using NVIDIA GPUs and deep learning to automatically annotate diseases from chest x-rays.

Accelerated by Tesla GPUs, the team trained convolutional neural networks on a publicly available radiology dataset of chest x-rays and accompanying reports, teaching the networks to describe the characteristics of a disease, such as its location, severity, and the affected organs.
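To illustrate the idea of turning a classifier's output into a human-readable description, here is a toy sketch. It is not the authors' code: the disease list, the `describe_findings` function, the 0.5 threshold, and the probability values are all illustrative assumptions, standing in for the outputs a trained CNN might produce.

```python
# Toy sketch (not the paper's pipeline): convert per-disease probabilities
# from a hypothetical CNN into short textual annotations that mention
# location and severity, similar in spirit to what the study describes.

DISEASES = ["cardiomegaly", "effusion", "infiltrate"]  # illustrative subset

def describe_findings(probs, locations, severities, threshold=0.5):
    """Emit one phrase per disease whose probability exceeds the threshold."""
    sentences = []
    for disease, p in zip(DISEASES, probs):
        if p >= threshold:
            loc = locations.get(disease, "unspecified location")
            sev = severities.get(disease, "unspecified severity")
            sentences.append(f"{sev} {disease} in the {loc}")
    return sentences

if __name__ == "__main__":
    # Pretend CNN outputs for a single x-ray (made-up numbers).
    probs = [0.91, 0.12, 0.74]
    locations = {"cardiomegaly": "heart", "infiltrate": "left lower lobe"}
    severities = {"cardiomegaly": "mild", "infiltrate": "moderate"}
    for s in describe_findings(probs, locations, severities):
        print(s)
```

In the actual study, the location and severity attributes are mined from the radiology reports rather than supplied by hand; the sketch only shows the final templating step.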

Labeling Chest X-rays with GPUs
Examples of generated annotations (light green box) compared to true annotations (yellow box) for input images in the test set.

The researchers note that, to the best of their knowledge, this is the first study to mine a publicly available radiology image and report dataset not only to classify and detect diseases in images, but also to describe their context in a way similar to how a human observer would read them.

Read the research paper >>