To help diagnose and monitor people with sleep disorders, researchers from MIT and Massachusetts General Hospital developed a device that uses an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).
“Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation,” says Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, who led the study. “Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way.”
Using TITAN X GPUs and the cuDNN-accelerated TensorFlow deep learning framework, the researchers trained deep neural networks to extract and analyze information from complex datasets, such as the radio signals captured by their sensor.
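To make the pipeline concrete, here is a minimal sketch of the kind of preprocessing such a system might perform: converting a reflected radio signal into a time-frequency spectrogram, the standard input representation for a deep network classifying sleep stages. This is an illustration only, not the authors' actual method; the sampling rate, window sizes, and the synthetic "breathing" signal below are all assumptions made for the example.

```python
import numpy as np

# Sleep-stage labels mentioned in the article.
STAGES = ["light", "deep", "rem"]

def spectrogram(signal, win=64, hop=32):
    """Short-time FFT magnitude: rows = time windows, cols = frequency bins.

    A deep network (e.g. a CNN) would consume these spectrograms and
    predict one of STAGES per time window.
    """
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    window = np.hanning(win)
    return np.abs(np.fft.rfft(np.array(frames) * window, axis=1))

# Synthetic stand-in for a reflected RF signal (assumption for illustration):
# a slow ~0.3 Hz "breathing" oscillation plus noise, sampled at 10 Hz for 60 s.
# In the real system this input would come from the wireless sensor.
fs = 10.0
t = np.arange(600) / fs
rng = np.random.default_rng(0)
rf = np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)

spec = spectrogram(rf)
print(spec.shape)  # (time windows, frequency bins) fed to the classifier
```

The key design point is that the network never sees raw waveforms directly; a time-frequency representation like this exposes the slow breathing and movement rhythms that distinguish light, deep, and REM sleep.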
“The opportunity is very big because we don’t understand sleep well, and a high fraction of the population has sleep problems,” says Mingmin Zhao, an MIT graduate student and first author of the paper. “We have this technology that, if we can make it work, can move us from a world where we do sleep studies once every few months in the sleep lab to continuous sleep studies in the home.”