Spurred by the need for neural networks capable of tackling vast wells of high-res satellite data, a team from the NASA Advanced Supercomputing Division at NASA Ames and Louisiana State University has sought a new blend of deep learning techniques that can build on existing neural nets to create something robust enough for satellite datasets.
Deep belief networks are an offshoot of the larger tree of neural networks: not the same thing, but from the same family. They can be trained in an unsupervised manner and, using the resulting model, can reconstruct a similar pattern on a new dataset or be used as a classifier in supervised learning. While this definition only scratches the surface of how these work, the relevant bit is that the networks are trained one layer at a time and can be used across very large and noisy datasets. For example, even a task as seemingly simple as handwriting recognition demands a very large amount of deep learning, something that deep belief networks are well-equipped to handle.
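The layer-at-a-time training the paragraph describes is usually done by stacking restricted Boltzmann machines and pretraining them greedily, each layer learning from the hidden activations of the one below it. The sketch below is a minimal pure-Python illustration of that idea, assuming one-step contrastive divergence (CD-1) and toy binary data; the layer sizes, learning rate, and epoch count are illustrative assumptions, not the team's configuration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Minimal restricted Boltzmann machine with binary units,
    trained by one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.n_visible, self.n_hidden, self.lr = n_visible, n_hidden, lr
        self.w = [[random.gauss(0, 0.1) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.vb = [0.0] * n_visible   # visible-unit biases
        self.hb = [0.0] * n_hidden    # hidden-unit biases

    def hidden_probs(self, v):
        return [sigmoid(self.hb[j] + sum(v[i] * self.w[i][j]
                for i in range(self.n_visible)))
                for j in range(self.n_hidden)]

    def visible_probs(self, h):
        return [sigmoid(self.vb[i] + sum(h[j] * self.w[i][j]
                for j in range(self.n_hidden)))
                for i in range(self.n_visible)]

    def cd1_update(self, v0):
        # Positive phase, one Gibbs step, then the weight/bias updates.
        h0 = self.hidden_probs(v0)
        h0_sample = [1.0 if random.random() < p else 0.0 for p in h0]
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        for i in range(self.n_visible):
            for j in range(self.n_hidden):
                self.w[i][j] += self.lr * (v0[i] * h0[j] - v1[i] * h1[j])
        for i in range(self.n_visible):
            self.vb[i] += self.lr * (v0[i] - v1[i])
        for j in range(self.n_hidden):
            self.hb[j] += self.lr * (h0[j] - h1[j])

def pretrain_dbn(data, layer_sizes, epochs=20):
    """Greedy layer-wise pretraining: each RBM trains on the hidden
    activations produced by the layer below it."""
    rbms, inputs = [], data
    for n_vis, n_hid in zip(layer_sizes, layer_sizes[1:]):
        rbm = RBM(n_vis, n_hid)
        for _ in range(epochs):
            for v in inputs:
                rbm.cd1_update(v)
        inputs = [rbm.hidden_probs(v) for v in inputs]
        rbms.append(rbm)
    return rbms

# Toy 4-dimensional binary patterns standing in for image patches.
data = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
dbn = pretrain_dbn(data, [4, 3, 2])
print(len(dbn))  # prints 2: two stacked RBMs
```

After pretraining, the stack can either reconstruct inputs layer by layer or be topped with a classifier and fine-tuned with labels, which is the supervised half the article mentions.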
There is quite a bit for the system to train and learn from in the main dataset the training sets were pulled from: a massive survey of 330,000 scenes across the U.S. The image tiles average around 6,000 pixels wide and 7,000 pixels high, which means each weighs in at around 200 MB. This adds up quickly: the entire dataset for this sample was close to 65 terabytes at a ground sample distance of one meter. These tiles were condensed down to 500,000 image patches spanning a range of landscape features for one of the two deep learning sets, of which one-fifth was used for the training dataset. The training dataset was then put through both supervised and unsupervised training before working against the newly created NASA datasets on the NASA Ames Pleiades supercomputer GPU cluster, which is equipped with Tesla GPUs totaling 217,088 CUDA cores.
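The arithmetic above checks out, and the patch-extraction step can be sketched as simple grid tiling. The patch size and stride below are illustrative assumptions (the article does not report them), but the storage estimate uses only the figures given: 330,000 tiles at roughly 200 MB each.

```python
def patch_coords(width, height, patch=64, stride=64):
    """Yield top-left (x, y) corners of fixed-size patches covering an
    image tile; partial patches at the right/bottom edges are dropped."""
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            yield (x, y)

# Storage check: 330,000 scenes at ~200 MB each, in decimal terabytes.
total_tb = 330_000 * 200 / 1_000_000
print(f"~{total_tb:.0f} TB")  # prints ~66 TB, matching the ~65 TB figure

# Patches per 6000x7000-pixel tile with an assumed 64-pixel patch/stride.
n_patches = sum(1 for _ in patch_coords(6000, 7000))
print(n_patches)
```

With non-overlapping 64-pixel patches, a handful of tiles already yields tens of thousands of patches per tile, so reaching the 500,000-patch corpus is a matter of sampling across the survey rather than exhausting it.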