Leading stock photo company Shutterstock unveiled a new deep learning-based tool that lets users search photos by their composition.
“Built on our next generation visual similarity model, this tool helps you find the exact image you need by placing keywords on a canvas and moving them around where you want subject matter to appear in the image,” wrote Kevin Lester, VP of Engineering at Shutterstock, in a blog post announcing the feature. “The patent-pending spatially aware technology will find strong matches based not only on your search terms, but also on the placement of your search terms.”
Using TITAN X GPUs and the cuDNN-accelerated Torch deep learning framework, the researchers trained their visual model on Shutterstock's internal image dataset, along with a language model that matches a textual query to the embedding of a corresponding image. Once trained, the models run on Tesla GPUs in the Amazon cloud to give users control over the composition of the images they search for: a user can enter terms like “wine” and “cheese” and drag them around the canvas so the results show wine on the left and cheese on the right.
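The core idea of matching spatially placed keywords against localized image features can be sketched in a few lines. The grid-based scoring below is purely illustrative (the function names, the toy 2-D embeddings, and the grid layout are all assumptions for the sake of the example, not Shutterstock's actual model):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def spatial_score(feature_grid, placed_queries):
    """Score one image against a spatially placed query.

    feature_grid: dict mapping (row, col) grid cells to embedding vectors,
                  a stand-in for spatially localized visual features.
    placed_queries: list of (term_embedding, (row, col)) pairs, i.e. the
                    canvas position where the user dropped each keyword.
    """
    return sum(cosine(emb, feature_grid[cell]) for emb, cell in placed_queries)

# Toy 2-D "embeddings": wine ~ [1, 0], cheese ~ [0, 1].
wine, cheese = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Image A has wine-like features on the left and cheese-like on the right;
# image B has the reverse layout.
image_a = {(0, 0): np.array([0.9, 0.1]), (0, 1): np.array([0.1, 0.9])}
image_b = {(0, 0): np.array([0.1, 0.9]), (0, 1): np.array([0.9, 0.1])}

# The user drags "wine" to the left cell and "cheese" to the right cell.
query = [(wine, (0, 0)), (cheese, (0, 1))]

score_a = spatial_score(image_a, query)
score_b = spatial_score(image_b, query)
print(score_a > score_b)  # image A matches the requested composition better
```

Ranking every image in the library by this kind of score would surface the photos whose subject placement best matches the canvas, which is the behavior the blog post describes.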
The Composition Aware Search tool is currently in beta and available for anyone to try.