It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. Our work investigates the use of such descriptors (neural codes) for image retrieval. In experiments on several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task. We also show how this performance can be improved by retraining on a special-purpose collection of landmark photographs, and assess the amount of this improvement.
We further evaluate the performance of the compressed neural codes and show that a simple PCA compression provides short codes that give state-of-the-art accuracy on a number of datasets. Overall, our quantitative experiments demonstrate the promise of neural codes as visual descriptors for image retrieval.
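The pipeline described above — extracting top-layer activations as descriptors, then PCA-compressing them into short codes for similarity search — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random matrix stands in for real CNN activations (e.g. 4096-D fc-layer outputs), and the 128-D target dimensionality is an assumed choice.

```python
import numpy as np

# Hypothetical stand-in for neural codes: 1000 database images,
# each described by a 4096-D top-layer CNN activation vector.
rng = np.random.default_rng(0)
codes = rng.standard_normal((1000, 4096)).astype(np.float32)

# L2-normalize the raw descriptors.
codes /= np.linalg.norm(codes, axis=1, keepdims=True)

# PCA compression via SVD of the centered data: keep the top 128
# principal components to obtain short codes.
mean = codes.mean(axis=0)
centered = codes - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
short_codes = centered @ vt[:128].T

# Re-normalize so that dot products equal cosine similarities.
short_codes /= np.linalg.norm(short_codes, axis=1, keepdims=True)

# Retrieval: rank the database by cosine similarity to a query
# (here, database image 0 queries itself and must rank first).
query = short_codes[0]
ranking = np.argsort(-(short_codes @ query))

print(short_codes.shape)  # (1000, 128)
print(ranking[0])         # 0
```

In practice the PCA rotation would be fit on a held-out training set rather than on the database being searched, and the query descriptor would pass through the same mean-subtraction and projection.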