Visual place recognition using landmark distribution descriptors

Research output: Working paper (Working Paper and Preprints)

Original language: English
Publisher or commissioning body: arXiv.org
Number of pages: 14
Volume: 1608.04274
State: Published - 15 Aug 2016

Abstract

Recent work by Suenderhauf et al. [1] demonstrated improved visual place recognition by coupling proposal regions with convolutional neural network (CNN) features to match landmarks between views. In this work we extend that approach by introducing descriptors built from landmark features which also encode the spatial distribution of the landmarks within a view. Matching descriptors then enforces consistency of the relative positions of landmarks between views, which has a significant impact on performance. For example, in experiments on 10 image-pair datasets, each consisting of 200 urban locations with significant differences in viewing position and conditions, we recorded an average precision of around 70% (at 100% recall), compared with 58% obtained using whole-image CNN features and 50% for the method in [1].
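The core idea of the abstract, attaching spatial layout information to per-landmark appearance features so that matching respects relative landmark positions, can be sketched roughly as follows. This is an illustrative sketch only, not the paper's actual descriptor construction: the function names, the simple concatenation of normalized positions onto L2-normalized features, and the greedy matching score are all assumptions made for the example.

```python
import numpy as np

def landmark_distribution_descriptor(features, centers, image_size):
    """Append normalized landmark positions to CNN features so that
    descriptor matching also reflects the spatial layout of landmarks
    within the view. (Hypothetical construction for illustration.)"""
    w, h = image_size
    positions = centers / np.array([w, h], dtype=float)  # scale to [0, 1]
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    return np.hstack([feats, positions])

def match_score(desc_a, desc_b):
    """Greedy one-way match: pair each landmark in view A with its most
    similar landmark in view B; the score is the mean similarity."""
    sims = desc_a @ desc_b.T
    return float(sims.max(axis=1).mean())

# Toy example: two views of the same place, 3 landmarks each,
# 4-D stand-in "CNN" features with slightly perturbed appearance.
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(3, 4))
feats_b = feats_a + 0.05 * rng.normal(size=(3, 4))
centers = rng.uniform(0, 480, size=(3, 2))  # landmark box centers (px)

da = landmark_distribution_descriptor(feats_a, centers, (640, 480))
db = landmark_distribution_descriptor(feats_b, centers, (640, 480))
print(match_score(da, db))
```

Because the position components enter the similarity directly, two views whose landmarks share appearance but sit in different relative positions score lower than views where both appearance and layout agree.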

Additional information

13 pages

    Research areas

  • place recognition

Documents

  • Full-text PDF (accepted author manuscript)

    Rights statement: This is the accepted author manuscript (AAM). The final published version (version of record) is available online via arXiv.org at http://arxiv.org/pdf/1608.04274.pdf. Please refer to any applicable terms of use of the publisher.

    Accepted author manuscript, 3 MB, PDF-document
