Privacy-Preserving Visual Localization with Event Cameras
arXiv Preprint 2022

  • ¹Seoul National University
  • ²Snap Inc.
  • *Work done during an internship at Snap Research
  • Co-corresponding authors

Abstract

We consider the problem of client-server localization, where edge-device users send visual data to a service provider to localize themselves against a pre-built 3D map. This paradigm is a crucial component of location-based services in AR/VR and mobile applications, as storing large-scale 3D maps and running fast localization on resource-limited edge devices is not trivial. Nevertheless, conventional client-server localization systems face numerous challenges in computational efficiency, robustness, and privacy preservation during data transmission. Our work aims to jointly address these challenges with a localization pipeline based on event cameras. Event cameras allow our system to consume little energy and require only a small memory bandwidth. During localization, we apply event-to-image conversion and leverage mature image-based localization, which remains robust even in low-light or fast-moving scenes. To further enhance privacy protection, we introduce protection techniques at two levels: network-level protection hides the user's entire view in private scenes through a novel split inference approach, while sensor-level protection hides sensitive user details such as faces with lightweight filtering. Both methods incur little client-side computation and small localization performance loss, while significantly alleviating users' privacy concerns, as revealed in our user study. We thus expect our method to serve as a building block for practical location-based services using event cameras.
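To make the client-side pipeline described above concrete, the sketch below shows (1) a minimal event-to-image conversion via polarity-weighted accumulation and (2) a split-inference client that runs only the first few network layers on-device, so that raw frames never leave the client. This is an illustrative sketch, not the authors' implementation: the names (make_event_frame, SplitClient) are hypothetical, and the accumulation scheme is a simple stand-in for the learned event-to-image reconstruction a real system would use.

```python
# Hypothetical sketch of the client-side steps described in the abstract.
import numpy as np
import torch
import torch.nn as nn

def make_event_frame(x, y, polarity, height, width):
    """Accumulate events into a single-channel frame.

    A simple polarity-weighted histogram; real systems typically use a
    learned event-to-image reconstruction for higher fidelity.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    np.add.at(frame, (y, x), np.where(polarity > 0, 1.0, -1.0))
    # Normalize to [0, 255] so image-based localization can consume it.
    frame -= frame.min()
    peak = frame.max()
    if peak > 0:
        frame *= 255.0 / peak
    return frame.astype(np.uint8)

class SplitClient(nn.Module):
    """Client half of a split-inference pipeline: only the intermediate
    features produced by `head` are transmitted to the server."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, frame_u8):
        x = torch.from_numpy(frame_u8).float()[None, None] / 255.0
        return self.head(x)  # features sent to the server, not pixels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w, n = 180, 240, 5000  # e.g. a DAVIS240-sized sensor
    xs = rng.integers(0, w, n)
    ys = rng.integers(0, h, n)
    ps = rng.integers(0, 2, n)
    frame = make_event_frame(xs, ys, ps, h, w)
    feats = SplitClient()(frame)
    print(frame.shape, feats.shape)
```

In such a setup, the server would run the remaining layers of the localization network on the transmitted features; recovering the user's view from those features is exactly the attack the network-level protection is meant to resist.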

Video Demo

User Study (Full Version)

As mentioned in Section VII-A of the main paper, we conducted a user study to evaluate how the general public feels about our privacy protection algorithms. Here we share only the questions along with a few exemplary images; during the actual study, each participant was shown videos of multiple privacy protection results for every question.

Citation

Acknowledgements

The authors thank Dejia Xu, Fangzhou Mu, Qijia Shao, William Xie, Rui Yu, and the Spectacles team for fruitful discussions. The authors also express their gratitude to the volunteers who participated in the user study and human data capture. The website template was borrowed from Michaël Gharbi.