This work introduces VISY-REVE, a novel pipeline for validating image processing algorithms for Vision-Based Navigation. Traditional validation methods, such as synthetic rendering or acquisition on robotic testbeds, suffer from difficult setup and slow runtime. Instead, we propose augmenting image datasets in real time with synthesized views at novel poses. This approach creates continuous trajectories from sparse, pre-existing datasets in open- or closed-loop settings. In addition, we introduce a new distance metric between camera poses, the Boresight Deviation Distance, which is better suited to view synthesis than existing metrics. Using it, we develop a method for increasing the density of image datasets.
Our key contributions are a real-time view synthesis pipeline and, alongside it, a novel distance metric between poses that is highly predictive of the quality of synthesized images.
Our pipeline enables closed-loop testing from existing, sparse datasets, avoiding both time-consuming synthetic rendering and the tedious setup of closed-loop operations in a test facility.
It can also be used to densify datasets, that is, to increase their sampling density offline after they have been acquired or generated. This fills "holes" that could not be sampled during the acquisition process due to computing or facility constraints. The synthesized samples can be chosen specifically to fulfill certain quality requirements, as sketched below.
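To make the densification idea concrete, here is a minimal Python sketch. The data layout (`dataset` as a list of `{"pose", "image"}` dicts) and the `dist` and `synthesize` callables are our own assumptions for illustration, not the paper's interface:

```python
def densify(dataset, candidate_poses, dist, synthesize, max_dist):
    """Offline densification sketch (our reading, not the paper's exact
    algorithm): wherever a candidate pose is farther than `max_dist`
    from every existing sample under the pose distance `dist`, synthesize
    a view there and add it, filling the "holes" in the dataset.
    `dataset` holds {"pose", "image"} dicts; `synthesize` is a hypothetical
    hook for the view-synthesis step described below."""
    for pose in candidate_poses:
        gap = min(dist(pose, s["pose"]) for s in dataset)
        if gap > max_dist:
            dataset.append({"pose": pose, "image": synthesize(pose, dataset)})
    return dataset
```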
We use traditional, analytic view synthesis methods to transform source views from an existing dataset into novel views. We propose two methods: one favors fast computation, the other accurate synthesis. A sketch of the fast route follows.
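For intuition, here is a minimal sketch assuming the fast method amounts to the classic plane-induced homography warp (the paper's exact formulation may differ); the accurate method would instead reproject pixels using scene geometry such as a depth map or 3D model:

```python
import numpy as np
import cv2

def warp_view(img_src, K, R_rel, t_rel, n_plane, d_plane):
    """Fast, approximate synthesis via a plane-induced homography.
    Assumes the scene is roughly a plane with unit normal `n_plane` at
    distance `d_plane` in the source camera frame; `R_rel`, `t_rel`
    take source-camera coordinates to target-camera coordinates, and
    `K` is the shared intrinsic matrix."""
    # Classic plane-induced homography: H = K (R - t n^T / d) K^-1
    H = K @ (R_rel - np.outer(t_rel, n_plane) / d_plane) @ np.linalg.inv(K)
    h, w = img_src.shape[:2]
    return cv2.warpPerspective(img_src, H, (w, h))
```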
The novel distance metric we introduce is called the Boresight Deviation Distance. It is designed to consider only the degrees of freedom that most affect the quality of synthesized views. This has two main advantages: 1. it enables choosing optimal nearest-neighbor views for view synthesis; 2. it enables building more liberal performance models.
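To illustrate the first advantage, the sketch below picks the nearest source view for a requested target pose under a pose-distance callable. The `boresight_deviation` shown here is a hypothetical stand-in (the angle between the two cameras' boresight directions), not the metric's actual definition, which the paper gives in full:

```python
import numpy as np

def boresight_deviation(pose_a, pose_b):
    """Hypothetical stand-in for the Boresight Deviation Distance:
    the angle between the two cameras' optical-axis (boresight)
    directions in the world frame. See the paper for the real definition.
    Poses are (R, t) with R mapping world to camera coordinates."""
    (R_a, _), (R_b, _) = pose_a, pose_b
    z = np.array([0.0, 0.0, 1.0])       # boresight in the camera frame
    b_a, b_b = R_a.T @ z, R_b.T @ z     # boresights in the world frame
    return np.arccos(np.clip(b_a @ b_b, -1.0, 1.0))

def nearest_source_view(target_pose, dataset, dist=boresight_deviation):
    """Choose the source view whose pose minimizes `dist` to the target,
    i.e. the best candidate for view synthesis."""
    return min(dataset, key=lambda s: dist(s["pose"], target_pose))
```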
Below, you can see the intuition behind this new metric, together with a 3D plot of its values.
In the future, we would like to expand our view synthesis methods to include more modern approaches such as NeRF or Gaussian Splatting. Both would make it possible to synthesize not only the geometric but also the radiometric (i.e., lighting) changes associated with pose changes. This would be especially useful for enhancing image datasets offline.
Feel free to check out the paper for more information. Thank you for reading!
@misc{neuhalfen2025enablingrobustrealtimeverification,
  title={Enabling Robust, Real-Time Verification of Vision-Based Navigation through View Synthesis},
  author={Marius Neuhalfen and Jonathan Grzymisch and Manuel Sanchez-Gestido},
  year={2025},
  eprint={2507.02993},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.02993},
}