
«System analysis and applied information science»


Terrain relative navigation based on deep feature template matching and visual odometry

https://doi.org/10.21122/2309-4923-2025-1-4-19

Abstract

The main hurdle for terrain relative navigation systems is the mismatch of visual features between a patch of a satellite reference map and the view from an onboard UAV camera: the images are taken at different times of year, under different weather, vegetation and lighting conditions, and from different observation angles. This work proposes deep feature template matching, in which features are learned through unsupervised training with a triplet loss, providing semantic understanding that is agnostic to such terrain transformations. To overcome difficulties with navigating over featureless terrain, the work additionally employs visual odometry together with a procedure for maintaining hypotheses over possible locations and for re-anchoring ("sticking") to the reference map once enough features are encountered. Passing a fragment of the reference map through the trained feature extractor, applying an entropy filter and then a path-finding algorithm makes it possible to plan a flight path over areas rich in features relevant for navigation.
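
As an illustrative sketch of the approach outlined above, the PyTorch snippet below shows how a convolutional feature extractor could be trained with a triplet loss on co-registered terrain patches (the same location under different seasons or lighting as anchor/positive, a different location as negative) and then used for deep-feature template matching by correlating onboard-camera features with reference-map features. The architecture, patch sizes and all names (FeatureExtractor, triplet_step, match_template) are assumptions for illustration, not the configuration used in the paper.

```python
# Illustrative sketch (PyTorch): a feature extractor trained with a triplet
# loss and used for deep-feature template matching. Sizes and names are
# assumptions for illustration, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Small fully convolutional encoder producing dense deep features."""
    def __init__(self, out_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_dim, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=1)  # unit-norm feature at every location

def triplet_step(model, anchor, positive, negative, margin: float = 0.2):
    """Anchor/positive: the same terrain under different season or lighting;
    negative: a different location. Dense features are pooled to descriptors."""
    fa = model(anchor).mean(dim=(2, 3))
    fp = model(positive).mean(dim=(2, 3))
    fn = model(negative).mean(dim=(2, 3))
    return F.triplet_margin_loss(fa, fp, fn, margin=margin)

def match_template(model, camera_patch, reference_map):
    """Correlate the camera-patch feature map with the reference-map feature
    map; the argmax of the similarity map is the estimated location."""
    with torch.no_grad():
        f_map = model(reference_map)     # (1, C, H, W)
        f_tmpl = model(camera_patch)     # (1, C, h, w), used as a convolution kernel
        score = F.conv2d(f_map, f_tmpl)  # (1, 1, H - h + 1, W - w + 1)
    idx = torch.argmax(score).item()
    y, x = divmod(idx, score.shape[-1])
    return y, x, score

if __name__ == "__main__":
    model = FeatureExtractor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Random tensors stand in for co-registered seasonal image patches.
    anchor, positive, negative = (torch.rand(4, 3, 64, 64) for _ in range(3))
    opt.zero_grad()
    loss = triplet_step(model, anchor, positive, negative)
    loss.backward()
    opt.step()
    y, x, _ = match_template(model, torch.rand(1, 3, 64, 64), torch.rand(1, 3, 256, 256))
    print(f"triplet loss {loss.item():.3f}, best match at ({y}, {x})")
```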

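The map-preparation step mentioned at the end of the abstract can be sketched in a similarly hedged way: score a reference-map fragment by the local entropy of its feature response (here with scikit-image's rank entropy filter) and run a plain grid A* so that the planned route prefers feature-rich cells. The richness-to-cost weighting, grid size and helper names below are assumptions for illustration only.

```python
# Illustrative sketch: local-entropy scoring of a reference-map fragment and
# A* path planning over a grid whose cost is low where features are rich.
# The weighting, grid size, and helper names are illustrative assumptions.
import heapq
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def feature_richness(feature_response: np.ndarray, radius: int = 5) -> np.ndarray:
    """Local entropy of a single-channel feature response, scaled to [0, 1]."""
    ent = entropy(img_as_ubyte(feature_response), disk(radius))
    return ent / ent.max()

def astar(cost: np.ndarray, start, goal):
    """A* over a 4-connected grid; step cost is the cost of the entered cell."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible since cost >= 1
    counter = 0                                              # tie-breaker for the heap
    open_set = [(h(start), 0.0, counter, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cur[0] + dy, cur[1] + dx)
            if 0 <= nb[0] < cost.shape[0] and 0 <= nb[1] < cost.shape[1]:
                ng = g + cost[nb]
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    counter += 1
                    heapq.heappush(open_set, (ng + h(nb), ng, counter, nb, cur))
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature_response = rng.random((64, 64))      # stand-in for the extractor's output
    richness = feature_richness(feature_response)
    cost = 1.0 + 5.0 * (1.0 - richness)          # cheap to fly where entropy is high
    path = astar(cost, start=(0, 0), goal=(63, 63))
    print(f"planned {len(path)} waypoints")
```
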
About the Author

E. V. Rulko
Military Academy of the Republic of Belarus
Belarus

Eugene Rulko, PhD, Associate Professor in Computer Science; Head of the Research Laboratory of Military Operation Simulation

Minsk




For citations:


Rulko E.V. Terrain relative navigation based on deep feature template matching and visual odometry. «System analysis and applied information science». 2025;(1):4-19. https://doi.org/10.21122/2309-4923-2025-1-4-19



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2309-4923 (Print)
ISSN 2414-0481 (Online)