Abstract
Hazard detection is a key technique for autonomous landing in planetary exploration missions. This paper proposes an end-to-end spatiotemporal network that detects hazards in image sequences captured by an optical camera. The spatial stream processes colour image sequences, while the temporal stream learns features from optical-flow image sequences. Spatial and temporal features are fused by the proposed metric fusion method, and the fused spatiotemporal features are made more discriminative through a triplet loss. In the testing phase, a hazard map is obtained by processing the full-size image sequences. The evaluation results demonstrate the effectiveness of the proposed network, and the testing results show its feasibility for practical application.
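For readers unfamiliar with the triplet loss mentioned above: it pulls feature vectors of same-class samples (e.g. two hazard patches) together while pushing different-class samples (a hazard patch and a safe-terrain patch) apart. A minimal NumPy sketch with Euclidean distance; the margin value and the toy 4-D embeddings are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on feature vectors: penalise cases where
    the positive is not closer to the anchor than the negative by `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # anchor-to-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-to-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical fused spatiotemporal embeddings (4-D for illustration).
anchor   = np.array([1.0, 0.0, 0.0, 0.0])  # hazard patch
positive = np.array([0.9, 0.1, 0.0, 0.0])  # another hazard patch
negative = np.array([0.0, 1.0, 0.0, 0.0])  # safe-terrain patch

loss = triplet_loss(anchor, positive, negative)
```

Here the positive already lies well inside the margin relative to the negative, so the loss is zero; swapping the positive and negative roles yields a positive loss that the training gradient would reduce.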
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.