Towards Dynamic and Small Objects Refinement for Unsupervised Domain Adaptive Nighttime Segmentation

Jingyi Pan1, Sihang Li1, Yucheng Chen1, Jinjing Zhu1, Lin Wang1,2,*,
1Artificial Intelligence Thrust, HKUST(GZ)
2Dept. of Computer Science & Engineering, HKUST
Visual Learning & Intelligent Systems LAB
arXiv
*Corresponding author

Videos of nighttime semantic segmentation comparisons on the Nighttime Driving test set.

Abstract

Nighttime semantic segmentation is essential for various applications, e.g., autonomous driving, yet it remains challenging due to poor illumination and the lack of well-annotated datasets. Unsupervised domain adaptation (UDA) has shown potential for addressing these challenges and has achieved remarkable results for nighttime semantic segmentation. However, existing methods still face two limitations: 1) they rely on style transfer or relighting models, which struggle to generalize to complex nighttime environments, and 2) they neglect dynamic and small objects such as vehicles and traffic signs, which are difficult to learn directly from other domains. This paper proposes a novel UDA method that refines both the label and feature levels for dynamic and small objects in nighttime semantic segmentation. First, we propose a dynamic and small object refinement module to complement the knowledge of dynamic and small objects, which are typically context-inconsistent under poor illumination. Then, we design a feature prototype alignment module to reduce the domain gap by applying contrastive learning between features and prototypes of the same class from different domains, while re-weighting the categories of dynamic and small objects. Extensive experiments on four benchmark datasets demonstrate that our method outperforms prior methods by a large margin on nighttime segmentation. Our code will be released soon.
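The feature prototype alignment idea described above can be illustrated with a minimal sketch: pixel embeddings from one domain are pulled toward the prototype of their class from the other domain and pushed away from other-class prototypes, with per-class weights up-weighting dynamic and small-object categories. This is a hypothetical illustration under assumed tensor shapes, not the authors' released implementation; the function name, temperature value, and weighting scheme are all assumptions.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(features, labels, prototypes,
                               class_weights, temperature=0.1):
    """Contrastive alignment between pixel features and cross-domain
    class prototypes, with category re-weighting (illustrative sketch).

    features:      (N, D) pixel embeddings sampled from one domain
    labels:        (N,)   class index of each embedding
    prototypes:    (C, D) per-class prototype vectors from the other domain
    class_weights: (C,)   larger values for dynamic/small-object classes
    """
    feats = F.normalize(features, dim=1)     # unit-length embeddings
    protos = F.normalize(prototypes, dim=1)  # unit-length prototypes
    # Cosine similarity of every feature to every class prototype.
    logits = feats @ protos.t() / temperature            # (N, C)
    # InfoNCE-style objective: the same-class prototype is the positive.
    per_pixel = F.cross_entropy(logits, labels, reduction="none")
    # Re-weight pixels by the importance of their class.
    weights = class_weights[labels]
    return (weights * per_pixel).sum() / weights.sum()
```

Raising the weight of classes like traffic signs or riders makes their alignment errors dominate the loss, which is one plausible way to realize the re-weighting described in the abstract.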


Our proposed framework.