An Unsupervised Person Re-Identification Method Based on Heterogeneous Convolutional Neural Networks Ensemble
Abstract
|
Person re-identification (re-ID) aims to identify images of a target person across different cameras. Because of the domain bias between different scenes, a re-ID model trained in one scene cannot be applied directly to another. To overcome this problem, existing unsupervised re-ID methods tend to obtain pseudo-labels with a clustering algorithm and then train the re-ID model on them. However, since the clustering results are inaccurate, such methods introduce a large number of noisy labels, which limits the generalization ability of the model. To mitigate the impact of noisy pseudo-labels, this paper proposes an unsupervised person re-identification method based on an ensemble of heterogeneous convolutional neural networks. The framework uses no manual annotation: it automatically infers the relationships between pedestrian images in the target domain and builds a cooperative trusted-instance selection mechanism that keeps only high-credibility pseudo-labels for training. A dual-branch heterogeneous convolutional neural network is designed to learn multiple discriminative pedestrian features, and memory structures store global features during training, which reduces the fluctuation caused by noisy labels and improves the robustness of the model. The method is validated on several public person re-ID datasets with good results: mAP reaches 85.4% on Market1501 and 74.8% on DukeMTMC-reID. |
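The pipeline the abstract describes (clustering-derived pseudo-labels, cooperative trusted-instance selection across two heterogeneous branches, and a memory of global features updated during training) can be sketched roughly as below. This is an illustrative simplification under stated assumptions, not the authors' implementation: the function names, the momentum value `m`, and the mutual-majority agreement rule used to decide which pseudo-labels are "trusted" are all assumptions made for the sketch.

```python
import numpy as np
from collections import Counter

def momentum_update(memory, feats, indices, m=0.2):
    """Smooth the stored global features with a momentum moving average,
    damping the fluctuation that noisy pseudo-labels cause across epochs.
    (m is an assumed hyperparameter, not a value from the paper.)"""
    memory = memory.copy()
    memory[indices] = (1 - m) * memory[indices] + m * feats
    # keep unit norm so cosine similarity against the memory stays well defined
    memory[indices] /= np.linalg.norm(memory[indices], axis=1, keepdims=True)
    return memory

def trusted_mask(labels_a, labels_b):
    """Cooperative trusted-instance selection (illustrative rule): a sample is
    trusted only when the cluster it falls into under branch A and the cluster
    it falls into under branch B are each other's majority partner."""
    pair_count = Counter(zip(labels_a, labels_b))
    maj_b, maj_a = {}, {}  # majority partner of each A-cluster / B-cluster
    for (a, b), n in pair_count.items():
        if a not in maj_b or n > maj_b[a][0]:
            maj_b[a] = (n, b)
        if b not in maj_a or n > maj_a[b][0]:
            maj_a[b] = (n, a)
    return np.array([maj_b[a][1] == b and maj_a[b][1] == a
                     for a, b in zip(labels_a, labels_b)])

# two heterogeneous branches cluster the same six images differently
a = [0, 0, 0, 1, 1, 2]
b = [5, 5, 6, 7, 7, 8]
mask = trusted_mask(a, b)  # sample 2 is rejected: the branches disagree on it
print(mask.tolist())       # [True, True, False, True, True, True]
```

In a full training loop, only the samples selected by `trusted_mask` would contribute to the loss, and `momentum_update` would refresh the memory with the current batch features after each step.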
Source
|
电子学报 (Acta Electronica Sinica), 2023, 51(10): 2902-2914 [Core Library]
|
DOI
|
10.12263/DZXB.20220467
|
Keywords
|
person re-identification; heterogeneous convolutional neural network; cooperative trusted instance selection; noise smoothing; adaptive update
|
Address
|
1. School of Cyber Security and Computer, Hebei University, Baoding 071000, Hebei, China
2. School of Information Science and Technology, Dalian Maritime University, Dalian 116026, Liaoning, China
|
Language
|
Chinese |
Document Type
|
Research article |
ISSN
|
0372-2112 |
Subject
|
Automation and computer technology |
Funding
|
National Natural Science Foundation of China; High-Level Talent Research Start-up Project of Hebei University
|
Record Number
|
CSCD:7607874
|
References (42 in total; first 20 listed)
|
1. Tang Z M. Harmonious multi-branch network for person re-identification with harder triplet loss. ACM Transactions on Multimedia Computing, Communications, and Applications, 2022, 18(4): 98. (CSCD cited: 1)
2. Ahmed E. An improved deep learning architecture for person re-identification. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015: 3908-3916. (CSCD cited: 3)
3. Kuang C. Person re-identification based on multi-granularity feature fusion network. Acta Electronica Sinica (电子学报), 2021, 49(8): 1541-1550. (CSCD cited: 6)
4. Zhang G Q. Close-set camera style distribution alignment for single camera person re-identification. Neurocomputing, 2022, 486: 93-103. (CSCD cited: 1)
5. Zhong Z. Invariance matters: Exemplar memory for domain adaptive person re-identification. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 598-607. (CSCD cited: 2)
6. Zhong Z. Generalizing a person retrieval model hetero- and homogeneously. European Conference on Computer Vision, 2018: 176-192. (CSCD cited: 1)
7. Ge Y X. Self-paced contrastive learning with hybrid memory for domain adaptive object re-ID. Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020: 11309-11321. (CSCD cited: 5)
8. Wang Z D. CycAs: Self-supervised cycle association for learning re-identifiable descriptions. European Conference on Computer Vision, 2020: 72-88. (CSCD cited: 2)
9. Fan H H. Unsupervised person re-identification: Clustering and fine-tuning. ACM Transactions on Multimedia Computing, Communications, and Applications, 2018, 14(4): 83. (CSCD cited: 5)
10. Lin Y T. A bottom-up clustering approach to unsupervised person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 8738-8745. (CSCD cited: 4)
11. Wang D K. Unsupervised person re-identification via multi-label classification. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 10978-10987. (CSCD cited: 2)
12. Chen H. ICE: Inter-instance contrastive encoding for unsupervised person re-identification. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2022: 14940-14949. (CSCD cited: 1)
13. Li M K. The devil in the tail: Cluster consolidation plus cluster adaptive balancing loss for unsupervised person re-identification. Pattern Recognition, 2022, 129: 108763. (CSCD cited: 1)
14. Li M K. Cluster-guided asymmetric contrastive learning for unsupervised person re-identification. IEEE Transactions on Image Processing, 2022, 31: 3606-3617. (CSCD cited: 5)
15. Cho Y. Part-based pseudo label refinement for unsupervised person re-identification. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 7298-7308. (CSCD cited: 2)
16. Zhang X. Refining pseudo labels with clustering consensus over generations for unsupervised object re-identification. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 3435-3444. (CSCD cited: 1)
17. Xuan S Y. Intra-inter camera similarity for unsupervised person re-identification. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 11921-11930. (CSCD cited: 1)
18. Yang F X. Joint noise-tolerant learning and meta camera shift adaptation for unsupervised person re-identification. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 4853-4862. (CSCD cited: 2)
19. Wang M L. Camera-aware proxies for unsupervised person re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 2764-2772. (CSCD cited: 2)
20. Zhuang W M. Joint optimization in edge-cloud continuum for federated unsupervised person re-identification. Proceedings of the 29th ACM International Conference on Multimedia, 2021: 433-441. (CSCD cited: 2)