
基于弱标签争议的半自动分类数据标注方法
The Semi-Automatic Classification Data Labeling Method Based on Dispute About Weak Label


李自强 1, 杨薇 2*, 杨先凤 2, 罗林 3
Abstract  Deep active learning (DAL) has achieved notable success in classification data labeling, but selecting the samples that most improve model performance remains a difficult problem. This paper proposes a semi-automatic classification data labeling method based on disputes about weak labels (Dispute about Weak Label based Deep Active Learning, DWLDAL), which iteratively selects the samples the model finds hardest to distinguish and hands them to human annotators for accurate labeling. The method contains a pseudo-label generator and weak-label generators: the pseudo-label generator is trained on the accurately labeled dataset and produces pseudo labels for the unlabeled data, while each weak-label generator is trained on a random subset of the pseudo-labeled data. A committee of weak-label generators decides which unlabeled samples are the most disputed; those samples are labeled manually. For the text classification problem, experiments are conducted on the public datasets IMDB (Internet Movie DataBase), 20NEWS (20NEWSgroup), and chnsenticorp (chnsenticorp_htl_all), and three different voting decision methods are evaluated from two perspectives: the accuracy of data labeling and of the downstream classification task. The F1 score of data labeling with DWLDAL is 30.22%, 14.07%, and 2.57% higher than that of the existing method Snuba on the three datasets, respectively, and the F1 score of the classification task is 1.01%, 22.72%, and 4.83% higher than Snuba's.
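To make the labeling loop described in the abstract concrete, below is a minimal, hypothetical Python sketch of one DWLDAL-style iteration. The TF-IDF and logistic-regression models, the subset fraction, and the vote-entropy dispute score are illustrative assumptions only; the paper uses its own generators and compares three voting decision methods that are not specified here.

# Hypothetical sketch of one DWLDAL-style labeling round (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def dwldal_round(labeled_texts, labeled_y, unlabeled_texts,
                 committee_size=5, subset_frac=0.7, query_size=20, seed=0):
    """Return indices of the most disputed unlabeled samples for manual labeling."""
    rng = np.random.default_rng(seed)
    vec = TfidfVectorizer(max_features=20000)
    X_lab = vec.fit_transform(labeled_texts)
    X_unl = vec.transform(unlabeled_texts)

    # 1. Pseudo-label generator: trained on the accurately labeled pool and
    #    used to assign pseudo labels to every unlabeled sample.
    pseudo_model = LogisticRegression(max_iter=1000).fit(X_lab, labeled_y)
    pseudo_y = pseudo_model.predict(X_unl)

    # 2. Weak-label generators: each committee member is trained on a random
    #    subset of the pseudo-labeled data and votes on all unlabeled samples.
    n = X_unl.shape[0]
    votes = []
    for _ in range(committee_size):
        idx = rng.choice(n, size=int(subset_frac * n), replace=False)
        weak_model = LogisticRegression(max_iter=1000).fit(X_unl[idx], pseudo_y[idx])
        votes.append(weak_model.predict(X_unl))
    votes = np.vstack(votes)  # shape: (committee_size, n)

    # 3. Dispute score: vote entropy over committee predictions (one possible
    #    voting decision; the paper evaluates three different schemes).
    classes = np.unique(pseudo_y)
    counts = np.stack([(votes == c).sum(axis=0) for c in classes], axis=1)
    probs = counts / committee_size
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

    # 4. The most disputed samples are handed to human annotators.
    return np.argsort(entropy)[::-1][:query_size]

The returned indices would then be labeled by hand, merged into the accurately labeled pool, and the round repeated, which mirrors the iterative selection described in the abstract.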
Source  Acta Electronica Sinica (电子学报), 2024, 52(8): 2891-2899 [Core collection]
DOI 10.12263/DZXB.20230648
Keywords  deep active learning; text classification; pseudo-label generator; weak-label generator; voting committee
Address

1. College of Film, Television and Media, Sichuan Normal University, Chengdu, Sichuan 610066

2. School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu, Sichuan 610500

3. Chengdu R&D Center, Tellhow Software Co., Ltd., Chengdu, Sichuan 610041

Language  Chinese
Document type  Research article
ISSN 0372-2112
Discipline  Automation and computer technology
Funding  Key R&D Program of Sichuan Provincial Department of Science and Technology; National Natural Science Foundation of China
Accession number  CSCD:7813008

References (19)

1. Cao Z H. Weak human preference supervision for deep reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(12): 5369-5378
2. 何雨航. Research on open-domain question answering technology based on deep neural networks and weakly supervised learning. 2022
3. Ren P Z. A survey of deep active learning. ACM Computing Surveys, 2022, 54(9): 1-40
4. Trust P. Understanding the influence of news on society decision making: Application to economic policy uncertainty. Neural Computing and Applications, 2023, 35(20): 14929-14945
5. Ratner A. Snorkel: Rapid training data creation with weak supervision. The VLDB Journal, 2020, 29(2): 709-730
6. Varma P. Snuba: Automating weak supervision to label training data. Proceedings of the VLDB Endowment, 2018, 12(3): 223-236
7. Park Y. Distribution aware active learning via Gaussian mixtures. The International Conference on Learning Representations, 2023: 1-22
8. Chen Y K. Applying active learning to high-throughput phenotyping algorithms for electronic health records data. Journal of the American Medical Informatics Association, 2013, 20(e2): e253-e259
9. Goudjil M. A novel active learning method using SVM for text classification. International Journal of Automation and Computing, 2018, 15(3): 290-298
10. Buchert F. Toward label-efficient neural network training: Diversity-based sampling in semi-supervised active learning. IEEE Access, 2023, 11: 5193-5205
11. Zhou S S. Active deep networks for semi-supervised sentiment classification. Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), 2010: 1515-1523
12. Bhattacharjee S D. Active learning based news veracity detection with feature weighting and deep-shallow fusion. 2017 IEEE International Conference on Big Data, 2017: 556-565
13. Lison P. skweak: Weak supervision made easy for NLP. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, 2021: 337-346
14. Liu P. A survey on active deep learning: From model driven to data driven. ACM Computing Surveys, 2022, 54(10s): 1-34
15. Liu J. Attention-based BiGRU-CNN for Chinese question classification. Journal of Ambient Intelligence and Humanized Computing, 2019, 12(2): 709-730
16. Devlin J. BERT: Pre-training of deep bidirectional transformers for language understanding. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019: 4171-4186
17. Gao Z W. Short text aspect-based sentiment analysis based on CNN + BiGRU. Applied Sciences, 2022, 12(5): 2707
18. Safranchik E. Weakly supervised sequence tagging from noisy rules. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(4): 5570-5578
19. Beluch W H. The power of ensembles for active learning in image classification. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 9368-9377
Citing literature (1)

1. 王培晓. Research progress and development trends in geospatial intelligent prediction (地理空间智能预测研究进展与发展趋势). Journal of Geo-information Science (地球信息科学学报), 2025, 27(1): 60-82
