基于改进主题分布特征的神经网络语言模型
Neural Network Language Modeling Using an Improved Topic Distribution Feature
Abstract
Appending a feature vector representing the current word's topic to the input of a Recurrent Neural Network (RNN) language model is an effective way to exploit long-span history information. Since the topic probability distributions usually differ greatly across documents, this paper proposes a method that uses the document-level topic probabilities to improve the current word's topic feature, and applies the improved feature to a recurrent language model based on Long Short-Term Memory (LSTM) units. Experiments show that on the PTB dataset the proposed method reduces the language model's perplexity by 11.8% relative to the baseline. In N-best rescoring experiments on the SWBD dataset, the proposed feature gives the LSTM model a 6.0% relative Word Error Rate (WER) reduction over the baseline; on the WSJ dataset it gives a 6.8% relative WER reduction, and on the eval92 test set the improved Latent Dirichlet Allocation (LDA) feature makes the RNN perform comparably to the LSTM.
English Abstract
Attaching topic features to the input of Recurrent Neural Network (RNN) language models is an effective way to leverage distant contextual information. To cope with the fact that topic distributions may vary greatly among documents, this paper proposes an improved topic feature that uses the topic distributions of documents, and applies it to a recurrent Long Short-Term Memory (LSTM) language model. Experiments show that the proposed feature achieves an 11.8% relative perplexity reduction on the Penn Treebank (PTB) dataset, and 6.0% and 6.8% relative Word Error Rate (WER) reductions on the SWitchBoarD (SWBD) and Wall Street Journal (WSJ) speech recognition tasks respectively. On the WSJ task, an RNN with this feature matches the LSTM on the eval92 test set.
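One plausible reading of "improving the current word's topic feature with the document topic distribution" is a Bayesian reweighting: the word's LDA topic likelihood p(w | k) is multiplied by the document's topic mixture p(k | d) and renormalized, giving the posterior p(k | w, d) as the feature vector fed to the LSTM alongside the word embedding. The sketch below illustrates that idea only; the function and variable names (`improved_topic_feature`, `phi`, `theta_d`) are assumptions, not the paper's actual implementation.

```python
import numpy as np

def improved_topic_feature(phi, theta_d, w):
    """Topic posterior p(k | w, d) ∝ p(w | k) * p(k | d): the word's
    LDA topic likelihood reweighted by the document topic distribution.

    phi     : (K, V) matrix, phi[k, v] = p(word v | topic k)
    theta_d : (K,)   vector, document's topic distribution p(k | d)
    w       : int    index of the current word
    """
    scores = phi[:, w] * theta_d      # elementwise product over K topics
    return scores / scores.sum()      # renormalize to a distribution

# Toy example: K = 2 topics, V = 3 words.
phi = np.array([[0.7, 0.2, 0.1],      # p(word | topic 0)
                [0.1, 0.3, 0.6]])     # p(word | topic 1)
theta_d = np.array([0.9, 0.1])        # this document leans toward topic 0
feat = improved_topic_feature(phi, theta_d, w=0)
```

The resulting vector `feat` would then be concatenated with the word embedding at each time step as extra LSTM input, the same place the paper attaches its topic feature.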
Source
电子与信息学报 (Journal of Electronics & Information Technology), 2018, 40(1): 219-225 [Core Library]
DOI
10.11999/jeit170219
Keywords
speech recognition; language model; Latent Dirichlet Allocation; Long Short-Term Memory
Address
1. Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
3. Xinjiang Laboratory of Minority Speech and Language Information Processing, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
Language
Chinese
Document Type
Research article
ISSN
1009-5896
Subject
Automation and Computer Technology
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China; Science and Technology Major Project of Xinjiang Uygur Autonomous Region
Accession Number
CSCD:6158980
References (19)
1. Mikolov T. Recurrent neural network based language model. INTERSPEECH, 2010: 1045-1048. (Cited 9 times)
2. Mikolov T. Learning longer memory in recurrent neural networks. (Cited 1 time)
3. Medennikov I. LSTM-based language models for spontaneous speech recognition. International Conference on Speech and Computer, 2016: 469-475. (Cited 1 time)
4. Huang Z. Cache based recurrent neural network language model inference for first pass speech recognition. IEEE International Conference on Acoustics, Speech and Signal Processing, 2014: 6354-6358. (Cited 1 time)
5. Coccaro N. Towards better integration of semantic predictors in statistical language modeling. International Conference on Spoken Language Processing, 1998: 2403-2406. (Cited 1 time)
6. Khudanpur S. Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling. Computer Speech & Language, 2000, 14(4): 355-372. (Cited 3 times)
7. Lau R. Trigger-based language models: A maximum entropy approach. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2002: 45-48. (Cited 1 time)
8. Echeverry-Correa J D. Topic identification techniques applied to dynamic language model adaptation for automatic speech recognition. Expert Systems with Applications, 2015, 42(1): 101-112. (Cited 1 time)
9. Mikolov T. Context dependent recurrent neural network language model. Spoken Language Technology Workshop, 2012: 234-239. (Cited 1 time)
10. 张剑. Recurrent neural network language model based on word vector features. 模式识别与人工智能 (Pattern Recognition and Artificial Intelligence), 2015(4): 299-305. (Cited 15 times)
11. Gong C. Recurrent neural network language model with part-of-speech for Mandarin speech recognition. International Symposium on Chinese Spoken Language Processing, 2014: 459-463. (Cited 1 time)
12. 左玲云. Rescoring methods based on LSTM-DNN language models for conversational telephone speech recognition. 重庆邮电大学学报(自然科学版) (Journal of Chongqing University of Posts and Telecommunications, Natural Science Edition), 2016, 28(2): 180-186. (Cited 4 times)
13. 王龙. Parallel optimization algorithm for Chinese language models based on recurrent neural networks. 应用科学学报 (Journal of Applied Sciences), 2015, 33(3): 253-261. (Cited 1 time)
14. Bojanowski P. Enriching word vectors with subword information. (Cited 1 time)
15. Ganguly D. Word embedding based generalized language model for information retrieval. The International ACM SIGIR Conference, 2015: 795-798. (Cited 1 time)
16. Li X. Recurrent neural network training with preconditioned stochastic gradient descent, 2016. (Cited 1 time)
17. Blei D M. Latent Dirichlet allocation. Journal of Machine Learning Research, 2003, 3: 993-1022. (Cited 1298 times)
18. Bhutada S. Semantic latent Dirichlet allocation for automatic topic extraction. Journal of Information & Optimization Sciences, 2016, 37(3): 449-469. (Cited 2 times)
19. Marcus M P. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 1993, 19(2): 313-330. (Cited 32 times)