Journal Article Details
ETRI Journal
A Multi-Strategic Concept-Spotting Approach for Robust Understanding of Spoken Korean
Keywords: spontaneous speech; concept spotting; information extraction; robust spoken language understanding
DOI: 10.4218/etrij.07.0106.0204
【 Abstract 】

We propose a multi-strategic concept-spotting approach for robust spoken language understanding of conversational Korean in hostile recognition environments such as in-car navigation and telebanking services. The concept-spotting method adopts a partial semantic understanding strategy within a given specific domain: it directly extracts pre-defined meaning-representation slot values from spoken language input. Although the understanding is only partial, the information needed to build useful applications can be acquired efficiently because the meaning-representation slots are designed for specific domain-oriented understanding tasks. On top of this concept-spotting approach, we also propose a multi-strategic method, such as voting among individual spotters. We present experiments on a variety of spoken Korean data to verify the feasibility of these methods.
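For illustration only, the following Python sketch shows the general idea described in the abstract: each spotting strategy maps an utterance to values for pre-defined meaning-representation slots, and the slot hypotheses produced by several strategies are combined by a simple majority vote. The navigation domain, the DEST slot, the toy keyword and pattern rules, and all function names are assumptions invented for this sketch; they do not reproduce the spotters evaluated in the paper.

# Illustrative sketch only, not the authors' system: concept spotting is treated
# here as extracting pre-defined slot values from an utterance, and several
# spotting strategies are combined by a per-slot majority vote. The domain,
# slot name "DEST", and the toy rules below are assumptions for this example.

from collections import Counter
from typing import Callable, Dict, List

Slots = Dict[str, str]


def keyword_spotter(utterance: str) -> Slots:
    """Toy strategy 1: fill the DEST slot from a small keyword lexicon."""
    lexicon = {"station": "station", "airport": "airport"}
    slots: Slots = {}
    for keyword, value in lexicon.items():
        if keyword in utterance:
            slots["DEST"] = value
    return slots


def pattern_spotter(utterance: str) -> Slots:
    """Toy strategy 2: fill the DEST slot from a 'to X' surface pattern."""
    slots: Slots = {}
    tokens = utterance.split()
    for i, token in enumerate(tokens[:-1]):
        if token == "to":
            slots["DEST"] = tokens[i + 1]
    return slots


def vote(strategies: List[Callable[[str], Slots]], utterance: str) -> Slots:
    """Combine the slot hypotheses of all strategies by per-slot majority vote."""
    ballots: Dict[str, Counter] = {}
    for spot in strategies:
        for slot, value in spot(utterance).items():
            ballots.setdefault(slot, Counter())[value] += 1
    return {slot: counts.most_common(1)[0][0] for slot, counts in ballots.items()}


if __name__ == "__main__":
    utterance = "please guide me to airport"
    print(vote([keyword_spotter, pattern_spotter], utterance))
    # -> {'DEST': 'airport'} when both toy strategies agree

In the setting the abstract describes, each strategy would presumably be a trained domain-specific spotter rather than a hand-written rule, but combining their hypotheses by voting at the slot level follows the same pattern.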

【 License 】

   

【 Preview 】
Attachments
20150520112522524.pdf  615 KB  PDF