Journal Article Details
Algorithms
Knowledge Distillation-Based Multilingual Code Retrieval
Junfei Xu1  Wen Li1  Qi Chen1 
[1] College of Computer Science and Technology, Zhejiang University, Hangzhou 310013, China;
Keywords: multilingualities; code search; knowledge distillation
DOI: 10.3390/a15010025
Source: DOAJ
【 Abstract 】

Semantic code retrieval is the task of retrieving relevant code based on a natural language query. Although it is related to other information retrieval tasks, it must bridge the gap between the language used in code (which is usually syntax- and logic-specific) and natural language, which is better suited to describing ambiguous concepts and ideas. Existing approaches study natural-language code retrieval for a single programming language; however, this is unwieldy in multilingual scenarios and often requires a large corpus for each language. This paper proposes a method to support multi-programming-language code search: knowledge distillation from six existing monolingual teacher models is used to train one student model, MPLCS (Multi-Programming Language Code Search). MPLCS incorporates multiple languages into one model with low corpus requirements, learns the commonalities between different programming languages, and improves recall accuracy for programming languages with small datasets. For Ruby, the small-dataset language used in this paper, MPLCS improves the MRR score by 20 to 25%. In addition, MPLCS compensates for the low recall accuracy of monolingual models when they retrieve code in programming languages other than their own, and in some cases MPLCS even outperforms monolingual models on the languages they were trained on.
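
The distillation setup the abstract describes, six frozen monolingual teacher models guiding one shared student retrieval model, can be sketched as below. This is an illustrative reconstruction, not the authors' released code: the six-language list, the model interfaces, the in-batch similarity matrices, the KL-divergence objective, and the temperature value are all assumptions for illustration.

    # Minimal sketch of multi-teacher knowledge distillation for code search.
    # All names and hyperparameters here are hypothetical.
    import torch
    import torch.nn.functional as F

    LANGS = ["python", "java", "javascript", "php", "go", "ruby"]  # assumed six languages

    def distillation_loss(student_q, student_c, teacher_q, teacher_c, temperature=2.0):
        """Match the student's query-code similarity distribution to the teacher's.

        student_q / student_c: (batch, dim) query and code embeddings from the student.
        teacher_q / teacher_c: (batch, dim) embeddings from a monolingual teacher.
        """
        # In-batch similarity matrices: entry (i, j) scores query i against code j.
        s_logits = student_q @ student_c.T / temperature
        t_logits = teacher_q @ teacher_c.T / temperature
        # KL divergence pushes the student's ranking distribution toward the teacher's;
        # the temperature**2 factor is the standard distillation gradient rescaling.
        return F.kl_div(F.log_softmax(s_logits, dim=-1),
                        F.softmax(t_logits, dim=-1),
                        reduction="batchmean") * temperature ** 2

    def training_step(student, teachers, batches, optimizer):
        """One multilingual step: each language's batch is scored by its own teacher."""
        optimizer.zero_grad()
        total = torch.zeros(())
        for lang, (queries, codes) in zip(LANGS, batches):
            with torch.no_grad():                  # teachers stay frozen
                t_q, t_c = teachers[lang](queries, codes)
            s_q, s_c = student(queries, codes)     # one shared student for all languages
            total = total + distillation_loss(s_q, s_c, t_q, t_c)
        total.backward()
        optimizer.step()
        return total.item()

At evaluation time, retrieval quality would be measured with MRR (mean reciprocal rank) over candidate code snippets, consistent with the 20 to 25% MRR gain the abstract reports for Ruby.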

【 License 】

Unknown   
