Research on Computational Methods for Optimizing the Teaching Content of Translation Based on Semantic Association Network Models

Peng Li1, Yunxuan Zhang2
1School of Foreign Languages, Wuhan Polytechnic University, Wuhan, Hubei, 430048, China
2School of Foreign Languages, Wuhan City Polytechnic, Wuhan, Hubei, 430000, China

Abstract

Existing translation teaching content has certain deficiencies, so this paper discusses computational methods for optimizing translation teaching content by combining a semantic association network model. A domain translation model with joint semantic information is proposed: it constructs a bilingual mapping of domain-specific word vectors to obtain the semantic k-nearest neighbors of words in a specific domain, so as to estimate the degree to which words are mutual translations in that domain and improve the adaptive ability of the domain translation model. A semantic similarity computation model (SRoberta-SelfAtt) incorporating the RoBERTa pre-training model is then proposed. The model incorporates a self-attention mechanism to extract the associations among different words within a text, thereby acquiring richer sentence-vector information. The proposed domain translation model obtains more accurate translation results while spending less time, and the SRoberta-SelfAtt model shows higher iterative stability than the basic model. The RoBERTa-based semantic similarity computation model effectively improves the performance of the word vector model. The experimental results show that the domain translation model with joint semantic information and the SRoberta-SelfAtt model are well suited to the task of optimizing translation teaching content.

Keywords: Semantic similarity computation, domain translation model, semantic k-nearest neighbor words, RoBERTa pre-training model, translation teaching
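The semantic k-nearest-neighbor retrieval mentioned in the abstract can be sketched as a cosine-similarity lookup over word vectors. The sketch below is illustrative only: the function name `semantic_knn`, the toy vocabulary, and the random embedding table are assumptions, not the paper's actual model or data.

```python
import numpy as np

def semantic_knn(query, vocab, vectors, k=3):
    """Return the k vocabulary words whose embeddings are most
    cosine-similar to the query word's embedding (an illustrative
    stand-in for the paper's semantic k-nearest-neighbor step)."""
    q = vectors[vocab.index(query)]
    # Normalize rows so that a dot product equals cosine similarity.
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ (q / np.linalg.norm(q))
    # Sort by descending similarity, excluding the query word itself.
    order = np.argsort(-sims)
    return [vocab[i] for i in order if vocab[i] != query][:k]

# Toy embedding table (random but seeded, for illustration only).
rng = np.random.default_rng(0)
vocab = ["protocol", "procedure", "treatment", "banana", "contract"]
vectors = rng.normal(size=(5, 8))
# Make "procedure" deliberately close to "protocol" so the toy
# example has a clear nearest neighbor.
vectors[1] = vectors[0] + 0.05 * rng.normal(size=8)

print(semantic_knn("protocol", vocab, vectors, k=2))
```

In the paper's setting, the embedding table would instead come from domain-specific bilingual word vectors, and the retrieved neighbors would feed the estimate of how strongly two words inter-translate within the domain.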