With the rapid development of natural language processing technology, machine translation plays an increasingly important role in cross-lingual information exchange. In this paper, we propose a translation paradigm for long English texts based on the self-attention mechanism and introduce several improvement strategies to enhance model performance. The model’s ability to process long English texts is improved by introducing multi-head attention and hierarchical self-attention modules, and the long-text translation paradigm is further optimized with techniques such as residual connections, layer normalization, and dynamic memory networks. A series of experiments verifies the effectiveness of the improved model on the English long-text translation task. The translation paradigm constructed in this paper outperforms the Transformer model and related variants on both CPU and GPU, although the Transformer outperforms our model in terms of n-gram accuracy in real translation experiments. The BLEU scores of the improved model on the News and other datasets are significantly higher than those of the original baseline model, which verifies the effectiveness of the proposed improvement strategies and provides a reference for solving the problem of long English text translation.
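As a rough illustration of the building block referred to above, the sketch below composes multi-head self-attention with a residual connection and layer normalization in PyTorch. It is a minimal, standard encoder-style block, not the authors' released code; the class name, dimensions, and the added feed-forward sub-layer are illustrative assumptions, and the hierarchical self-attention and dynamic memory components discussed in the paper are not shown.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's actual code):
# multi-head self-attention followed by a residual connection and layer normalization.
import torch
import torch.nn as nn


class SelfAttentionBlock(nn.Module):
    """One encoder-style block: multi-head self-attention + residual + layer norm."""

    def __init__(self, embed_dim: int = 512, num_heads: int = 8, ff_dim: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        # Position-wise feed-forward sub-layer, as in a standard Transformer encoder.
        self.ff = nn.Sequential(
            nn.Linear(embed_dim, ff_dim), nn.ReLU(), nn.Linear(ff_dim, embed_dim)
        )
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention over the whole sequence, then add the residual and normalize.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Feed-forward sub-layer with its own residual connection and normalization.
        x = self.norm2(x + self.ff(x))
        return x


# Toy usage: a batch of 2 "sentences", 128 tokens each, 512-dimensional embeddings.
tokens = torch.randn(2, 128, 512)
print(SelfAttentionBlock()(tokens).shape)  # torch.Size([2, 128, 512])
```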