In this paper, we apply a large language model to business English translation and context analysis. We propose an adaptive parameter-unfreezing method, based on the quantization difference between adjacent decoder layers, that fine-tunes the layers of the language model most relevant to the translation task and helps explain the model's behavior in those layers. On top of the traditional encoder-decoder framework, we further propose a dual encoder-decoder framework that combines different encoders and apply it to context analysis in business English translation. The proposed fine-tuning method significantly improves the translation quality of the language model, especially in the English-X trilingual setting, where it raises the COMET and BLEU metrics by 3.22 and 2.58 points, respectively. In addition, the proposed dual encoder-decoder model is well suited to contextual analysis in business English translation and significantly improves its performance: the F1 score on the HIT-CDTB dataset is 11.60% higher than that of Rutherford's model. The experiments demonstrate that the proposed methods advance research on analyzing contextual relations in business English texts.
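To make the adaptive unfreezing idea concrete, the following is a minimal sketch of one plausible realization: each decoder layer's weights are quantized, the resulting per-layer quantization error is measured, and a layer is unfrozen when the error gap to its preceding layer is large relative to the average gap. The abstract does not specify the exact criterion, so the quantization scheme, the gap-based rule, the `threshold` parameter, and all function names here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def quantize(w, bits=8):
    # Uniform symmetric quantization of a weight matrix (illustrative scheme).
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def quantization_error(w, bits=8):
    # Mean absolute error introduced by quantizing this layer's weights.
    return float(np.mean(np.abs(w - quantize(w, bits))))

def select_layers_to_unfreeze(layer_weights, threshold=1.0):
    # Hypothetical rule: unfreeze layer i (i >= 1) when the quantization-error
    # gap to layer i-1 exceeds `threshold` times the mean gap across layers.
    errors = [quantization_error(w) for w in layer_weights]
    gaps = [abs(errors[i] - errors[i - 1]) for i in range(1, len(errors))]
    mean_gap = sum(gaps) / len(gaps)
    return [i for i, g in enumerate(gaps, start=1) if g > threshold * mean_gap]

# Toy stand-in for decoder layer weights (real use would read model parameters).
rng = np.random.default_rng(0)
layers = [rng.normal(0, 0.02 * (1 + i % 3), size=(16, 16)) for i in range(6)]
unfrozen = select_layers_to_unfreeze(layers)
print(unfrozen)
```

In a real fine-tuning loop, the indices returned by `select_layers_to_unfreeze` would mark the decoder layers whose parameters have gradients enabled while all other layers stay frozen.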