In this paper, information theory and information metrics are used to obtain an approximate estimate of linguistic information entropy. A binary (bigram) model of a large-scale corpus and foreign-language words is then established, an N-Gram model is constructed, and the information entropy of modern foreign-language speech is estimated. Finally, the N-Gram model is used to statistically analyze interpreting information loss, comparing the information transfer rate of foreign-language speeches with the subjects' interpreting performance. The results show that information loss is prominent, with many types of loss, high frequency, and severe losses. The T assertions exhibited propositional information loss of 8.61%–18.95%, constituent information loss of 3.0%–7.6%, and an overall loss rate of 49.68%. The data on information loss for each language component show that, among the 6 types of propositional information loss, TPO occurred most frequently and SPE least frequently, at 67 and 1 instances, respectively. Among the 13 types of information-component loss, TFLS occurred most frequently while TLE and SFLO occurred least frequently, with 55, 1, and 1 instances, respectively. In the interpreted English-speech material, the narration rate was 2.25 words per second and the average information transfer rate was 13.45 bits per second. Among the T assertions, items numbered S7, S4, and S9 showed the highest propositional untranslated rate (21.8%), propositional mistranslation rate (23.5%), and propositional information loss rate (44.5%), respectively; the corresponding lowest values occurred at S4 (2.7%), S5 (1.8%), and S4 (2.8%).
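The bigram-based entropy estimation described above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes a maximum-likelihood bigram model with no smoothing, and estimates the per-word conditional entropy H(W_i | W_{i-1}) in bits over a toy word sequence (the `bigram_entropy` function and the sample corpus are illustrative only):

```python
from collections import Counter
import math

def bigram_entropy(words):
    """Estimate per-word conditional entropy H(W_i | W_{i-1}) in bits
    using maximum-likelihood bigram probabilities (no smoothing)."""
    # Count contexts over all positions that have a successor.
    unigrams = Counter(words[:-1])
    bigrams = Counter(zip(words, words[1:]))
    total = len(words) - 1  # number of bigram tokens

    h = 0.0
    for (w1, w2), c in bigrams.items():
        p_pair = c / total           # joint probability P(w1, w2)
        p_cond = c / unigrams[w1]    # conditional probability P(w2 | w1)
        h -= p_pair * math.log2(p_cond)
    return h

corpus = "the cat sat on the mat the cat ran".split()
print(round(bigram_entropy(corpus), 3))  # entropy in bits per word
```

On a real corpus one would add smoothing (e.g. add-one or Kneser-Ney) before estimating entropy, since unseen bigrams make the MLE estimate unreliable; the same per-word entropy figure, multiplied by a speech rate in words per second, yields a transfer rate in bits per second of the kind reported above.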