Graph neural networks are an effective method for action recognition from human skeletal data, but previous recognition methods pay insufficient attention to spatial features. To address this deficiency, this paper studies action recognition based on ST-GCN. An action recognition network built on two-stream skeletal joint information is proposed: the human body is divided into several parts to compute representation vectors, and a graph convolutional neural network is trained to produce the classification results. An attention mechanism is designed to suppress the effect of background noise, and data augmentation by flipping and shifting is applied to improve model performance. Ablation experiments verify that accuracy is high when the attention mask matrix and the global self-attention mechanism are used together, and likewise when the joint-level and part-level network branches are combined. The proposed model recognizes all 12 actions of the NW-UCLA dataset with accuracy above 92%, and the benefit of data augmentation is also verified.
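The flipping and shifting augmentation mentioned above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the array layout `(frames, joints, 3)`, the joint count, and the function names are assumptions for the sketch.

```python
import numpy as np

def flip_skeleton(joints, axis=0):
    """Mirror joint coordinates along one spatial axis.

    joints: array of shape (frames, num_joints, 3) -- an assumed layout.
    """
    flipped = joints.copy()
    flipped[..., axis] = -flipped[..., axis]
    return flipped

def shift_skeleton(joints, max_shift=0.1, rng=None):
    """Translate the whole skeleton by a small random 3-D offset."""
    rng = np.random.default_rng() if rng is None else rng
    offset = rng.uniform(-max_shift, max_shift, size=3)
    return joints + offset

# Example: a random clip of 30 frames with 20 joints (joint count is illustrative)
clip = np.random.rand(30, 20, 3)
aug = shift_skeleton(flip_skeleton(clip))
print(aug.shape)  # (30, 20, 3)
```

Both operations preserve the skeleton's topology, so the same graph adjacency can be reused for augmented samples.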