Utilitas Algorithmica (UA)
ISSN: xxxx-xxxx (print)
Utilitas Algorithmica (UA) is a premier, open-access international journal dedicated to advancing algorithmic research and its applications. Launched to drive innovation in computer science, UA publishes high-impact theoretical and experimental papers addressing real-world computational challenges. The journal underscores the vital role of efficient algorithm design in navigating the growing complexity of modern applications. Spanning domains such as parallel computing, computational geometry, artificial intelligence, and data structures, UA is a leading venue for groundbreaking algorithmic studies.
- Research article
- https://doi.org/10.61091/jcmcc127b-164
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2911-2931
- Published Online: 16/04/2025
Syntactic analysis is a fundamental task in natural language processing that explores the syntactic structures in sentences and the relations among them. This paper first describes the basic approach to syntactic analysis and explores a computational method for classifying Chinese syntactic structures built on large-scale corpus construction. A grid-based model for constructing and distributing a large-scale corpus is then developed. The pre-trained language model BERT supplies word embeddings; the captured semantic features are fed into a Bi-LSTM model to extract bidirectional contextual sequence information, and a Conditional Random Field (CRF) produces the final classification of Chinese syntactic structures. Through manual proofreading and confidence calculation, the average classification accuracy over the final Chinese canonical corpus rises from 94.21% to 99.06%, an improvement of 4.85 percentage points. For the "complement structure" and "object structure" categories, the BERT-Bi-LSTM-CRF1 and BERT-Bi-LSTM-CRF2 models achieve higher classification accuracy than the BERT model, the Bi-LSTM-CRF model, and the BERT-Bi-LSTM-CRF3 model across all syntactic structures. Meanwhile, the annotation accuracy of the BERT-Bi-LSTM-CRF model combined with manual checking differs from fully manual annotation by only 0.66%, while the average time spent falls by 37.04%. This reduces the annotators' workload and improves annotation efficiency, verifying the validity and practicality of the proposed model for automatic classification of Chinese syntactic structures.
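The CRF at the end of such a tagging pipeline is typically decoded with the Viterbi algorithm over per-token emission scores (here, the Bi-LSTM outputs) and tag-transition scores. As a minimal illustration, with made-up tag names and scores rather than the authors' trained model:

```python
def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence.

    emissions:   list of {tag: score} dicts, one per token (e.g. Bi-LSTM outputs).
    transitions: {(prev_tag, tag): score} dict (the CRF transition scores).
    """
    tags = list(emissions[0])
    # Score of the best path ending in each tag at the current position.
    best = {t: emissions[0][t] for t in tags}
    backpointers = []
    for emit in emissions[1:]:
        current, pointers = {}, {}
        for t in tags:
            # Best previous tag to transition from.
            prev = max(tags, key=lambda p: best[p] + transitions[(p, t)])
            current[t] = best[prev] + transitions[(prev, t)] + emit[t]
            pointers[t] = prev
        best, backpointers = current, backpointers + [pointers]
    # Trace the best path backwards from the best final tag.
    tag = max(best, key=best.get)
    path = [tag]
    for pointers in reversed(backpointers):
        path.append(pointers[path[-1]])
    return list(reversed(path)), best[tag]


# Toy example: two hypothetical tags over three tokens.
emissions = [{"A": 2, "B": 0}, {"A": 0, "B": 2}, {"A": 2, "B": 0}]
transitions = {("A", "A"): 0, ("A", "B"): 1, ("B", "A"): 1, ("B", "B"): 0}
path, score = viterbi_decode(emissions, transitions)
```

The transition scores are what let the CRF forbid or favor tag sequences that the token-level classifier alone cannot express.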
- Research article
- https://doi.org/10.61091/jcmcc127b-163
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2895-2909
- Published Online: 16/04/2025
Building a dual prevention mechanism is a necessary route to solving the problems of risks "not being recognized, not being considered, and not being managed well" in enterprise production safety. Drawing on the elements of the theoretical framework of the dual prevention mechanism, this paper constructs two evaluation index systems, one for safety risk classification and one for the operating effect of the mechanism, and then establishes an evaluation model based on the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation to examine how the mechanism performs in the enterprise. The evaluation shows that after the dual prevention mechanism of safety risk classification and hidden-danger investigation and management is put into operation in enterprise S, which has a relatively high safety risk level (1.50 points), the enterprise's production-safety awareness and level of intrinsic safety improve significantly, and the average evaluation score for the operating effect of the mechanism is 3.91 points, a good level. The results offer practical guidance for optimizing production-safety risk management, and can also be drawn on by enterprises of the same type, and others, in optimizing risk management and hidden-danger investigation.
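The fuzzy comprehensive evaluation step combines an index weight vector (for instance from AHP) with a membership matrix and grade values. A minimal sketch with illustrative weights and memberships, not the paper's actual data:

```python
def fuzzy_evaluate(weights, membership, grade_values):
    """Weighted fuzzy synthesis B = W . R, then a crisp score B . grades.

    weights:      weight per evaluation index, summing to 1 (e.g. from AHP).
    membership:   one row per index; degree of membership in each grade.
    grade_values: numeric value assigned to each grade.
    """
    n_grades = len(grade_values)
    # Fuzzy synthesis: B[j] = sum_i W[i] * R[i][j]
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(n_grades)]
    # Defuzzify the grade vector into a single score.
    return sum(bj * g for bj, g in zip(b, grade_values))


weights = [0.5, 0.3, 0.2]                 # hypothetical index weights
membership = [[0.6, 0.4, 0.0, 0.0],       # memberships in 4 grades per index
              [0.2, 0.5, 0.3, 0.0],
              [0.1, 0.3, 0.4, 0.2]]
score = fuzzy_evaluate(weights, membership, [5, 4, 3, 2])
```

The same two-step structure (synthesis, then defuzzification) works for any number of indexes and grades.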
- Research article
- https://doi.org/10.61091/jcmcc127b-162
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2875-2894
- Published Online: 16/04/2025
Digital auditing has become key to the transformation and upgrading of the auditing field. Detecting anomalies in financial audit data requires combining information from multiple sources, and using existing technical means to uncover financial anomalies within limited content is of great practical significance. Addressing the limitations of the weighted-KNN deep neural network algorithm, this paper proposes a multi-branch deep neural network and designs a cost-sensitive loss function. Combining qualitative and quantitative risk-assessment methods, an enterprise audit risk assessment index system is constructed, the indexes are standardized, and the assessment results are analyzed. The model's application is examined in terms of industry status and key financial performance, and strategies for responding to corporate audit risk are proposed. In the first risk assessment, 8 of the 20 enterprises are rated higher risk or above, 6 medium risk, and 6 lower risk or below. The second assessment shows reductions of varying degrees, ranging from -0.3663 to -0.0119. Since 2017 the overall net profit growth rate of the enterprises has declined year by year, especially from 2019 to 2020; the industry's net profit growth rate in 2020 is -24.87%, suggesting that the industry's future development is not optimistic.
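A cost-sensitive loss reweights errors so that misclassifying a rare but costly class (such as an anomalous transaction) is penalized more heavily. A minimal weighted cross-entropy sketch with illustrative class weights, not the paper's actual loss function:

```python
import math

def cost_sensitive_cross_entropy(probs, labels, class_weights):
    """Mean cross-entropy where each sample is scaled by its true class's cost.

    probs:         per-sample predicted probability distributions.
    labels:        true class index per sample.
    class_weights: cost weight per class (higher = costlier to misclassify).
    """
    losses = [-class_weights[y] * math.log(p[y]) for p, y in zip(probs, labels)]
    return sum(losses) / len(losses)


# Two samples; class 1 (say, "anomalous") is weighted 5x.
probs = [[0.9, 0.1], [0.2, 0.8]]
labels = [0, 1]
loss = cost_sensitive_cross_entropy(probs, labels, class_weights=[1.0, 5.0])
```

With equal weights this reduces to ordinary cross-entropy; raising a class's weight pushes the trained model to sacrifice some majority-class accuracy for fewer misses on that class.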
- Research article
- https://doi.org/10.61091/jcmcc127b-161
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2857-2873
- Published Online: 16/04/2025
With the rapid development of blockchain technology, consistency assurance for distributed databases has become a key issue. This paper studies in depth a consistency assurance mechanism for blockchain distributed databases based on the practical Byzantine fault tolerance algorithm and its improvement (RPBFT). RPBFT combines the RSA algorithm with the PBFT consensus algorithm, signing messages after encryption to increase system security. To address the shortcomings of the master-node selection mechanism in both the original algorithm and RPBFT, a selection mechanism incorporating a time factor is proposed: a recording-node role is introduced so that node waiting times can be adjusted dynamically. The algorithm also changes the conditions for view switching, reducing system overhead. Simulation experiments compare the proposed RPBFT algorithm against the OmniLedger and RapidChain schemes under identical network conditions. The proposed algorithm guarantees distributed-database consistency more effectively than the comparison schemes: when the number of shards is 20, its transaction latency is 13 s and 25 s lower than that of RapidChain and OmniLedger, respectively, providing strong support for applying blockchain technology to distributed databases.
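PBFT-family protocols such as RPBFT tolerate f Byzantine faults among n = 3f + 1 nodes, and a request is accepted only once 2f + 1 matching votes are collected. A minimal sketch of that standard quorum arithmetic (generic PBFT logic, not the paper's full protocol with its time-factor master selection):

```python
def max_faulty(n):
    """Largest number of Byzantine nodes an n-node PBFT cluster tolerates."""
    return (n - 1) // 3

def quorum_size(n):
    """Matching prepare/commit votes needed before a request is accepted."""
    return 2 * max_faulty(n) + 1

def is_committed(n, matching_votes):
    """True once enough replicas agree on the same (view, sequence, digest)."""
    return matching_votes >= quorum_size(n)
```

For example, a 4-node cluster tolerates one faulty node and needs 3 matching votes, while a 7-node cluster tolerates two and needs 5; this is why Byzantine consensus cannot be reached with fewer than 4 nodes.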
- Research article
- https://doi.org/10.61091/jcmcc127b-160
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2833-2856
- Published Online: 16/04/2025
Urban spatial structure viewed in three dimensions can express a personalized city brand image and is an important feature of city brand form. This paper applies computer graphics technology to design a 3D city modeling algorithm based on point cloud fusion, transforming city information into spatial visual symbols and thereby innovating the morphology of the city brand image. First, building on binocular stereo vision, oblique-image modeling technology is used to realize a texture-mapped dense 3D point cloud mesh. To address the limited accuracy of the sparse point cloud and the noise points and mesh holes caused by occlusion and shadows, a patch-based stereo vision PMVS algorithm is designed to densify the point cloud. Performance is tested on the dataset with standard 3D reconstruction metrics, F-score and chamfer distance (CD), together with application analyses of segmentation-and-merging efficiency for building clusters, rectangle-fitting optimization, and building-height calculation; the proposed algorithm leads the baseline model in 13 categories. When the number of regions reaches 70,000, the traditional RAG method takes 26.9 seconds while the proposed algorithm takes only 14.8 seconds, a time reduction of more than 40%. The average aesthetic-assessment score of the city brand design is 83.47 points, all 10 experts rate its spatial aesthetics above 90 points, and the design is unanimously recognized. The study is a useful exploration of city brand image innovation under the conditions of cutting-edge information technology.
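The chamfer distance (CD) used as an evaluation metric compares a reconstructed point cloud with a reference one by averaging nearest-neighbor distances in both directions. A brute-force sketch for small 2D point sets, under one common convention (real evaluations would use 3D points and a spatial index instead of an O(n²) scan):

```python
import math

def chamfer_distance(cloud_a, cloud_b):
    """Symmetric chamfer distance between two point sets.

    Sum of the mean nearest-neighbor distance from A to B and from B to A;
    0.0 means the clouds coincide.
    """
    def mean_nearest(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)

    return mean_nearest(cloud_a, cloud_b) + mean_nearest(cloud_b, cloud_a)


reconstructed = [(0.0, 0.0), (1.0, 0.0)]
reference = [(0.0, 0.0), (1.0, 0.0)]
```

Some papers use squared distances or average the two directions instead of summing them; the metric only supports comparison when the convention is fixed.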
- Research article
- https://doi.org/10.61091/jcmcc127b-159
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2819-2832
- Published Online: 16/04/2025
Studying the impact of climate change on permafrost, and its response mechanism, in the Upper Irtysh River Basin helps to comprehensively understand climate-change impacts and to develop coping strategies. Taking the one-dimensional heat conduction equation as its core, this paper proposes a model for calculating the distribution of permafrost in the upper Irtysh River Basin, together with the boundary conditions needed to solve it, and simulates the model using the general form of partial differential equations in the COMSOL Multiphysics finite element analysis software. The simulation results and regression equations are then combined to investigate how changes in meteorological data drive changes in permafrost depth distribution. The simulations show that the meteorological-factor regression model explains 30.6% of the variation in maximum permafrost depth, with mean annual relative humidity driving permafrost depth to the greatest extent (Beta = -0.251). The identified driving effect of meteorological factors on permafrost depth change provides a new perspective on the dynamical mechanism of permafrost change in the upper Irtysh River Basin, as well as a scientific basis for predicting and responding to the impact of future climate change on permafrost.
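The one-dimensional heat conduction equation u_t = α·u_xx at the model's core can be illustrated with a single explicit finite-difference step (the paper itself solves it in COMSOL; this sketch uses made-up grid values and is stable only when α·Δt/Δx² ≤ 0.5):

```python
def heat_step(u, alpha, dt, dx):
    """One explicit finite-difference step of u_t = alpha * u_xx.

    Boundary temperatures are held fixed (Dirichlet conditions), as when
    surface and deep-ground temperatures are prescribed.
    """
    r = alpha * dt / dx ** 2   # must be <= 0.5 for this scheme to be stable
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]


# A hot spike between two cold boundaries diffuses outward.
profile = heat_step([0.0, 100.0, 0.0], alpha=1.0, dt=0.25, dx=1.0)
```

Repeating the step marches the temperature profile forward in time; a finite-element solver like COMSOL does the same propagation with unstructured meshes and implicit time stepping.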
- Research article
- https://doi.org/10.61091/jcmcc127b-158
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2805-2817
- Published Online: 16/04/2025
Taking the basic structure of a fuzzy-integral-based multi-classifier fusion model as a reference, this paper constructs Choquet integral vectors, measures the similarity of English sentences, and builds a fast English-sentence retrieval algorithm based on the Choquet expectation. The algorithm threshold is determined and the running time is compared against similar retrieval algorithms; the algorithm is then deployed in an English-sentence retrieval model for dataset training and comparison experiments, the model's robustness is verified and its K value chosen, and the test set is used to compare its retrieval effectiveness against a traditional semantic retrieval model. The threshold is set to 6 to improve recall of English sentences. The algorithm's running times of 0.827 s and 1.941 s are lower than those of the other three similar retrieval algorithms. In the dataset comparison experiments, the proposed model outscores the comparison model on all 5 evaluation metrics, and it is most robust when k = 15. Its precision and recall exceed those of the semantic retrieval model LM by nearly 8 percentage points. The proposed fast retrieval algorithm based on the Choquet expectation improves sentence retrieval timeliness and accuracy while reducing retrieval energy consumption.
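The Choquet integral aggregates criterion scores with respect to a fuzzy measure, so interacting criteria (such as overlapping similarity features) are not simply weighted-summed. A standard textbook computation with an illustrative two-criterion measure, not the paper's measure:

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of `scores` w.r.t. fuzzy measure `measure`.

    scores:  {criterion: value}, values >= 0.
    measure: {frozenset of criteria: weight}, monotone,
             with measure[frozenset()] == 0.
    """
    # Sort criteria by descending score: x(1) >= x(2) >= ...
    ordered = sorted(scores, key=scores.get, reverse=True)
    values = [scores[c] for c in ordered] + [0.0]
    total, top_set = 0.0, frozenset()
    for i, c in enumerate(ordered):
        top_set = top_set | {c}
        # (x(i) - x(i+1)) * mu({criteria holding the top-i scores})
        total += (values[i] - values[i + 1]) * measure[top_set]
    return total


measure = {frozenset(): 0.0,
           frozenset({"a"}): 0.6,
           frozenset({"b"}): 0.7,
           frozenset({"a", "b"}): 1.0}
value = choquet_integral({"a": 0.8, "b": 0.5}, measure)
```

When the measure is additive (mu({a,b}) = mu({a}) + mu({b})) the Choquet integral collapses to an ordinary weighted average; a non-additive measure is what lets it model redundancy or synergy between similarity features.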
- Research article
- https://doi.org/10.61091/jcmcc127b-157
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2789-2804
- Published Online: 16/04/2025
The development of digital technology provides new possibilities for passing on China's excellent traditional handicrafts. Taking Chinese movable-type printing as its research object, this paper develops and designs a user-oriented virtual experience system that reflects the characteristics of the craft. To optimize the rendering of real-time images and video frames in the virtual scene, a deep-learning super-sampling approach is taken as the basic framework: two major neural network structures, a convolutional neural network (CNN) and a recurrent neural network (RNN), perform the rendering reconstruction, while a texture-enhancement super-sampling algorithm recovers image texture details and improves edge sharpness, together forming the DLSS model. The performance of the DLSS model and of the movable-type printing virtual experience system is tested in turn. The average difference between pre- and post-test scores for the virtual experience system is 34.46, much higher than the 20.76 achieved by traditional forms of knowledge mastery, indicating that the virtual experience system supported by the proposed algorithms can effectively carry forward traditional handicrafts.
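Learned super-sampling reconstructs a high-resolution frame from a low-resolution render; the classical baseline it improves on is plain bilinear interpolation. A pure-Python sketch of that baseline for a grayscale grid (illustrative only, not the paper's DLSS model):

```python
def bilinear_upscale(img, out_h, out_w):
    """Upscale a 2D grayscale grid with bilinear interpolation (align-corners)."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for y in range(out_h):
        # Map the output pixel back into source coordinates.
        sy = y * (in_h - 1) / (out_h - 1)
        y0, fy = int(sy), sy - int(sy)
        y1 = min(y0 + 1, in_h - 1)
        row = []
        for x in range(out_w):
            sx = x * (in_w - 1) / (out_w - 1)
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, in_w - 1)
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bottom * fy)
        out.append(row)
    return out


upscaled = bilinear_upscale([[0.0, 2.0], [4.0, 6.0]], 4, 4)
```

Interpolation like this blurs edges and textures, which is exactly the shortfall a learned reconstruction with texture enhancement aims to repair.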
- Research article
- https://doi.org/10.61091/jcmcc127b-156
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2771-2788
- Published Online: 16/04/2025
Phishing has become an increasing threat on online networks as Web, mobile-device, and social-networking technologies evolve, so effective methods and techniques for detecting and preventing phishing attacks are urgently needed. This paper proposes a phishing detection model based on a decision tree and optimal feature selection. An optimal feature-selection algorithm built on a newly defined feature evaluation metric (f_Value), a decision tree, and local search is designed to prune out negative and useless features, mitigating the overfitting problem when training neural network classifiers. The optimal set of sensitive features and the optimal structure for training the neural network classifier are constructed by parameter tuning. Experiments on a CART-based phishing detection system, and comparative experiments against different phishing detection models, are conducted. The results show that the improved decision-tree-based model achieves precision, accuracy, and recall of 92.7%, 96.5%, and 88.3% on the PhishTank dataset, and 98.3%, 99.1%, and 99.5% on the Vrbančič-small dataset, demonstrating that the proposed CART model outperforms many existing models.
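CART grows its tree by choosing, at each node, the split that minimizes weighted Gini impurity; the paper's f_Value metric then prunes unhelpful features (its exact definition is the authors' own). A sketch of the standard Gini-based choice over hypothetical boolean phishing features:

```python
def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    total = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

def best_feature(samples):
    """Pick the boolean feature whose split gives the lowest weighted Gini.

    samples: list of (feature_dict, label) pairs.
    """
    features = samples[0][0].keys()

    def split_impurity(f):
        left = [y for x, y in samples if x[f]]
        right = [y for x, y in samples if not x[f]]
        n = len(samples)
        return len(left) / n * gini(left) + len(right) / n * gini(right)

    return min(features, key=split_impurity)


# Hypothetical URL features: "has_ip" perfectly separates phishing here.
data = [({"has_ip": True,  "long_url": True},  "phish"),
        ({"has_ip": True,  "long_url": False}, "phish"),
        ({"has_ip": False, "long_url": True},  "legit"),
        ({"has_ip": False, "long_url": False}, "legit")]
chosen = best_feature(data)
```

Applying this choice recursively, with a stopping rule, yields the full CART tree; feature selection beforehand shrinks the candidate set the split search has to consider.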
- Research article
- https://doi.org/10.61091/jcmcc127b-155
- Full Text
- Journal of Combinatorial Mathematics and Combinatorial Computing
- Volume 127b
- Pages: 2757-2770
- Published Online: 16/04/2025
Image segmentation, an important direction in computer vision, is being applied in a growing range of fields, yet existing segmentation methods still need improvement in accuracy and effect. This paper adopts the variational level set method for image segmentation and describes its theoretical basis and solution method (gradient descent flow) in detail. To address the insufficiency of the gradient vector flow in the traditional parametric active contour (Snake) model, a global gradient vector flow model that can overcome noise interference is given to obtain a more accurate gradient field; combined with the variational level set method, this yields an image segmentation model based on global gradient vector flow (GGF Snake). In comparison experiments with three commonly used image segmentation algorithms, the proposed algorithm reaches a DSC value above 96.00% and takes less than 15 s, outperforming the other three algorithms and verifying its superiority.
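The DSC used to score the segmentations is the Dice similarity coefficient, which measures the overlap between a predicted mask and the ground truth as 2|A ∩ B| / (|A| + |B|). A minimal computation over flattened binary masks (illustrative data, standard formula):

```python
def dice_coefficient(pred, truth):
    """Dice similarity of two equal-length binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * intersection / size if size else 1.0


# A 1 marks a foreground pixel in the flattened mask.
predicted = [1, 1, 1, 0, 0, 0]
ground_truth = [1, 1, 0, 0, 0, 0]
dsc = dice_coefficient(predicted, ground_truth)
```

Unlike plain pixel accuracy, DSC ignores the (usually dominant) background pixels, which is why it is preferred for judging contour-based segmentations.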




