
Table of Contents

    20 June 2024 Volume 51 Issue 3
      
    Information and Communications Engineering
    Superimposed pilots transmission for unsourced random access
    HAO Mengnan, LI Ying, SONG Guanghui
    Journal of Xidian University. 2024, 51(3):  1-8.  doi:10.19665/j.issn1001-2400.20230907

    In unsourced random access, the base station (BS) only needs to recover the messages sent by the active devices without identifying them, which allows a large number of active devices to access the BS at any time without requesting resources in advance, thereby greatly reducing the signaling overhead and transmission delay; this has attracted the attention of many researchers. Many current works are devoted to designing random access schemes based on preamble sequences. However, these schemes are not robust when the number of active devices changes, and they cannot make full use of the channel bandwidth, resulting in poor performance when the number of active devices is large. To address this problem, a superimposed pilots transmission scheme is proposed to improve the channel utilization ratio, and the performance for different numbers of active devices is further improved by optimal power allocation, making the system robust to changes in the number of active devices. In this scheme, the first Bp bits of the message sequence are used as an index to select a pair consisting of a pilot sequence and an interleaver. The message sequence is then encoded, modulated and interleaved with the selected interleaver, and the selected pilot sequence is superimposed on the interleaved modulated sequence to obtain the transmitted signal. For this transmission scheme, a power optimization method based on the minimum probability of error is proposed to obtain the optimal power allocation ratio for different numbers of active devices, and a two-stage detection scheme of superimposed pilot detection and cancellation followed by multi-user detection and decoding is designed. Simulation results show that the superimposed pilots transmission scheme improves the performance of preamble-based unsourced random access schemes by about 1.6~2.0 dB and 0.2~0.5 dB, respectively, that it can flexibly change the number of active devices that the system carries, and that it has a lower decoding complexity.
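    The superposition step described above can be sketched numerically. This is an illustrative outline only, not the paper's exact scheme: the power ratio, sequence length and BPSK alphabet are assumptions, and the coding/interleaving stages are omitted.

```python
import numpy as np

def superimpose_pilot(data_symbols, pilot, rho, total_power=1.0):
    """Superimpose a pilot on modulated data symbols.

    rho is the fraction of the total power budget allocated to the
    pilot; the remainder goes to the (interleaved, modulated) data.
    """
    return (np.sqrt(rho * total_power) * pilot
            + np.sqrt((1.0 - rho) * total_power) * data_symbols)

rng = np.random.default_rng(0)
data = rng.choice([-1.0, 1.0], size=128)    # BPSK data symbols
pilot = rng.choice([-1.0, 1.0], size=128)   # index-selected pilot sequence
x = superimpose_pilot(data, pilot, rho=0.3)
```

    The fraction rho plays the role of the power allocation ratio that the paper optimizes for different numbers of active devices.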

    Efficient semantic communication method for bandwidth constrained scenarios
    LIU Wei, WANG Mengyang, BAI Baoming
    Journal of Xidian University. 2024, 51(3):  9-18.  doi:10.19665/j.issn1001-2400.20240203

    Semantic communication provides a new research perspective for communication system optimization and performance improvement. However, current research on semantic communication ignores the impact of communication overhead and does not consider the relationship between semantic communication performance and communication overhead, making it difficult to improve semantic communication performance when the bandwidth resource is limited. Therefore, an information bottleneck based semantic communication method for text sources is proposed. First, the Transformer model is used for joint semantic and channel encoding and decoding, a feature selection module is designed to identify and delete redundant information, and an end-to-end semantic communication model is constructed. Second, considering the tradeoff between semantic communication performance and communication cost, a loss function is designed based on the information bottleneck theory to ensure the semantic communication performance while reducing the communication cost, and is used to train and optimize the semantic communication model. Experimental results show that, on the proceedings of the European Parliament, the proposed method can reduce the communication overhead by 20%~30% compared with the baseline model while maintaining communication performance, and that under the same bandwidth conditions the BLEU score of this method can be increased by 5%. These results prove that the proposed method can effectively reduce the semantic communication overhead, thereby improving semantic communication performance when the bandwidth resource is limited.
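    The trade-off that such a loss function encodes can be illustrated with a toy information-bottleneck style objective: task distortion plus a rate penalty. The operating points and beta values below are made up for illustration and are not from the paper.

```python
def ib_loss(distortion, rate, beta):
    # Information-bottleneck style objective: semantic distortion plus
    # beta times the communication cost (bits transmitted).
    return distortion + beta * rate

# Three hypothetical operating points: (semantic distortion, bits sent).
candidates = [(0.02, 480), (0.05, 360), (0.12, 240)]

# A small beta favours fidelity; a large beta favours saving bandwidth.
best_low_beta = min(candidates, key=lambda p: ib_loss(*p, beta=1e-4))
best_high_beta = min(candidates, key=lambda p: ib_loss(*p, beta=1e-3))
```

    Sweeping beta traces out exactly the performance-versus-overhead curve that the paper tunes for bandwidth constrained scenarios.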

    High precision time synchronization between nodes under motion scenario of UAV platforms
    CHEN Cong, DUAN Baiyu, XU Qiang, PAN Wensheng, MA Wanzhi, SHAO Shihai
    Journal of Xidian University. 2024, 51(3):  19-29.  doi:10.19665/j.issn1001-2400.20231207

    Time synchronization is the foundation for transmission resource scheduling, cooperative localization and data fusion in UAV clusters. Two-way time synchronization is commonly used to synchronize clocks between nodes in scenarios with high accuracy requirements. However, the relative motion of the UAVs makes the propagation delays of the two synchronization messages unequal, thereby causing time synchronization errors. To solve this problem, the causes of the synchronization deviation are analyzed from the perspective of solving linear equations. A method is proposed that increases the number of equations by conducting two-way time synchronization twice, while the number of unknowns is reduced under the premise of uniform motion of the nodes. The solution formula for the clock deviation under uniform motion is derived, and the derivation shows that the clock deviation solution is independent of the speed of the nodes. The synchronization performance is compared with that of existing compensation methods under the additive white Gaussian noise channel, and the effect of timestamp deviation and speed changes on the accuracy of the clock deviation solution is analyzed. Finally, the effectiveness of the dual-trigger two-way time synchronization is verified through field experiments. Simulation and experimental results show that, compared with conventional two-way time synchronization, the dual-trigger two-way time synchronization does not suffer from systematic deviations caused by the uniform motion of the nodes.
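    As a hedged illustration of the linear-equation view, the toy simulation below performs two two-way exchanges with a propagation delay that grows linearly in time (uniform relative motion) and recovers the clock offset by solving the resulting overdetermined system with least squares. The timestamp model, delay parameters and solver are assumptions for illustration, not the paper's exact derivation.

```python
import numpy as np

# Ground-truth clock offset of node B relative to node A, and a
# propagation delay that grows linearly in time (uniform relative
# motion): d(t) = d0 + k*t.  All values are illustrative.
theta = 5e-6          # B's clock runs 5 microseconds ahead of A
d0, k = 3.3e-6, 1e-7  # initial delay and delay growth rate

def exchange(t1, t3):
    """One two-way exchange; returns the four timestamps."""
    t2 = t1 + theta + d0 + k * t1            # A->B message, stamped by B
    t4 = t3 - theta + d0 + k * (t3 - theta)  # B->A message, stamped by A
    return t1, t2, t3, t4

# Dual-trigger: two exchanges give four equations for the three
# unknowns (theta, d0, k), solved as an overdetermined linear system.
rows, rhs = [], []
for s1, s3 in [(0.0, 0.1), (0.5, 0.6)]:
    t1, t2, t3, t4 = exchange(s1, s3)
    rows += [[1.0, 1.0, t1], [-1.0, 1.0, t3]]
    rhs += [t2 - t1, t4 - t3]

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
theta_hat = sol[0]    # recovered clock offset
```

    A single exchange would only give theta plus half the delay asymmetry; the second exchange is what makes the motion-induced bias separable.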

    Doppler frequency shift estimation and tracking algorithm for air-to-air high-speed mobile communications
    ZHANG Xin, LI Jiandong
    Journal of Xidian University. 2024, 51(3):  30-37.  doi:10.19665/j.issn1001-2400.20240304

    In air-to-air high-speed mobile communications, the Doppler frequency shift of an aerial platform has a large range and changes rapidly, and it is difficult for existing frequency estimation algorithms to achieve both high estimation accuracy and feasible engineering realization. In this paper, a time-varying Doppler frequency shift model is first constructed from the traditional frequency offset model and the temporal correlation of the Doppler frequency shift. Based on this model, the coarse frequency offset estimates of adjacent short preambles are associated, and the frequency offset estimation problem is transformed into a classic optimization problem of overdetermined linear equations, which reduces the estimation variance to the greatest extent and improves the estimation accuracy. Simulation results show that the residual frequency offset of the proposed algorithm is significantly smaller than that of the traditional algorithm, and that the root mean square error (RMSE) of the proposed algorithm is less than 100 Hz when the SNR is greater than 5 dB. Aiming at the numerical stability problem of the proposed algorithm, a corresponding engineering-realizable method is given. Unlike the traditional phase-locked loop feedback tracking scheme, the proposed algorithm adopts a feedforward compensation scheme, thereby improving the system stability and timeliness.
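    The reduction to overdetermined linear equations can be sketched as a least-squares fit of a linearly drifting frequency offset to the coarse per-preamble estimates. All values (preamble spacing, offset, drift, noise level) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Coarse frequency-offset estimates from adjacent short preambles are
# modelled as f(t) = f0 + a*t (slowly varying Doppler) plus estimation
# noise; stacking them gives an overdetermined linear system whose
# least-squares solution averages the noise down.
t = np.arange(8) * 1e-3                    # preamble instants, 1 ms apart
f_true = 20_000.0 + 5e5 * t                # 20 kHz offset, 500 Hz/ms drift
rng = np.random.default_rng(1)
f_coarse = f_true + rng.normal(0.0, 200.0, t.size)  # noisy coarse estimates

A = np.column_stack([np.ones_like(t), t])  # unknowns: [f0, drift a]
(f0_hat, a_hat), *_ = np.linalg.lstsq(A, f_coarse, rcond=None)
```

    The fitted pair (f0_hat, a_hat) can then drive a feedforward compensator rather than a phase-locked loop, matching the scheme's tracking philosophy.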

    Electromagnetic modeling of general waveports with the method of moments
    DING Ning, HOU Peng, ZHAO Xunwang, LIN Zhongchao, ZHANG Yu
    Journal of Xidian University. 2024, 51(3):  38-45.  doi:10.19665/j.issn1001-2400.20230908

    For the problem of electromagnetic modeling of waveports with irregular cross-sections by the integral equation method, a general waveport modeling method based on the higher-order method of moments (MoM) is proposed. We establish the waveport surface integral equations based on the equivalence principle and the mode matching (MM) method. Additionally, we utilize the two-dimensional finite element method (2-D FEM) to accurately analyze the modes of irregular waveports, thereby extending the modeling capability of the MoM from regular waveports to a general waveport model suitable for both regular and irregular waveports. On this basis, the adoption of higher-order basis functions defined on quadrilateral elements instead of lower-order basis functions reduces the number of unknowns in the MoM, thus significantly reducing the memory requirements and computation time. The proposed method is tested through numerical examples; comparison of the results with numerical results of the FEM verifies its correctness, and comparison with the RWG-MoM verifies its efficiency. Numerical results show that the proposed method offers high efficiency and high numerical accuracy for general waveport modeling.

    Siamese network tracking using template updating and trajectory prediction
    HE Wangpeng, HU Deshun, LI Cheng, ZHOU Yue, GUO Baolong
    Journal of Xidian University. 2024, 51(3):  46-54.  doi:10.19665/j.issn1001-2400.20231002

    Object tracking is an active and challenging issue in the field of computer vision. To tackle the problem that a target may suffer from deformation, occlusion and fast motion during tracking, a novel Siamese network tracking algorithm is proposed, with emphasis on template updating and trajectory prediction. First, an effective template updating mechanism is introduced into the Siamese network tracking model to adaptively represent variations in target appearance, which further improves the tracking performance when the target undergoes shape or color deformation. Specifically, by analyzing the tracking result of each frame to determine whether the update conditions are met, an adaptive template update strategy is designed that effectively reduces the possibility of template contamination. Second, a Kalman filter is utilized to collect the target position information and predict the motion trajectory. By fusing the object position predicted by the tracking algorithm in the previous frame with the position predicted from the trajectory, the cropping position of the search area in the current frame is obtained, which further alleviates occlusion and fast motion by combining offline tracking and online learning. Extensive experiments on the VOT2018 and LaSOT datasets verify that the tracking performance of the proposed approach exceeds that of other state-of-the-art algorithms under various complex scenarios.
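    The trajectory prediction component can be illustrated with a minimal constant-velocity Kalman filter for one coordinate of the target center; the matrices and measurements below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one coordinate of the
# target centre; the predicted state proposes the crop position for the
# next frame's search window.  All matrices and values are illustrative.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
H = np.array([[1.0, 0.0]])             # only the position is observed
Q = 1e-2 * np.eye(2)                   # process noise covariance
R = np.array([[1.0]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])           # initial position / velocity
P = np.eye(2)

for z in [1.0, 2.1, 2.9, 4.2]:         # tracker-reported positions per frame
    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step with the tracker's measured position.
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

predicted_next = (F @ x)[0, 0]         # crop centre proposed for the next frame
```

    Fusing this prediction with the tracker's own response is what keeps the search window near the target through occlusion or fast motion.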

    Robust two-dimensional super-resolution angle estimation algorithm under amplitude and phase uncertainty
    LIU Minti, ZENG Cao, HU Shulin, CHENG Jianzhong, LI Jun, LI Shidong, LIAO Guisheng
    Journal of Xidian University. 2024, 51(3):  55-62.  doi:10.19665/j.issn1001-2400.20231201

    To address the low angular resolution of 4D vehicle-mounted millimeter wave radar in the elevation and azimuth dimensions, as well as the biased angle measurements that occur when the array has amplitude and phase defects, a robust two-dimensional super-resolution angle estimation method based on fast sparse Bayesian learning (FSBL) is proposed. First, a two-dimensional super-resolution angle signal model with amplitude and phase errors is built by using grids to divide the angle domain, exploiting spatial sparsity. Then, the two-dimensional angle estimates for spatially close targets are obtained using a fixed-point-updated MacKay SBL reconstruction algorithm, with the phase error and angle bias compensated by a self-calibration algorithm based on the vector dot product. Finally, the computational complexity of the proposed algorithm is analyzed, and the Cramer-Rao lower bound (CRB) for two-dimensional angle estimation under MIMO non-uniform sparse arrays is provided. Comparisons with six categories of super-resolution algorithms in simulation demonstrate that the proposed method achieves high angular resolution and a low root mean square error (RMSE) at low SNR and with few snapshots, under the actual layout of 12 transmitting and 16 receiving antennas of the Continental ARS548 radar.

    Multi-objective optimization offloading decision with cloud-side-end collaboration in smart transportation scenarios
    ZHU Sifeng, SONG Zhaowei, CHEN Hao, ZHU Hai, QIAO Rui
    Journal of Xidian University. 2024, 51(3):  63-75.  doi:10.19665/j.issn1001-2400.20230802

    With the rapid development of intelligent transportation, cloud computing networks and edge computing networks, information interaction among vehicle terminals, roadside units and central cloud servers is becoming more and more frequent. To efficiently realize vehicle-road-cloud integrated sensing, group decision making and reasonable resource allocation among servers in the cloud-edge-end collaborative computing scenario of intelligent transportation, a network architecture based on the comprehensive integration of the cloud-edge-end and intelligent transportation is designed. Under this architecture, task types are reasonably divided and each server selectively caches and offloads them. For the cloud-edge-end collaborative computing scenario, an adaptive task caching model, a task offloading delay model, a system energy loss model, a model for evaluating the dissatisfaction of in-vehicle users with the quality of service, and a multi-objective optimization problem model are designed in turn, and a multi-objective optimization task offloading decision scheme is given based on an improved non-dominated sorting genetic algorithm. Experimental results show that the proposed scheme can effectively reduce the delay and energy consumption of the task offloading process, improve the utilization of system resources, and bring a better service experience to vehicle users.
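    The non-dominated sorting at the heart of such genetic algorithms rests on Pareto dominance, which can be sketched as follows; the objective triples (delay, energy, dissatisfaction) are made-up values for illustration.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b, with every
    objective (delay, energy, user dissatisfaction) to be minimised."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Three candidate offloading decisions scored on the three objectives.
solutions = [(1.0, 2.0, 0.3), (1.5, 2.5, 0.4), (0.8, 3.0, 0.2)]

# The first non-dominated front: solutions no other solution dominates.
front = [s for s in solutions
         if not any(dominates(o, s) for o in solutions if o != s)]
```

    Ranking the population by such fronts, then spreading solutions along each front, is the core loop of NSGA-style algorithms.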

    Incomplete multi-view clustering analysis of 6G business scenarios
    ZHANG Ruqian, CHENG Nan, CHEN Wen, LI Changle
    Journal of Xidian University. 2024, 51(3):  76-87.  doi:10.19665/j.issn1001-2400.20230703

    In the 6G network, because of the variety of business types and their different requirements, the three major business scenarios defined for the 5G network can no longer meet the granularity requirements, which poses great challenges to achieving the goal of 6G on-demand services. Aiming at the massive and heterogeneous 6G scenarios, the huge amount of business data, and the missing data in 6G scenario classification, this paper proposes a set of multi-dimensional scenario clustering analysis schemes based on key performance indicators of the business. The scheme is built on incomplete multi-view clustering technology, and uses the elbow method and the silhouette coefficient method to perform parameter-tuned clustering over thousands of parameter combinations. Clustering results show that the proposed scheme guarantees convergence on incomplete scenario datasets and achieves high silhouette coefficient values. In addition, experiments with different proportions of missing data show that the proposed 6G scenario clustering scheme can effectively complete multi-dimensional clustering for different degrees of missing data. Finally, this paper combines the original data and the clustering labels to analyze and refine the clusters, obtaining scenario knowledge of 11 types of scenarios and the key performance indicator characteristics of each scenario, so as to provide a methodological basis and theoretical reference for emerging scenarios and services in the future 6G network.
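    The silhouette coefficient used for parameter tuning can be computed from scratch as below; the toy points and labelings are illustrative, not 6G business data.

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient: for each sample, (b - a) / max(a, b),
    where a is its mean intra-cluster distance and b its mean distance
    to the nearest other cluster.  Values near 1 mean tight, well
    separated clusters."""
    n = len(X)
    scores = []
    for i in range(n):
        same = [j for j in range(n) if labels[j] == labels[i] and j != i]
        a = np.mean([np.linalg.norm(X[i] - X[j]) for j in same])
        b = min(
            np.mean([np.linalg.norm(X[i] - X[j])
                     for j in range(n) if labels[j] == c])
            for c in set(labels) if c != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two clearly separated clusters versus a deliberately wrong assignment.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
good = silhouette(X, [0, 0, 0, 1, 1, 1])
bad = silhouette(X, [0, 1, 0, 1, 0, 1])
```

    Sweeping the number of clusters and keeping the assignment with the highest silhouette score is exactly the kind of tuning the scheme automates across parameter combinations.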

    A self-attention sequential model for long-term prediction of video streams
    GE Yunfeng, LI Hongyan, SHI Keyi
    Journal of Xidian University. 2024, 51(3):  88-102.  doi:10.19665/j.issn1001-2400.20240202

    Video traffic prediction is a key technology for accurate transmission bandwidth allocation and for improving the quality of Internet services. However, the inherent high rate variability, long-term dependence and short-term dependence of video traffic make fast, accurate and long-term prediction difficult: existing models for predicting sequence dependencies have a high complexity, and prediction models fail quickly. Aiming at the problem of long-term prediction of video streams, a sequential self-attention model with frame structure feature embedding is proposed. The sequential self-attention model has a strong ability to model nonlinear relationships in discrete data, and based on the differences in correlation between video frames, this paper applies the time series self-attention model to the long-term prediction of video traffic for the first time. Since the existing time series self-attention model cannot effectively represent the category features of video frames, an embedding layer based on the frame structure is introduced to embed the frame structure information into the time series and improve the accuracy of the model. The results show that, compared with the existing long short-term memory network model and convolutional neural network model, the proposed sequential self-attention model with frame structure feature embedding has a fast inference speed, and that the prediction error is reduced by at least 32% in terms of the mean absolute error.
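    The frame-structure embedding idea can be sketched as adding a per-frame-type vector to each timestep's features before self-attention. The random embedding table, the feature dimension and the I/P/B type coding below are all assumptions for illustration; in the model the table would be learned.

```python
import numpy as np

# Video frames of different types (e.g. I/P/B in a group of pictures)
# have very different sizes, so a per-type embedding vector is added to
# each timestep's feature vector before it enters self-attention.
rng = np.random.default_rng(4)
d = 8                                     # feature dimension (illustrative)
frame_types = [0, 2, 2, 1, 2, 2]          # 0=I, 1=P, 2=B for one toy GoP
embed_table = rng.normal(size=(3, d))     # one (learned) vector per type
features = rng.normal(size=(len(frame_types), d))

# Fancy indexing broadcasts the right type vector onto each timestep.
embedded = features + embed_table[frame_types]
```

    The self-attention layers then see both the traffic values and which structural role each frame plays in the sequence.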

    Time series prediction method based on the bidirectional long short-term memory network
    GUAN Yepeng, SU Guangyao, SHENG Yi
    Journal of Xidian University. 2024, 51(3):  103-112.  doi:10.19665/j.issn1001-2400.20231205

    Time series prediction uses historical time series to predict a period of time in the future, so that corresponding strategies can be formulated in advance. At present, the categories of time series are complex and diverse, and existing time series prediction models cannot achieve stable prediction results when faced with multiple types of time series data, so it is difficult to simultaneously meet the application requirements of complex time series prediction in practice. To address this problem, a time series prediction method is proposed based on a Bidirectional Long Short-Term Memory (BLSTM) network with an attention mechanism. Improved forward and backward propagation mechanisms are used to extract temporal information, and future temporal information is inferred through an adaptive weight allocation strategy. Specifically, an improved BLSTM is proposed to extract deep time series features and to explore the temporal dependencies of the context by combining BLSTM and Long Short-Term Memory (LSTM) networks. On this basis, the proposed temporal attention mechanism is fused to achieve adaptive weighting of the deep time series features, which improves their saliency expression ability. Experimental results demonstrate that the proposed method has a superior prediction performance in comparison with representative methods on multiple time series datasets of different categories.
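    The adaptive weighting of deep temporal features can be sketched as softmax attention over timesteps. The mean-vector query below stands in for whatever learned scorer the model actually uses and is purely an assumption for illustration.

```python
import numpy as np

def temporal_attention(H):
    """Adaptively weight per-timestep features H (T x d): a score per
    timestep is softmax-normalised and used to average the sequence
    into a single context vector.  Here the query is the feature mean,
    a stand-in for a learned scorer."""
    q = H.mean(axis=0)
    scores = H @ q
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w /= w.sum()
    return w, w @ H

rng = np.random.default_rng(2)
H = rng.normal(size=(6, 4))               # 6 timesteps, 4 features each
w, context = temporal_attention(H)
```

    The weights w make salient timesteps dominate the context vector that feeds the final prediction head.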

    Computer Science and Technology & Artificial Intelligence
    Complex text region detection based on polygon feature pooling and the transformer
    ZHANG Xiangnan, GAO Xinbo, TIAN Chunna
    Journal of Xidian University. 2024, 51(3):  113-123.  doi:10.19665/j.issn1001-2400.20230801

    Text detection plays an important role in image understanding, and deep-learning-based algorithms, including single-stage and two-stage methods, are popular. Usually, two-stage text detection methods achieve a higher accuracy than single-stage methods. A two-stage text detection method usually contains a feature pooling operation on the region of interest (RoI), which provides local region features with fixed dimensions for further detection and recognition tasks. However, for complex text areas such as curved text, the existing pooling methods based on a rectangular RoI are no longer applicable, and using point features instead of area features loses spatial information. To address this issue, we propose a complex text region detection method based on polygon feature pooling and the Transformer. First, we extend the feature pooling shape of the RoI from a rectangle to a polygon, which does not need any shape fitting; the features of the polygon RoI are pooled with fixed dimensions, avoiding fitting errors. Furthermore, the pooled polygon region features are regarded as context-sensitive sequences and input to the Transformer, which fuses the context of the visual features to reduce the training difficulty and improve the detection accuracy. Experiments on complex text region datasets such as ICDAR2015, MLT, Total-Text and CTW1500 show that the proposed two-stage detection algorithm extracts RoI features well and achieves better detection results than the state-of-the-art methods.

    New prediction strategy based evolutionary algorithm for dynamic multi-objective optimization
    WAN Mengyi, WU Yan
    Journal of Xidian University. 2024, 51(3):  124-135.  doi:10.19665/j.issn1001-2400.20230902

    Dynamic multi-objective optimization problems (DMOPs), in which the environment changes over time, require an evolutionary algorithm to continuously track the moving Pareto set (PS) or Pareto front. Prediction-based response strategies have received much attention. However, these strategies mostly use only historical environmental information for prediction, which makes the predicted results inaccurate. In this paper, we strengthen the mining and utilization of information from the new environment and propose a new prediction strategy based evolutionary algorithm for dynamic multi-objective optimization (RAM), which consists of two core parts, namely a response mechanism and an acceleration mechanism. The response mechanism reinitializes the population after an environmental change: some individuals are generated by the prediction strategy so as to lie close to the PS of the new environment, improving the optimization ability of the algorithm, and the remaining individuals are generated by a local search strategy to increase population diversity. The acceleration mechanism is used in the static optimization process to speed up the convergence of the RAM. Finally, the RAM is compared with three other advanced dynamic multi-objective optimization algorithms on a series of test functions with different dynamic characteristics. The results show that the RAM is more effective than the three other algorithms in solving dynamic multi-objective optimization problems.
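    A common form of such a prediction strategy shifts the population along the movement of the population centroid between the two most recent environments; the sketch below shows only this centre-point idea, not the RAM's full response mechanism, and the populations are toy values.

```python
import numpy as np

def predict_population(pop_prev, pop_curr):
    """Centre-point prediction: estimate the PS movement as the shift of
    the population centroid between two consecutive environments, and
    translate the current population by that shift to seed the search
    in the new environment."""
    delta = pop_curr.mean(axis=0) - pop_prev.mean(axis=0)
    return pop_curr + delta

pop_t0 = np.array([[0.0, 0.0], [1.0, 1.0]])   # population in environment t-1
pop_t1 = pop_t0 + 0.5                         # PS moved by +0.5 per variable
seeded = predict_population(pop_t0, pop_t1)   # guess for environment t+1
```

    Pure extrapolation like this is exactly what becomes inaccurate when only historical information is used, which is the gap the RAM's use of new-environment information targets.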

    Texture-aware video inpainting algorithm based on the multi-attention mechanism
    XIA Yilan, WANG Xiumei, CHENG Peitao
    Journal of Xidian University. 2024, 51(3):  136-146.  doi:10.19665/j.issn1001-2400.20231004

    Existing video inpainting methods cannot effectively utilize distant spatial contents, which results in unreasonable structures and textures. To solve this problem, a texture-aware video inpainting algorithm based on the multi-attention mechanism is proposed in this paper. The algorithm designs a multi-attention mechanism composed of multi-head spatiotemporal attention and single-image local attention, guaranteeing global structures while enriching local textures. The multi-head spatiotemporal attention focuses on the overall spatiotemporal information, and the single-image local attention distills local information through local windows of the self-attention mechanism. A plug-and-play fast Fourier convolution residual block replaces vanilla convolution in the feedforward network, expanding the receptive field to the entire image so that the global structure and texture of a single frame can be enriched. The fast Fourier convolution residual block and the single-image local attention complement each other and jointly promote the quality of local textures. Experimental results on the YouTube-VOS and DAVIS datasets show that although the proposed method ranks second only to the best method, Fuseformer, on objective metrics, its number of parameters and running time are reduced by 54.8% and 21.5%, respectively, and it can generate more visually realistic and semantically reasonable contents.
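    The image-wide receptive field of a fast Fourier convolution layer comes from its spectral branch: transform, per-frequency scaling, inverse transform. The sketch below uses an identity scaling where the real layer would apply a learned spectral convolution; it only demonstrates the global-mixing mechanism.

```python
import numpy as np

def spectral_transform(x, w):
    """Global branch of a fast-Fourier-convolution layer (sketch):
    real 2-D FFT, a per-frequency complex scaling (standing in for the
    learned 1x1 convolution in the spectral domain), and the inverse
    FFT.  Each output pixel now depends on every input pixel."""
    X = np.fft.rfft2(x)
    return np.fft.irfft2(X * w, s=x.shape)

x = np.zeros((8, 8))
x[2, 3] = 1.0                            # a single bright pixel
w = np.ones_like(np.fft.rfft2(x))        # identity scaling for the demo
y = spectral_transform(x, w)
```

    With identity scaling the round trip reproduces the input exactly; a learned w mixes information across the whole frame in one layer, which is why one such block can repair global structure.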

    Relatively accelerated stochastic gradient algorithm for a class of non-smooth convex optimization problem
    ZHANG Wenjuan, FENG Xiangchu, XIAO Feng, HUANG Shujuan, LI Huan
    Journal of Xidian University. 2024, 51(3):  147-157.  doi:10.19665/j.issn1001-2400.20240301

    First order methods are widely used in fields such as machine learning, big data science and computer vision. A crucial and standard assumption for almost all first order methods is that the gradient of the objective function is globally Lipschitz continuous, which, however, is not satisfied by many practical problems. By introducing stochasticity and acceleration into the vanilla GD (Gradient Descent) algorithm, an RASGD (Relatively Accelerated Stochastic Gradient Descent) algorithm is developed, in which the objective function only needs to satisfy a mild relatively smooth condition rather than gradient Lipschitz continuity. The convergence of the RASGD is related to the UTSE (Uniformly Triangle Scaling Exponent). To avoid the cost of tuning this parameter, an ARASGD (Adaptively Relatively Accelerated Stochastic Gradient Descent) algorithm is further proposed. The theoretical convergence analysis shows that the objective function values of the iterates converge to the optimal value. Numerical experiments are conducted on the Poisson inverse problem and on a minimization problem in which the operator norm of the Hessian of the objective function grows as a polynomial of the variable norm, and the results show that the convergence performance of the ARASGD and RASGD methods is better than that of the RSGD method.
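    The relatively smooth setting can be made concrete with a one-dimensional Poisson-type objective. The plain (non-stochastic, non-accelerated) Bregman gradient step below is a baseline illustration of relative smoothness, not the paper's RASGD; the data value and step size are assumptions.

```python
# Relative (Bregman) gradient descent sketch for the 1-D Poisson-type
# objective f(x) = x - b*log(x), x > 0.  Its gradient 1 - b/x is not
# globally Lipschitz, but f is smooth *relative to* the Burg entropy
# h(x) = -log(x).  The mirror update grad h(x+) = grad h(x) - lam*grad f(x)
# simplifies to the closed form x+ = x / (1 + lam*(x - b)).
b, lam = 2.0, 0.3      # data term and relative step size (illustrative)
x = 10.0               # positive starting point
for _ in range(30):
    x = x / (1.0 + lam * (x - b))
# The minimiser of f on x > 0 is x* = b.
```

    The same update written with a Euclidean step would blow up for small x; replacing the squared-distance geometry by the Burg-entropy Bregman divergence is what makes the step size admissible without a global Lipschitz gradient.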

    Cyberspace Security
    Bidirectional adaptive differential privacy federated learning scheme
    LI Yang, XU Jin, ZHU Jianming, WANG Youwei
    Journal of Xidian University. 2024, 51(3):  158-169.  doi:10.19665/j.issn1001-2400.20230706

    With the explosive growth of personal data, federated learning based on differential privacy can be used to solve the problem of data islands and preserve user data privacy. Participants train on local data and share noised parameters with the central server for aggregation, realizing distributed machine learning training. However, this model has two defects: on the one hand, data information can still be compromised when the central server broadcasts the parameters, creating a risk of user privacy leakage; on the other hand, adding too much noise to the parameters reduces the quality of parameter aggregation and affects the model accuracy of federated learning. In order to solve the above problems, a bidirectional adaptive differential privacy federated learning scheme (Federated Learning Approach with Bidirectional Adaptive Differential Privacy, FedBADP) is proposed, which adaptively adds noise to the gradients transmitted by both the participants and the central server, keeping the data secure without affecting the model accuracy. Meanwhile, considering the performance limitations of the participants' hardware devices, the model samples their gradients to reduce the communication overhead, and uses RMSprop to accelerate the convergence of the model on the participants and the central server to improve its accuracy. Experiments show that the proposed model can enhance user privacy preservation while maintaining a good accuracy.
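    The basic mechanism behind differentially private gradient sharing is L2 clipping plus calibrated Gaussian noise; an adaptive scheme such as the one described would vary these knobs per round and apply them in both directions. The sketch below shows only the static building block, with illustrative parameter values.

```python
import numpy as np

def privatize(grad, clip_norm, noise_multiplier, rng):
    """Clip a gradient to an L2 bound and add Gaussian noise scaled to
    that bound: the standard Gaussian-mechanism step used before a
    gradient leaves a participant (or the server, for bidirectional
    schemes).  An adaptive variant would tune clip_norm and
    noise_multiplier per round."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(3)
g = np.array([3.0, 4.0])                              # gradient of norm 5
noisy = privatize(g, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
clipped_only = privatize(g, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```

    Clipping bounds each participant's influence, and the noise scale relative to that bound is what determines the privacy budget spent per round.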

    Spatial-temporal graph convolutional networks for anomaly detection in multivariate time series
    WANG Jing, HE Miaomiao, DING Jianli, LI Yonghua
    Journal of Xidian University. 2024, 51(3):  170-181.  doi:10.19665/j.issn1001-2400.20230804

    To address the insufficient ability of existing multivariate time series anomaly detection models to capture local and global spatial-temporal dependencies, a multivariate time series anomaly detection model based on spatial-temporal graph convolutional networks is proposed. First, in the temporal dimension, the short-term and long-term temporal dependencies in time series data are captured by dilated causal convolution and multi-head self-attention mechanisms, respectively, and channel attention is introduced to learn the importance weights of different channels. Second, in the spatial dimension, a graph adjacency matrix is constructed by a static graph learning layer from the node embeddings and is used to model global spatial dependencies, while a series of evolving graph adjacency matrices is constructed by a dynamic graph learning layer to capture local dynamic spatial dependencies. Finally, the reconstruction model and the prediction model are jointly optimized, the anomaly score is calculated from the reconstruction error and the prediction error, and anomalies are detected by comparing the anomaly score with a threshold. Experimental results on three public datasets, MSL, SMAP and SWaT, show that the model outperforms relevant baselines such as OmniAnomaly, MTAD-GAT and GDN in terms of the F1 score for anomaly detection.
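    Combining the two error sources into one anomaly score and thresholding it can be sketched as follows; the weighting gamma, the threshold and the toy series are illustrative assumptions, not the model's learned outputs.

```python
import numpy as np

def anomaly_scores(x, x_recon, x_pred, gamma=0.5):
    """Combine per-timestep reconstruction and prediction errors into a
    single anomaly score; gamma weights the two error sources
    (an illustrative choice).  Points whose score exceeds a threshold
    are flagged as anomalies."""
    rec_err = np.abs(x - x_recon)
    pred_err = np.abs(x - x_pred)
    return gamma * rec_err + (1.0 - gamma) * pred_err

x = np.array([1.0, 1.1, 5.0, 0.9])       # the third point is anomalous
scores = anomaly_scores(x, np.full(4, 1.0), np.full(4, 1.05))
flags = scores > 1.0                     # threshold chosen for the demo
```

    Joint training keeps both error terms informative: the reconstruction branch models what normal data looks like, while the prediction branch models how it evolves.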

    Homomorphic noise evaluation of LowMC in BGV environment
    LI Xuelian, CHEN Zhuohao
    Journal of Xidian University. 2024, 51(3):  182-193.  doi:10.19665/j.issn1001-2400.20230905

    The ciphertext computing capability of fully homomorphic encryption can effectively protect users' sensitive data on the Internet, but the ciphertext expansion of this technology restricts its practical application in fields such as cloud computing and privacy protection. In response to this issue, this article proposes a hybrid homomorphic encryption scheme, FHE-LowMC, which combines the LowMC symmetric encryption algorithm with the BGV homomorphic encryption algorithm, and analyzes the homomorphic noise of LowMC in the BGV homomorphic encryption environment. First, a method for encoding the LowMC plaintext into integer-coefficient polynomials is proposed, which uses encoding and decoding to convert plaintext messages between different spaces. Then, the selection rules for the cyclotomic polynomial f(X) are described, and the conditions under which f(X) is suitable for the LowMC encryption algorithm are given. Afterwards, the homomorphic noise of the simplified LowMC is analyzed. Finally, homomorphic noise evaluation is performed on LowMC under general conditions. The results show that the LowMC round function consumes about two circuit levels. Compared with the currently common combination of AES and BGV, the scheme combining LowMC and BGV has lower noise, which means that it consumes fewer circuit levels and has a lower cost, making it more suitable for constructing homomorphism-based cloud servers. In addition, users can independently select the LowMC parameter set (ñ, k, m, d), which meets different user needs and gives the scheme a wider scope of application.
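The bit-to-polynomial encoding step can be illustrated with a toy sketch, assuming the simplest one-bit-per-coefficient packing with plaintext modulus 2. The real FHE-LowMC encoding (and the slot structure induced by the choice of f(X)) is more involved, and the function names here are hypothetical.

```python
def encode_bits(bits, n):
    """Pack a LowMC plaintext bit string into the coefficient vector of an
    integer polynomial of degree < n (one bit per coefficient, modulus 2)."""
    assert len(bits) <= n, "block must fit in the polynomial degree"
    return list(bits) + [0] * (n - len(bits))

def decode_bits(coeffs, length):
    """Invert the encoding: reduce each integer coefficient mod 2 and
    truncate to the original block length."""
    return [c % 2 for c in coeffs[:length]]
```

Decoding reduces mod 2 because homomorphic operations produce integer coefficients; only their parity carries the LowMC plaintext bit.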

    Efficient smart contract testing scheme supporting transactions filtering
    PENG Yongxiang, MA Yong, LIU Zhiquan, WANG Libo, WU Yongdong, CHEN Ning, TANG Yong
    Journal of Xidian University. 2024, 51(3):  194-202.  doi:10.19665/j.issn1001-2400.20230803

    In recent years, the smart contract has become a focal point of both industry and academia as a vital component of the Ethereum blockchain. A smart contract is a program deployed on the blockchain that enables distributed transactions. However, the financial attributes of smart contracts make them targets of hacker attacks. To ensure contract security, vulnerabilities must be identified and repaired, and functional consistency must be guaranteed through rigorous testing. Regrettably, existing smart contract testing schemes suffer from several shortcomings, including low replay accuracy and high storage consumption. In response to these challenges, an efficient smart contract testing scheme supporting transaction filtering is proposed, which first models transaction features based on Ethereum state changes to enhance scalability; then optimizes storage space by storing Ethereum historical data in a second-order tree structure; and finally performs transaction replay through the forking mechanism to test patched contracts without interfering with the main chain. A prototype tool, SCTester, is implemented based on the proposed scheme and compared against existing contract testing schemes such as EVMPatch, Hartel, and Kim. Experimental results show the superiority of the proposed approach in terms of scalability and replay accuracy. In addition, it reduces storage space by 21.6% compared with Kim, and reduces the time consumed by transaction replay under the account testing scenario by 70.5% compared with Kim.
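The transaction-filtering idea can be sketched in a few lines. This assumes each historical transaction is a record carrying the set of contract addresses whose state it changed, which is a simplification of the paper's state-change feature model; all field and function names are hypothetical.

```python
def filter_transactions(history, patched_addresses):
    """Keep only transactions whose recorded state changes touch at least
    one patched contract, so replay can skip the rest of the history."""
    patched = set(patched_addresses)
    return [tx for tx in history if patched & set(tx["state_changes"])]
```

Replaying only the filtered subset against a fork of the chain state is what keeps testing of the patched contract off the main chain.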

    Time series anomaly detection based on multi-scale feature information fusion
    HENG Hongjun, YU Longwei
    Journal of Xidian University. 2024, 51(3):  203-214.  doi:10.19665/j.issn1001-2400.20230906

    Currently, most time series lack corresponding anomaly labels, and existing reconstruction-based anomaly detection algorithms fail to effectively capture the complex underlying correlations and temporal dependencies among multidimensional data. To construct feature-rich time series, a multi-scale feature information fusion anomaly detection model is proposed. First, the model employs convolutional neural networks to convolve the sequences within sliding windows, capturing local contextual information at different scales. Then, position encoding from the Transformer is used to embed the convolved time series windows, enhancing the positional relationships between each time series and its neighboring sequences within the sliding window. Temporal attention is introduced to capture the temporal autocorrelation of the data, and multi-head self-attention adaptively assigns different weights to different time series within the window. Finally, the reconstructed window data obtained through down-sampling is progressively fused with the local features and temporal context information at different scales, accurately reconstructing the original time series; the reconstruction error is used as the final anomaly score for anomaly determination. Experimental results indicate that the model achieves improved F1 scores over the baseline models on both the SWaT and SMD datasets; on the high-dimensional and imbalanced WADI dataset, the F1 score is 1.66% lower than that of the GDN model.
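The multi-scale local-feature step can be illustrated with a toy univariate sketch. The paper's model uses learned CNN filters; this substitutes fixed moving-average kernels of several widths purely to show the shape of the multi-scale output, and the names are hypothetical.

```python
import numpy as np

def multi_scale_features(window, kernel_sizes=(2, 4, 8)):
    """Convolve one window with moving-average kernels of several widths
    and stack the same-length outputs as multi-scale local features."""
    feats = [np.convolve(window, np.ones(k) / k, mode="same")
             for k in kernel_sizes]
    return np.stack(feats, axis=0)  # shape: (num_scales, window_length)
```

Each row of the result summarizes the window at one receptive-field size; in the full model these per-scale features are what the position encoding and attention layers consume before fusion.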
