
Top Read Articles

    Sea-surface multi-target tracking method aided by target returns features
    ZHANG Yichen,SHUI Penglang,LIAO Mo
    Journal of Xidian University    2023, 50 (5): 1-10.   DOI: 10.19665/j.issn1001-2400.20230201

    Due to the complex marine environment and dense sea-surface targets, radars often face tough tracking scenarios with a high false alarm rate and high target density. The measurement points originating from clutter and multiple closely spaced targets appear densely in the detection space. Traditional tracking methods use only the position information, which cannot distinguish the specific source of a measurement well, resulting in serious degradation of the tracking performance. Target returns features can be used to solve this problem without increasing the complexity of the algorithm, but the generalization ability of such features is low, so it is necessary to select suitable features according to different radar systems, working scenes and requirements. In this paper, the test statistic and the target radial velocity measurement are used as the target returns features, and the tracking equations are reconstructed so that the features can be fully applied in all stages of tracking. In addition, this paper adopts a "two-level" tracking process, which divides tracks into confirmed tracks and candidate tracks according to track quality. Experimental results show that the proposed method can achieve robust target tracking in complex multi-target scenarios on the sea surface.

    Research on the multi-objective algorithm of UAV cluster task allocation
    GAO Weifeng, WANG Qiong, LI Hong, XIE Jin, GONG Maoguo
    Journal of Xidian University    2024, 51 (2): 1-12.   DOI: 10.19665/j.issn1001-2400.20230413

    Aiming at the cooperative task allocation problem of a UAV swarm in the target recognition scenario, an optimization model taking the recognition cost and recognition benefit as objectives is established, and a decomposition-based multi-objective differential evolution algorithm is designed to solve the model. First, an elite initialization method is proposed, and the initial solutions are screened to improve the quality of the solution set while ensuring a uniform distribution of the obtained nondominated solutions. Second, a multi-objective differential evolution operator under integer encoding is constructed based on the model characteristics to improve the convergence speed of the algorithm. Finally, a tabu search strategy with restrictions is designed so that the algorithm has the ability to escape from local optima. The algorithm provides a set of nondominated solutions for the problem, so that a more reasonable optimal solution can be selected according to actual needs. After the allocation scheme is obtained by the above method, a task reallocation strategy is designed based on the auction algorithm, and the allocation scheme is further adjusted to cope with the unexpected situation of UAV damage. Simulation experiments verify the effectiveness of the proposed algorithm in solving small-, medium- and large-scale task allocation problems; moreover, compared with other algorithms, the nondominated set obtained by the proposed algorithm has a higher quality, consuming less recognition cost and obtaining a higher recognition benefit, which indicates that the proposed algorithm has certain advantages.
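
    The following is a minimal sketch of how one decomposition-style differential evolution step could look under integer encoding for task allocation. The cost/benefit matrices, weight vectors and DE/rand/1 operator are illustrative assumptions, not the authors' implementation.

```python
"""Sketch: one decomposition-based DE step under integer encoding (illustrative)."""
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_uavs = 12, 4
cost = rng.uniform(1, 5, (n_tasks, n_uavs))      # assumed recognition cost of task i on UAV j
benefit = rng.uniform(0, 1, (n_tasks, n_uavs))   # assumed recognition benefit of task i on UAV j

def objectives(x):
    """Two objectives to minimise: total cost and negative total benefit."""
    idx = np.arange(n_tasks)
    return np.array([cost[idx, x].sum(), -benefit[idx, x].sum()])

def tchebycheff(f, w, z_star):
    """Tchebycheff scalarisation commonly used by decomposition-based MOEAs."""
    return np.max(w * np.abs(f - z_star))

def de_step(pop, i, w, z_star, F=0.5, CR=0.9):
    """DE/rand/1 mutation + binomial crossover, rounded back to a valid integer allocation."""
    a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
    mutant = np.clip(np.rint(a + F * (b - c)), 0, n_uavs - 1).astype(int)
    cross = rng.random(n_tasks) < CR
    trial = np.where(cross, mutant, pop[i])
    # Greedy replacement on the scalarised sub-problem i
    if tchebycheff(objectives(trial), w, z_star) < tchebycheff(objectives(pop[i]), w, z_star):
        pop[i] = trial
    return pop

pop = rng.integers(0, n_uavs, (20, n_tasks))            # population of candidate allocations
z_star = np.min([objectives(x) for x in pop], axis=0)   # ideal point estimate
weights = np.linspace(0.05, 0.95, 20)
for gen in range(100):
    for i, w1 in enumerate(weights):
        pop = de_step(pop, i, np.array([w1, 1 - w1]), z_star)
        z_star = np.minimum(z_star, objectives(pop[i]))
```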

    Spectrum compression based autofocus algorithm for the TOPS BP image
    ZHOU Shengwei, LI Ning, XING Mengdao
    Journal of Xidian University    2024, 51 (1): 1-10.   DOI: 10.19665/j.issn1001-2400.20230102

    In high-squint TOPS mode SAR imaging on a maneuvering platform, using the BP imaging algorithm in the ground-plane rectangular coordinate system, a wide-swath SAR image without distortion in the ground plane can be obtained in a short time. However, how to quickly complete motion error compensation and sidelobe suppression of the BP image remains difficult in practical applications. This paper proposes an improved spectral compression method, which can quickly realize follow-up operations such as autofocus of the ground-plane BP image in the high-squint TOPS mode of a maneuvering platform. First, considering that the traditional BP spectral compression method is only applicable to the spotlight imaging mode, an improved exact spectral compression function is derived by combining the virtual rotation center theory of high-squint TOPS SAR with wavenumber spectrum analysis. This function yields an unambiguous ground-plane TOPS-mode BP image spectrum through full-aperture compression, on the basis of which phase gradient autofocus (PGA) can be used to quickly complete full-aperture motion error estimation and compensation. In addition, based on the unambiguous aligned BP image spectrum obtained by the proposed improved spectral compression method, image sidelobe suppression can be realized by uniformly windowing in the azimuth frequency domain. Finally, the effectiveness of the proposed algorithm is verified by simulation data processing.

    Efficient federated learning privacy protection scheme
    SONG Cheng,CHENG Daochen,PENG Weiping
    Journal of Xidian University    2023, 50 (5): 178-187.   DOI: 10.19665/j.issn1001-2400.20230403

    Federated learning allows clients to jointly train models by sharing only gradients, rather than directly feeding the training data to the server. Although federated learning avoids exposing data directly to third parties and thus plays a certain role in protecting data, research shows that gradients transmitted in federated learning scenarios can still lead to the disclosure of private information. Encrypting the gradients can prevent such leakage, but the computing and communication overhead brought by the encryption scheme in the training process affects the training efficiency, making it difficult to apply in resource-constrained environments. Aiming at the security and efficiency problems of privacy protection schemes in current federated learning, a safe and efficient privacy protection scheme for federated learning is proposed by combining homomorphic encryption and compression techniques. The homomorphic encryption algorithm is optimized to ensure the security of the scheme while reducing the number of operations and improving operational efficiency. At the same time, a gradient filtering compression algorithm is designed to filter out local updates that are not related to the convergence trend of the global model, and the update parameters are quantized by a computationally negligible compression operator, which preserves the accuracy of the model and increases the communication efficiency. The security analysis shows that the scheme satisfies security properties such as indistinguishability, data privacy and model security. Experimental results show that the proposed scheme not only achieves higher model accuracy, but also has obvious advantages over existing schemes in terms of communication and computation costs.
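
    Below is a minimal sketch of the two compression ideas the abstract describes: filtering out local updates that disagree with the global convergence trend, then applying a cheap quantizer before upload. The cosine-similarity filter and sign-based quantizer are assumptions standing in for the paper's exact operators.

```python
"""Sketch: gradient filtering + lightweight quantisation before encryption/upload."""
import numpy as np

def filter_update(local_update, global_trend, threshold=0.0):
    """Drop local updates whose direction disagrees with the global model trend."""
    cos = np.dot(local_update, global_trend) / (
        np.linalg.norm(local_update) * np.linalg.norm(global_trend) + 1e-12)
    return local_update if cos > threshold else None

def quantize(update):
    """Cheap sign + scale quantiser: one float plus one sign value per weight."""
    scale = np.mean(np.abs(update))
    return scale, np.sign(update).astype(np.int8)

def dequantize(scale, signs):
    return scale * signs.astype(np.float32)

# Toy usage with random vectors standing in for flattened model updates
rng = np.random.default_rng(1)
global_trend = rng.normal(size=1000)
kept = []
for _ in range(5):
    u = global_trend + 0.5 * rng.normal(size=1000)   # clients roughly follow the trend
    u = filter_update(u, global_trend)
    if u is not None:
        kept.append(dequantize(*quantize(u)))        # what the server would aggregate
aggregate = np.mean(kept, axis=0)
```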

    Construction method of temporal correlation graph convolution network for traffic prediction
    ZHANG Kehan,LI Hongyan,LIU Wenhui,WANG Peng
    Journal of Xidian University    2023, 50 (5): 11-20.   DOI: 10.19665/j.issn1001-2400.20221103

    Existing traffic prediction methods for the virtual networks of data centers have difficulty characterizing the correlation between links, which makes it hard to improve the accuracy of traffic prediction. To address this, this paper proposes a Temporal Correlation Graph Convolutional Network (TC-GCN), which enables the representation of the temporal and spatial correlation of data center network link traffic and improves the accuracy of traffic prediction. First, a graph convolutional neural network adjacency matrix with a time attribute is constructed to solve the prediction deviation caused by traffic asynchronism between virtual network links and to achieve an accurate representation of link correlation. Second, a traffic prediction mechanism based on the weighting of long/short window graph convolutional neural networks is designed, which fits the smooth and fluctuating segments of the traffic sequence with finite-length long/short windows, effectively avoids the vanishing gradient problem of the neural network, and improves the traffic prediction accuracy of the virtual network. Finally, an error weighting unit is designed to sum the prediction results of the long/short window graph convolutional neural networks, and the output of the network is the predicted value of link traffic. To ensure the practicability of the results, simulation experiments on the proposed temporal correlation graph convolutional network are carried out based on real data center network data. Experimental results show that the proposed method has a higher prediction accuracy than traditional graph convolutional neural network traffic prediction methods.
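
    A minimal sketch of the error-weighting unit described above, combining the long-window and short-window branch predictions with weights inversely proportional to their recent errors. The inverse-error rule and the numbers are illustrative assumptions.

```python
"""Sketch: fusing long- and short-window predictions with error-based weights."""
import numpy as np

def fuse(pred_long, pred_short, err_long, err_short, eps=1e-8):
    """Weight each branch by the inverse of its recent mean absolute error."""
    w_long = 1.0 / (err_long + eps)
    w_short = 1.0 / (err_short + eps)
    return (w_long * pred_long + w_short * pred_short) / (w_long + w_short)

# Toy usage: the short-window branch has been more accurate recently,
# so its prediction dominates the fused output.
print(fuse(pred_long=102.0, pred_short=96.0, err_long=8.0, err_short=2.0))
```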

    Federated learning scheme for privacy-preserving of medical data
    WANG Bo,LI Hongtao,WANG Jie,GUO Yina
    Journal of Xidian University    2023, 50 (5): 166-177.   DOI: 10.19665/j.issn1001-2400.20230202

    As an emerging training paradigm for neural networks, federated learning has received widespread attention due to its ability to carry out model training while protecting user data privacy. However, since adversaries can track and derive participants' privacy from the shared gradients, federated learning is still exposed to various security and privacy threats. Aiming at the privacy leakage problem of medical data in the process of federated learning, a secure and privacy-preserving federated learning architecture for medical data based on Paillier homomorphic encryption (HEFLPS) is proposed. First, the shared training model of the client is encrypted with Paillier homomorphic encryption to ensure the security and privacy of the training model, and a zero-knowledge proof identity authentication module is designed to ensure the credibility of the training members. Second, disconnected or unresponsive users are temporarily eliminated by constructing a message confirmation mechanism on the server side, which reduces the waiting time of the server and the communication cost. Experimental results show that the proposed mechanism achieves high model accuracy, low communication delay and a certain degree of scalability while achieving privacy protection.

    Research on a clustering-assisted intelligent spectrum allocation technique
    ZHAO Haoqin, YANG Zheng, SI Jiangbo, SHI Jia, YAN Shaohu, DUAN Guodong
    Journal of Xidian University    2023, 50 (6): 1-12.   DOI: 10.19665/j.issn1001-2400.20231006

    Aiming at the low spectrum utilization of traditional spectrum allocation schemes in large-scale, highly dynamic electromagnetic spectrum warfare systems, intelligent spectrum allocation technology is investigated. In this paper, we first construct a complex and highly dynamic electromagnetic spectrum combat scenario and, under the coexistence of multiple types of equipment such as radar, communication and jamming, model spectrum allocation in the complex electromagnetic environment as an optimization problem that maximizes the number of access devices. Second, an intelligent spectrum allocation algorithm based on clustering assistance is proposed. Since centralized resource allocation algorithms face the problem of exploding action space dimensions, a multi-DDQN network is used to characterize the decision-making information of each node. Then, based on the elbow method and the K-means++ algorithm, a multi-node collaborative approach is proposed, in which nodes within a cluster make chained decisions by sharing action information and nodes between clusters make independent decisions, assisting the DDQN algorithm to allocate resources intelligently. By designing the state and action spaces and the reward function, and adopting a variable learning rate to achieve fast convergence of the algorithm, the nodes are able to dynamically allocate multidimensional resources such as frequency and energy according to changes in the electromagnetic environment. Simulation results show that under the same electromagnetic environment, when the number of nodes is 20, the number of accessible devices of the proposed algorithm is increased by about 80% compared with the greedy algorithm and by about 30% compared with the genetic algorithm, making it more suitable for spectrum allocation of multiple devices in a dynamic electromagnetic environment.
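
    A small sketch of the clustering-assistance step: choosing the number of clusters with an elbow-style heuristic and then grouping nodes with K-means++ via scikit-learn. The node coordinates and the knee-detection rule are assumptions.

```python
"""Sketch: elbow heuristic + K-means++ grouping of spectrum-using nodes."""
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
nodes = rng.uniform(0, 100, (20, 2))     # assumed 2-D positions of 20 nodes

# Elbow curve: within-cluster sum of squares (inertia) for each candidate k
ks = range(1, 8)
inertias = [KMeans(n_clusters=k, init="k-means++", n_init=10,
                   random_state=0).fit(nodes).inertia_ for k in ks]

# Crude knee pick: first k where the marginal drop falls below 10% of the total drop
drops = np.diff(inertias)
k_best = next((k for k, d in zip(list(ks)[1:], drops)
               if abs(d) < 0.1 * (inertias[0] - inertias[-1])), 3)

labels = KMeans(n_clusters=k_best, init="k-means++", n_init=10,
                random_state=0).fit_predict(nodes)
# Nodes sharing a label would then make chained decisions inside their cluster,
# while clusters decide independently, as described above.
print(k_best, labels)
```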

    Analysis of the spatial coverage area of linear distributed directional array beamforming
    DUAN Baiyu,YANG Jian,CHEN Cong,GUO Wenbo,LI Tong,SHAO Shihai
    Journal of Xidian University    2023, 50 (5): 32-43.   DOI: 10.19665/j.issn1001-2400.20230103

    The phased array antenna has been widely used in radar, communication and other fields because of its advantages of high gain, high reliability and beam controllability. Considering the limitations of size, deployment terrain and power consumption, it is difficult for a single phased array antenna to meet the requirements in some complex scenes, especially in scenarios such as space-earth communication, reconnaissance and jamming, so it is necessary to deploy multiple phased array antennas in a distributed manner for cooperative beamforming to obtain a higher power gain than a single array antenna. The distributed directional array uses multiple distributed array nodes to realize a virtual antenna array, sending or receiving the same signal, with the phase of each array element adjusted to form a directional beam. Aiming at the problem of calculating the gain coverage area of a distributed directional array beam in a plane at a specific height, a calculation method is proposed based on the principle of array synthesis and spatial analytic geometry. Analysis and simulation results show that the gain coverage area of the linear distributed directional array beam, including the main lobe and grating lobe gain coverage areas, is strongly correlated with the elevation angle of the distributed array, the height of the target plane, the signal carrier frequency and the number of distributed nodes, while it is weakly correlated with the distance between the distributed nodes. The analytical values of the proposed method are consistent with the computer simulation values, which can provide a theoretical reference for the implementation of long-distance high-power distributed arrays in engineering.
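
    The sketch below illustrates the underlying array-synthesis idea: the normalized array factor of N co-phased nodes on a line, whose main lobe and grating lobes would then be projected onto a constant-height plane to obtain the coverage area. The spacing, frequency and steering angle are placeholder values, not the paper's geometry.

```python
"""Sketch: array factor of a linear distributed array of co-phased nodes."""
import numpy as np

c = 3e8
f = 3e9                      # assumed carrier frequency (Hz)
lam = c / f
N = 4                        # number of distributed nodes
d = 10 * lam                 # node spacing >> lambda, hence grating lobes
theta0 = np.deg2rad(30.0)    # assumed steering (elevation) angle

theta = np.linspace(0, np.pi, 20001)
k = 2 * np.pi / lam
# Phase of node n after steering compensation toward theta0
n = np.arange(N)[:, None]
phase = k * n * d * (np.cos(theta) - np.cos(theta0))
af = np.abs(np.exp(1j * phase).sum(axis=0)) / N        # normalised array factor
gain_db = 20 * np.log10(af + 1e-12)

# Angular sectors within 3 dB of the peak: main lobe plus grating lobes,
# whose intersection with the target-height plane gives the coverage area.
covered = theta[gain_db > -3.0]
print(np.rad2deg(covered[:5]))
```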

    Work pattern recognition method based on feature fusion
    LIU Gaogao, HUANG Dongjie, XI Xin, LI Hao, CAO Xuyuan
    Journal of Xidian University    2023, 50 (6): 13-20.   DOI: 10.19665/j.issn1001-2400.20230705

    Operational pattern recognition is one of the important means in the field of intelligence reconnaissance and electronic countermeasures; it determines the function and behavior of a radar through signal processing and analysis. With the diversification of modern airborne radar functions, the corresponding signal styles are becoming more and more complex, and the increasingly complex reconnaissance environment also leads to uneven quality of reconnaissance signals, which brings great difficulties to traditional operational pattern recognition methods. To solve this problem, based on existing work pattern recognition methods, a new work pattern recognition method is proposed, which integrates parameter feature recognition and D-S evidence theory recognition. First, for the radiation source characteristic signals processed by each reconnaissance platform, the feature parameter recognition algorithm is used to quickly obtain the working mode information, and the recognition results are verified by D-S evidence theory. Second, for signals that cannot be recognized by a single platform, D-S evidence theory fusion recognition is used to distinguish the working mode. Theoretical analysis shows that the algorithm has the advantages of fast operation speed and a simple structure, and that the new fusion recognition method can improve the recognition accuracy of the working mode. Finally, the feasibility of the method is verified by simulation.
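
    A small sketch of the D-S fusion step: combining basic probability assignments from two platforms over a set of candidate work modes with Dempster's rule of combination. The mode names and mass values are invented for illustration.

```python
"""Sketch: Dempster's rule of combination for two platforms' mode evidence."""
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict, evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment: assumed candidate work modes {search, track, guidance}
S, T, G = frozenset({"search"}), frozenset({"track"}), frozenset({"guidance"})
theta = S | T | G
m_platform1 = {S: 0.6, T: 0.3, theta: 0.1}
m_platform2 = {S: 0.5, G: 0.2, theta: 0.3}
print(dempster_combine(m_platform1, m_platform2))
```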

    Cause-effect graph enhanced APT attack detection algorithm
    ZHU Guangming,LU Zijie,FENG Jiawei,ZHANG Xiangdong,ZHANG Fengjun,NIU Zuoyuan,ZHANG Liang
    Journal of Xidian University    2023, 50 (5): 107-117.   DOI: 10.19665/j.issn1001-2400.20221105

    With the development of information technology, cyberspace has also given rise to an increasing number of security risks and threats. Advanced cyberattacks are becoming more common, with the Advanced Persistent Threat (APT) attack being one of the most sophisticated and one commonly adopted by modern attackers. Traditional statistical or machine learning detection methods based on network flows struggle to cope with complicated and persistent APT-style attacks. To overcome the difficulty in detecting APT attacks, a cause-effect graph enhanced APT attack detection algorithm is proposed to model the interaction process between network nodes at different times and identify malicious packets in network flows during the attack process. First, the cause-effect graph is used to model network packet sequences, and the data flows between IP nodes in the network are associated to establish the context sequences of attack and non-attack behaviors. Then, the sequence data are normalized, and a deep learning model based on the long short-term memory network (LSTM) is used for sequence classification. Finally, based on the sequence classification results, the original packets are screened for malicious content. A new dataset is constructed based on the DAPT 2020 dataset, and the proposed algorithm's ROC-AUC on the test set reaches 0.948. Experimental results demonstrate that the attack detection algorithm based on cause-effect graph sequences has obvious advantages and is a feasible algorithm for detecting APT attack network flows.
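
    A minimal PyTorch sketch of the sequence-classification stage: an LSTM over normalized per-packet feature vectors from one cause-effect-graph context sequence, ending in a binary attack/non-attack score. The feature dimension, hidden size and sequence length are assumptions.

```python
"""Sketch: LSTM classifier over packet-context sequences (attack vs. benign)."""
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, n_features=16, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last hidden state

model = SeqClassifier()
x = torch.randn(8, 50, 16)               # 8 context sequences of 50 packets each
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()                           # an optimizer step would follow in training
```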

    Indoor pseudolite hybrid fingerprint positioning method
    LI Yaning,LI Hongsheng,YU Baoguo
    Journal of Xidian University    2023, 50 (5): 21-31.   DOI: 10.19665/j.issn1001-2400.20221102

    At present, the interaction mechanism between the complex indoor environment and pseudolite signals has not been fundamentally resolved, and the stability, continuity and accuracy of indoor positioning are still technical bottlenecks. Existing fingerprint positioning methods face the limitation that the collection workload is proportional to the positioning accuracy and positioning range, and they cannot complete positioning in areas where fingerprints have not actually been collected. To overcome these shortcomings, by combining the advantages of actual measurement, mathematical simulation and the artificial neural network, an indoor pseudolite hybrid fingerprint positioning method based on actual fingerprints, simulated fingerprints and an artificial neural network is proposed. First, the actual environment and the signal transceivers are modeled. Second, the simulated fingerprints generated by ray-tracing simulation, after conversion, and the measured fingerprints are both added to the input of the neural network, which expands the sample characteristics of the input data set beyond the original measured fingerprints alone. Finally, the artificial neural network positioning model is jointly trained on the mixed fingerprints and then used for online positioning. Taking an airport environment as an example, it is shown that the hybrid method can improve the positioning accuracy in sparsely collected fingerprint regions, with a root mean square error of 0.485 0 m, which is 54.7% lower than that of the traditional fingerprint positioning method. Preliminary positioning can also be completed in areas where no fingerprints are collected, with a root mean square positioning error of 1.123 7 m, which breaks through the limitations of traditional fingerprint positioning methods.

    Traffic flow prediction method for integrating longitudinal and horizontal spatiotemporal characteristics
    HOU Yue,ZHENG Xin,HAN Chengyan
    Journal of Xidian University    2023, 50 (5): 65-74.   DOI: 10.19665/j.issn1001-2400.20221101

    Aiming at the problems that existing urban road traffic flow prediction research insufficiently mines the time delay characteristics and spatial flow characteristics of upstream and downstream traffic flow and insufficiently considers the spatiotemporal characteristics of lane-level traffic flow, a traffic flow prediction method integrating longitudinal and horizontal spatiotemporal characteristics is proposed. First, the method quantifies and eliminates the effect of the spatial time lag between upstream and downstream traffic flow by calculating the delay time, so as to enhance the spatiotemporal correlation of the upstream and downstream traffic flow sequences. Then, the traffic flow with the spatial time lag eliminated is passed into a bidirectional long short-term memory network through a vector-split data input method to capture the bidirectional spatiotemporal relationship of longitudinal transmission and backtracking between upstream and downstream traffic flow. At the same time, a multiscale convolution group is used to mine the multi-time-step horizontal spatiotemporal relationship between the traffic flows of each lane in the section to be predicted. Finally, an attention mechanism is used to dynamically fuse the longitudinal and horizontal spatiotemporal characteristics to obtain the predicted value. Experimental results show that in the single-step prediction experiment the MAE and RMSE of the proposed method decrease by 15.26% and 13.83% respectively, an improvement of 1.25% over the conventional time series prediction model. The medium- and long-term multi-step prediction experiments further prove that the proposed method can effectively mine the fine-grained spatiotemporal characteristics of longitudinal and horizontal traffic flow and has a certain stability and universality.
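
    A small sketch of the delay-time step: estimating the lag between upstream and downstream flow series by cross-correlation and shifting the series to remove the spatial time lag before they enter the BiLSTM branch. The synthetic series and the cross-correlation estimator are assumptions.

```python
"""Sketch: estimating and removing the upstream/downstream time lag."""
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(288)                                   # one day of 5-minute slots
upstream = 100 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)
true_lag = 4                                         # downstream sees traffic 4 slots later
downstream = np.roll(upstream, true_lag) + rng.normal(0, 2, t.size)

def estimate_lag(up, down, max_lag=12):
    """Lag (in slots) that maximises correlation between up(t) and down(t + lag)."""
    up0, down0 = up - up.mean(), down - down.mean()
    corrs = [np.dot(up0[:len(up0) - k], down0[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

lag = estimate_lag(upstream, downstream)
aligned_upstream = upstream[:len(upstream) - lag]    # these aligned pairs would be
aligned_downstream = downstream[lag:]                # fed into the BiLSTM branch
print(lag)
```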

    Research on the interference combinational sequence generation algorithm for the intelligent countermeasure UAV
    MA Xiaomeng, GAO Meiguo, YU Mohan, LI Yunjie
    Journal of Xidian University    2023, 50 (6): 44-61.   DOI: 10.19665/j.issn1001-2400.20230903

    With the maturity and development of autonomous navigation and flight control technology for unmanned aerial vehicles (UAVs), unauthorized UAVs increasingly appear in controlled airspace, which poses a great hidden danger to personal safety and causes a certain degree of economic loss. This paper studies how to improve the effectiveness of adaptive measurement-and-control and navigation interference when the UAV flight control situation is unknown, on the basis of identifying the UAV flight status and evaluating countermeasure effectiveness in real time, and finally realizes an intelligent countermeasure game against non-intelligent UAVs based on the combination of remote-communication jamming and navigation/positioning jamming. In this paper, a game model of the anti-UAV system (AUS) versus the UAV is developed based on the basic units of radar detection, GPS navigation and positioning, UAV remote-communication suppression jamming, and GPS navigation suppression and spoofing. The mathematical model is constructed using deep reinforcement learning and the Markov decision process. Meanwhile, the concept of a situation assessment ring for classifying the UAV flight status is proposed to provide basic information for network sensing of jamming effectiveness. The proximal policy optimization algorithm, the maximum entropy optimization algorithm and the actor-critic algorithm are respectively used to train the constructed intelligent AUS many times, and the resulting network parameters are used to generate intelligent interference combination sequences according to the UAV flight state and countermeasure effectiveness. The intelligent interference combination sequences generated by the various deep reinforcement learning algorithms in this paper all achieve the initial goal of deceiving UAVs, which verifies the effectiveness of the anti-UAV system model. The comparison experiment shows that the proposed situation assessment ring is sufficient and effective for AUS sensing of interference effectiveness.

    Real-time power scheduling optimization strategy for 5G base stations considering energy sharing
    LIU Didi,YANG Yuhui,XIAO Jiawen,YANG Yifei,CHENG Pengpeng,ZHANG Quanjing
    Journal of Xidian University    2023, 50 (5): 44-53.   DOI: 10.19665/j.issn1001-2400.20230101

    To alleviate the pressure on society's power supply caused by the huge energy consumption of 5th generation mobile communication (5G) base stations, a model jointly considering distributed renewables, energy sharing and energy storage is proposed with the objective of minimizing the long-term power purchase cost of network operators. A low-complexity real-time scheduling algorithm for energy sharing based on Lyapunov optimization theory is proposed, taking into account the fact that a priori statistical information on renewable energy output, energy demand and time-varying tariffs in smart grids is unknown. A virtual queue is constructed for the flexible electricity demand of the base stations when solving the optimization problem, and the time-coupling constraint of energy storage in the energy scheduling problem is transformed into a virtual queue stability problem. The proposed algorithm schedules the renewable energy output, energy storage, energy use and energy sharing of the base stations in real time, and minimizes the long-term cost of network operators purchasing power from the external grid on the premise of meeting the electricity demand of each base station. Theoretical analysis shows that the proposed algorithm only needs to make real-time decisions based on the current system state and that the optimization result can be made arbitrarily close to the optimal value. Finally, simulation results show that the proposed algorithm can effectively reduce the power purchase cost of the network operator by 43.1% compared with the baseline greedy algorithm.
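
    The drift-plus-penalty pattern behind such Lyapunov schedulers can be illustrated with a deliberately reduced toy: a single virtual queue tracks unmet flexible demand, and each slot a purchase decision trades queue backlog against the current price. The single-queue model, threshold rule and all numbers are assumptions, not the paper's full formulation.

```python
"""Sketch: drift-plus-penalty decision with one virtual queue (toy model)."""
import numpy as np

rng = np.random.default_rng(4)
V = 50.0          # weight on cost (larger V -> lower cost, larger backlog)
Q = 0.0           # virtual queue: accumulated unserved flexible demand
P_MAX = 10.0      # maximum energy purchasable per slot

for t in range(96):                       # 15-minute slots over a day
    price = rng.uniform(0.2, 1.0)         # time-varying tariff
    demand = rng.uniform(0.0, 8.0)        # flexible demand arriving this slot
    renewable = rng.uniform(0.0, 5.0)     # local renewable output (free)

    # Per-slot objective ~ V*price*purchase - Q*purchase, so buy at P_MAX when Q > V*price.
    purchase = P_MAX if Q > V * price else 0.0
    served = min(Q + demand, renewable + purchase)

    # Queue update: backlog grows with demand, shrinks with served energy.
    Q = max(Q + demand - served, 0.0)

print("final backlog:", Q)
```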

    Research on lightweight and feature enhancement of SAR image ship targets detection
    GONG Junyang, FU Weihong, FANG Houzhang
    Journal of Xidian University    2024, 51 (2): 96-106.   DOI: 10.19665/j.issn1001-2400.20230407

    The accuracy of ship target detection in synthetic aperture radar images is susceptible to nearshore clutter, and existing detection algorithms are highly complex and difficult to deploy on embedded devices. To address these problems, a lightweight and high-precision SAR image ship target detection algorithm, CA-Shuffle-YOLO (Coordinate Shuffle You Only Look Once), is proposed in this article. Based on the YOLO v5 target detection algorithm, the backbone network is improved in two aspects: lightweight design and feature refinement. A lightweight module is introduced to reduce the computational complexity of the network and improve the reasoning speed, and a collaborative attention mechanism module is introduced to enhance the algorithm's ability to extract detailed information on nearshore ship targets. In the feature fusion network, weighted feature fusion and cross-module fusion are used to enhance the model's ability to fuse the detailed information on SAR ship targets. At the same time, depthwise separable convolution is used to reduce the computational complexity and improve the real-time performance. Tests and comparison experiments on the SSDD ship target detection dataset show that the detection accuracy of CA-Shuffle-YOLO is 97.4%, the detection frame rate is 206 FPS, and the required computational complexity is 6.1 GFlops. Compared with the original YOLO v5, the frame rate of the proposed algorithm is 60 FPS higher, while its required computational complexity is only about 12% of that of the ordinary YOLO v5.

    Double windows sliding decoding of spatially-coupled quantum LDPC codes
    WANG Yunjiang, ZHU Gaohui, YANG Yuting, MA Zhong, WEI Lu
    Journal of Xidian University    2024, 51 (1): 11-20.   DOI: 10.19665/j.issn1001-2400.20230301

    Quantum error-correcting codes are the key way to address the errors caused by the noise that inevitably accompanies quantum computing. Spatially coupled quantum LDPC codes (SC-QLDPCs), like their classical counterparts, can in principle achieve a good balance between error-correcting capacity and decoding delay. Considering the high complexity and long decoding delay caused by the standard belief propagation algorithm (BPA) when decoding SC-QLDPCs, a quantum version of the sliding decoding scheme, named the double window sliding decoding algorithm, is proposed in this paper. The proposed algorithm is inspired by classical sliding window decoding strategies and exploits the non-zero diagonal band structure on the principal and sub-diagonals of the two parity-check matrices (PCMs) of the concerned SC-QLDPC. The phase-flip and bit-flip error syndromes of the received codeword are obtained by sliding two windows along the principal and sub-diagonals of the two classical PCMs simultaneously, which enables a good trade-off between complexity and decoding delay, with numerical results given to verify the performance of the proposed double window sliding decoding scheme. Simulation results show that the proposed algorithm can not only offer low-latency decoding output but also provide a decoding performance approaching that of the standard BPA when the window size is enlarged, thus significantly broadening the application scenarios of SC-QLDPCs.

    Anti-collusion attack image retrieval privacy protection scheme for ASPE
    CAI Ying,ZHANG Meng,LI Xin,ZHANG Yu,FAN Yanfang
    Journal of Xidian University    2023, 50 (5): 156-165.   DOI: 10.19665/j.issn1001-2400.20230406

    Existing algorithms based on Asymmetric Scalar-Product-Preserving Encryption (ASPE) realize privacy protection in image retrieval under cloud computing. However, because cloud service providers and retrieval users may be untrustworthy and an external adversary may exist during retrieval, such algorithms cannot resist collusion attacks between malicious users and cloud servers, which may lead to the leakage of image data containing sensitive information. Aiming at multi-user scenarios, an anti-collusion-attack image retrieval privacy protection scheme for ASPE is proposed. First, the scheme uses proxy re-encryption to solve the problem of image key leakage caused by transmitting private keys to untrusted users. Second, the feature key leakage problem caused by collusion between the cloud service provider and the retrieval user is solved by adding diagonal matrix encryption on the client side. Finally, linear discriminant analysis is used to solve the drop in retrieval accuracy caused by dimensionality reduction when locality sensitive hashing is used to construct the index. The security analysis proves that the scheme is safe and effective: it can resist collusion attacks from cloud service providers and untrusted users, ciphertext-only attacks, known-background attacks and known-plaintext attacks, and it protects both images and private keys during the process. Experimental results show that, on the premise of protecting image privacy and ensuring retrieval efficiency, the retrieval accuracy of the proposed scheme in the ciphertext domain differs from that in the plaintext domain by only about 2%.

    Double adaptive image watermarking algorithm based on regional edge features
    GUO Na,HUANG Ying,NIU Baoning,LAN Fangpeng,NIU Zhixian,GAO Zhuojie
    Journal of Xidian University    2023, 50 (5): 118-131.   DOI: 10.19665/j.issn1001-2400.20221107

    Local image watermarking is a research hotspot; it embeds a watermark in part of an image and can resist cropping attacks. Existing local watermarking technology locates the embedding region by feature points, which may be offset when attacks occur. Because pixels near edges differ markedly, if the region contains many edges the offset will lead to excessive regional pixel error and the watermark extraction will fail. To solve this problem, a double adaptive image watermarking algorithm based on regional edge features is proposed. First, a method to determine the embedding region is proposed, which uses a sliding window to choose embedding regions with few edges and good hiding ability by taking image features such as edges and texture into account. Second, a double adaptive watermark embedding scheme is proposed, in which the region is divided into blocks and each block embeds 1 bit of watermark information by modifying pixel values. In the first, coarse-grained adaptive scheme, the function between the embedding parameter and the number of edge pixels is established through linear regression analysis, and the embedding strength is adaptively adjusted by this function to enhance the robustness of blocks containing edges. In the second, fine-grained adaptive scheme, a Gaussian window is used to adaptively adjust the modifications of different pixels to improve the imperceptibility of the watermark. Experiments show that the proposed algorithm can effectively enhance the robustness of the watermark at edges and improve its imperceptibility.
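
    A minimal sketch of the two adaptive ideas: a per-block embedding strength that grows with the amount of edge content (the linear rule here is assumed, not the paper's regression result) and a Gaussian window that concentrates the modification near the block centre. Edge content is measured with a simple gradient magnitude.

```python
"""Sketch: embedding one watermark bit per block with double adaptation."""
import numpy as np

def gaussian_window(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.max()

def edge_measure(block):
    """Crude edge content: sum of absolute horizontal and vertical gradients."""
    gx = np.abs(np.diff(block.astype(float), axis=1)).sum()
    gy = np.abs(np.diff(block.astype(float), axis=0)).sum()
    return gx + gy

def embed_bit(block, bit, base=2.0, slope=0.01, sigma=2.0):
    """+strength for bit 1, -strength for bit 0, spread by a Gaussian window."""
    strength = base + slope * edge_measure(block)      # coarse-grained adaptation
    window = gaussian_window(block.shape[0], sigma)    # fine-grained adaptation
    delta = strength * window * (1 if bit else -1)
    return np.clip(block.astype(float) + delta, 0, 255).astype(np.uint8)

rng = np.random.default_rng(5)
block = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # one 8x8 block of the region
watermarked = embed_bit(block, bit=1)
print(float(np.mean(watermarked.astype(int) - block.astype(int))))
```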

    Fast algorithm for intelligent optimization of the cross ambiguity function of passive radar
    CHE Jibin, WANG Changlong, JIA Yan, REN Zizheng, LIU Chunheng, ZHOU Feng
    Journal of Xidian University    2023, 50 (6): 21-33.   DOI: 10.19665/j.issn1001-2400.20231003

    A passive radar system detects targets by receiving the direct-path signal from the illuminator and the target echo signal, and the cross ambiguity function is an important means of improving coherent accumulation of the echo signal. However, the echo signal received by passive radar is very weak, so the accumulation time must be increased to improve the estimation accuracy, and when the target speed is high the frequency search range also grows. To meet a range of target detection requirements while maintaining real-time data processing, it is of great significance to study fast calculation methods for the cross ambiguity function. Owing to the objective requirements of long-time accumulation and large-scale time-frequency search, the computation of the cross ambiguity function is huge, which makes it difficult for traditional accelerated calculation methods based on exhaustive search to meet the real-time requirements of system processing. To improve the efficiency of cross ambiguity function optimization, a time-frequency difference calculation method based on multi-population feature optimization is proposed in this paper. By deeply analyzing the characteristics of typical digital TV signals, a two-stage fast calculation method for intelligent optimization of the cross ambiguity function based on target characteristics is designed within the framework of particle swarm optimization. By designing an effective search strategy, the method introduces a multi-population iteration mechanism and a shrinkage factor, which avoids the redundant computation of traditional methods. On the premise of ensuring calculation accuracy, the number of time-frequency points to be computed is greatly reduced and the search efficiency of the cross ambiguity function is improved.
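
    For reference, the quantity being optimized can be sketched as follows: the cross ambiguity function between a reference (direct-path) signal and a surveillance signal over a delay/Doppler grid. The signal parameters are toy values, and the full grid scan shown here is exactly what a PSO-style search would replace by evaluating only selected (delay, Doppler) points.

```python
"""Sketch: cross ambiguity function (CAF) over a small delay/Doppler grid."""
import numpy as np

fs = 1e5                       # assumed sample rate (Hz)
N = 4096
t = np.arange(N) / fs
rng = np.random.default_rng(6)
ref = rng.normal(size=N) + 1j * rng.normal(size=N)      # direct-path reference

true_delay, true_doppler = 25, 500.0                    # samples, Hz
echo = np.roll(ref, true_delay) * np.exp(2j * np.pi * true_doppler * t)
echo += 0.5 * (rng.normal(size=N) + 1j * rng.normal(size=N))

def caf(ref, surv, delay, doppler):
    """|sum_n surv[n] * conj(ref[n - delay]) * exp(-j 2 pi fd t[n])|"""
    shifted = np.roll(ref, delay)
    return np.abs(np.sum(surv * np.conj(shifted) * np.exp(-2j * np.pi * doppler * t)))

delays = np.arange(0, 50)
dopplers = np.arange(0, 1000, 50.0)
surface = np.array([[caf(ref, echo, d, fd) for fd in dopplers] for d in delays])
d_hat, fd_hat = np.unravel_index(np.argmax(surface), surface.shape)
print(delays[d_hat], dopplers[fd_hat])
```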

    Time-varying channel prediction algorithm based on the attention denoising and complex LSTM network
    CHENG Yong, JIANG Fengyuan
    Journal of Xidian University    2024, 51 (1): 29-40.   DOI: 10.19665/j.issn1001-2400.20230203

    With the development of wireless communication technology, research on communication in high-speed scenarios is becoming more and more extensive, and obtaining accurate channel state information is of great significance for improving the performance of a wireless communication system. To address the problems that existing channel prediction algorithms for orthogonal frequency division multiplexing (OFDM) systems do not consider the influence of noise and have low prediction accuracy in high-speed scenarios, a time-varying channel prediction algorithm based on attention denoising and complex convolutional LSTM is proposed. First, a channel-attention denoising network is proposed to denoise the channel state information, which reduces the influence of noise on the channel state information. Second, a channel prediction model based on complex convolutional layers and long short-term memory (LSTM) is constructed; the denoised channel state information at historical moments is extracted and then input into the channel prediction model to predict the channel state information at future moments. The improved LSTM prediction model enhances the ability to extract channel timing features and improves the accuracy of channel prediction. Finally, the Adam optimizer is used to train the model that predicts the channel state information at future times. Simulation results show that the proposed time-varying channel prediction algorithm based on the attention denoising and complex convolutional LSTM network has a higher prediction accuracy for channel state information than the comparison algorithms, and that the proposed method can be applied to time-varying channel prediction in high-speed moving scenarios.

    Nuclear segmentation method for thyroid carcinoma pathologic images based on boundary weighting
    HAN Bing,GAO Lu,GAO Xinbo,CHEN Weiming
    Journal of Xidian University    2023, 50 (5): 75-86.   DOI: 10.19665/j.issn1001-2400.20230501

    Thyroid cancer is one of the most rapidly growing malignancies among all solid cancers. Pathological diagnosis is the gold standard for doctors to diagnose tumors, and nuclear segmentation is a key step in the automatic analysis of pathological images. Aiming at the low segmentation performance of existing methods on nuclear boundaries in thyroid carcinoma pathological images, we propose an improved U-Net method based on boundary weighting for nuclear segmentation. The method uses a designed boundary weighting module that makes the segmentation network pay more attention to nuclear boundaries. At the same time, to prevent the network from paying too much attention to the boundary and ignoring the main body of the nucleus, which would cause some lightly stained nuclei to be missed, we design a segmentation network that enhances the foreground area and suppresses the background area in the upsampling stage. In addition, we build a dataset for nuclear segmentation of thyroid carcinoma pathological images named the VIP-TCHis-Seg dataset. Our method achieves a Dice coefficient of 85.26% and a pixel accuracy (PA) of 95.89% on the self-built TCHis-Seg dataset, and a Dice coefficient of 81.03% and a pixel accuracy of 94.63% on the common MoNuSeg dataset. Experimental results show that our method achieves the best performance on both Dice and PA and effectively improves the segmentation accuracy of the network at boundaries compared with other methods.
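
    A minimal PyTorch sketch of one common way to weight boundaries in a segmentation loss: pixels in a thin band around nuclear boundaries (dilation minus erosion of the mask, built with max-pooling) get a larger weight in the cross-entropy. The band construction and the weight of 5.0 are assumptions, not the paper's module.

```python
"""Sketch: boundary-weighted cross-entropy for nuclear segmentation."""
import torch
import torch.nn.functional as F

def boundary_band(mask, k=3):
    """1 on a thin band around object boundaries, 0 elsewhere (mask: B,1,H,W in {0,1})."""
    dilated = F.max_pool2d(mask, k, stride=1, padding=k // 2)
    eroded = 1.0 - F.max_pool2d(1.0 - mask, k, stride=1, padding=k // 2)
    return (dilated - eroded).clamp(0, 1)

def boundary_weighted_ce(logits, mask, boundary_weight=5.0):
    """Per-pixel BCE, up-weighted on the boundary band."""
    band = boundary_band(mask)
    weights = 1.0 + (boundary_weight - 1.0) * band
    loss = F.binary_cross_entropy_with_logits(logits, mask, reduction="none")
    return (weights * loss).mean()

logits = torch.randn(2, 1, 64, 64, requires_grad=True)   # network output
mask = (torch.rand(2, 1, 64, 64) > 0.7).float()          # ground-truth nuclei
loss = boundary_weighted_ce(logits, mask)
loss.backward()
print(float(loss))
```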

    Resource optimization algorithm for unmanned aerial vehicle jammer assisted cognitive covert communications
    LIAO Xiaomin, HAN Shuangli, ZHU Xuan, LIN Chushan, WANG Haipeng
    Journal of Xidian University    2023, 50 (6): 75-83.   DOI: 10.19665/j.issn1001-2400.20230603

    Aiming at the covert communication scenario of an unmanned aerial vehicle (UAV) jammer assisting a cognitive radio network, a resource optimization algorithm based on a transferred generative adversarial network is proposed for the joint optimization of the UAV's trajectory and transmit power. First, based on the actual covert communication scenario, a UAV-jammer-assisted cognitive covert communication model is constructed. Then, a resource allocation algorithm based on a transferred generative adversarial network is designed, which introduces transfer learning and a generative adversarial network. The algorithm consists of a source-domain generator, a target-domain generator and a discriminator; it extracts the main resource allocation features of legitimate users when no covert message is transmitted via transfer learning, transforms the whole covert communication process into an interactive game between the legitimate users and the eavesdropper, alternately trains the target-domain generator and the discriminator in a competitive manner, and achieves the Nash equilibrium to obtain the resource optimization solution for covert communications. Numerical results show that the proposed algorithm can attain a near-optimal resource optimization solution for covert communication and achieve rapid convergence under the assumptions that the channel distribution information is known and the detection threshold of the eavesdropper is unknown.

    Electromagnetic calculation of radio wave propagation in electrically large mountainous terrain environment
    WANG Nan, LIU Junzhi, CHEN Guiqi, ZHAO Yanan, ZHANG Yu
    Journal of Xidian University    2024, 51 (1): 21-28.   DOI: 10.19665/j.issn1001-2400.20230210

    In emerging industries such as unmanned aerial vehicles, signal coverage requirements are high: wireless signal coverage is needed not only in cities but also in inaccessible mountains, deserts and forests to truly achieve remote control, and in these areas the impact of terrain changes on electromagnetic propagation must be considered. The Uniform Geometrical Theory of Diffraction in computational electromagnetics is an effective method for analyzing electromagnetic problems in electrically large environments, and this paper uses computational electromagnetics to study the propagation of electromagnetic waves in mountainous environments. A new method of constructing an irregular terrain model is presented: the available terrain data are generated by a cubic surface algorithm, the irregular terrain is spliced together from multiple cubic surfaces, and the accuracy of the model data is verified by the root mean square error. Based on the topographic data, a parallel 3D geometrical optics algorithm is implemented and the distribution of the regional electromagnetic field is simulated. An actual mountainous terrain environment is selected for field measurement, and the trends of the measurement results and the simulation results are consistent, which verifies the effectiveness of the method for analyzing electromagnetic wave propagation over irregular terrain. Considering the scale of environmental electromagnetic computation, a parallel strategy is established, and the parallel efficiency in a 100-core test is kept above 80%.

    Anti-occlusion PMBM tracking algorithm optimized by fuzzy inference
    LI Cuiyun,HENG Bowen,XIE Jinchi
    Journal of Xidian University    2023, 50 (5): 54-64.   DOI: 10.19665/j.issn1001-2400.20230401

    Target occlusion is a common problem in multiple extended target tracking. When the distance between targets is small or there are unknown obstacles within the scanning range of the sensor, targets are partially or completely occluded, resulting in underestimation of the number of targets. Aiming at the problem that existing Poisson multi-Bernoulli mixture (PMBM) filtering algorithms cannot track stably in occlusion scenarios, this paper proposes a GP-PMBM algorithm incorporating fuzzy inference. First, based on the random set target tracking framework, corresponding extended target occlusion models are given for different occlusion scenarios. On this basis, the state space of the GP-PMBM filter is expanded, and the influence of occlusion on the target state is taken into account in the filtering steps by adding a variable detection probability. Finally, a fuzzy inference system that can estimate the target occlusion probability is constructed and combined with the GP-PMBM algorithm, and accurate estimation of targets in occlusion scenarios is achieved with the help of the descriptive ability of the fuzzy system and the good tracking performance of the PMBM filter. Simulation results show that the tracking performance of the proposed algorithm in target occlusion scenarios is better than that of existing PMBM filtering algorithms.

    Cloth-changing person re-identification paradigm based on domain augmentation and adaptation
    ZHANG Peixu,HU Guanyu,YANG Xinyu
    Journal of Xidian University    2023, 50 (5): 87-94.   DOI: 10.19665/j.issn1001-2400.20221106

    To address the influence of clothing changes on a model's ability to recognize personal identity, a cloth-changing person re-identification paradigm based on domain augmentation and adaptation is proposed, which enables the model to learn generally robust identity representation features across different domains. First, a clothing semantic-aware domain data augmentation method is designed based on the semantic information of the human body, which changes the color of sample clothes without changing the identity of the target person, to compensate for the lack of domain diversity in the data. Second, a multi-positive-class domain adaptive loss function is designed, which assigns differential weights to the multi-positive-class data losses according to the different contributions made by data from different domains during model training, forcing the model to focus on learning the generic identity features of the samples. Experiments demonstrate that the method achieves Rank-1 and mAP of 59.5% and 60.0% on PRCC and 88.0% and 84.5% on CCVID, two cloth-changing datasets, without affecting the accuracy of non-cloth-changing person re-identification. Compared with other methods, this method has a higher accuracy and stronger robustness and significantly improves the model's ability to recognize persons.

    Damage effect and protection design of the p-GaN HEMT induced by the high power electromagnetic pulse
    WANG Lei, CHAI Changchun, ZHAO Tianlong, LI Fuxing, QIN Yingshuo, YANG Yintang
    Journal of Xidian University    2023, 50 (6): 34-43.   DOI: 10.19665/j.issn1001-2400.20230502

    Nowadays, severe electromagnetic environments pose a serious threat to electronic systems. The excellent performance of gallium nitride based high electron mobility transistors makes them well suited to high-power and high-frequency applications. With the continuous improvement of crystal epitaxial material quality and device manufacturing technology, gallium nitride semiconductor devices are rapidly developing towards high power and miniaturization, which challenges the reliability and stability of the devices. In this paper, the damage effects of high power electromagnetic pulses (EMP) on the enhancement-mode GaN high-electron-mobility transistor (HEMT) are investigated in detail. The mechanism is presented by analyzing the variation of the distributions of multiple internal physical quantities in the device. It is revealed that device damage is dominated by different thermal accumulation effects, such as self-heating, avalanche breakdown and hot carrier emission, during the action of the high power EMP. Furthermore, a multi-scale protection design of the GaN HEMT against high power electromagnetic interference (EMI) is presented and verified by simulation. The device structure optimization results demonstrate that a proper passivation layer, which enhances the breakdown characteristics, can improve the anti-EMI capability. The circuit optimization presents the influences of external components on the damage process; it is found that resistive components in series at the source and gate strengthen the capability of the device to withstand high power EMP damage. These conclusions are important for the reliability design of devices using gallium nitride materials, especially when the device operates in severe electromagnetic environments.

    Generative adversarial model for radar intra-pulse signal denoising and recognition
    DU Mingyang, DU Meng, PAN Jifei, BI Daping
    Journal of Xidian University    2023, 50 (6): 133-147.   DOI: 10.19665/j.issn1001-2400.20230312

    While deep neural networks have achieved impressive success in computer vision, related research remains embryonic in radio frequency signal processing, a vital task in modern wireless systems such as electronic reconnaissance. Noise corruption is a harmful but unavoidable factor causing severe performance degradation in signal processing and has persistently been an intractable problem in the radio frequency domain; for example, a classifier trained on high signal-to-noise ratio (SNR) data may suffer severe performance degradation when dealing with low SNR data. To address this problem, in this paper we leverage the powerful data representation capacity of deep learning and propose a Generative Adversarial Denoising and classification Network (GADNet) for radar signal restoration and classification. The proposed GADNet consists of a generator, a discriminator and a classifier fulfilling an end-to-end workflow. The encoder-decoder structured generator is trained to extract high-level features and recover signals; meanwhile, it fools the discriminator's judgment by making the denoising results indistinguishable from the clean data. The classification loss from the classifier is adopted jointly in the training procedure. Extensive experiments demonstrate the benefit of the proposed technique in terms of high-quality restoration and accurate classification for radar signals with intense noise. Moreover, it also exhibits superior transferability in low SNR environments compared with the state-of-the-art methods.

    UAV swarm power allocation strategy for resilient topology construction
    HU Jialin, REN Zhiyuan, LIU Anni, CHENG Wenchi, LIANG Xiaodong, LI Shaobo
    Journal of Xidian University    2024, 51 (2): 28-45.   DOI: 10.19665/j.issn1001-2400.20230314

    A topology construction method with strong toughness is proposed for unmanned combat networks, addressing the network performance degradation and network paralysis caused by failures of the unmanned combat network itself or by enemy attack and interference. The method first takes edge connectivity as the toughness indicator of the network. Second, based on the max-flow min-cut theorem, the minimum cut is used as the measure of this toughness indicator; on this basis, considering the limited power of a single UAV and of the system, the topology is constructed by means of power allocation to improve network toughness from the physical layer perspective, and a power allocation strategy for the unmanned combat network under power constraints is proposed. Finally, the particle swarm optimization (PSO) algorithm is used to solve the topology toughness optimization problem under the power constraint. Simulation results show that, under the same modulation and power constraints, the power allocation scheme based on the PSO algorithm can effectively improve the toughness of the unmanned combat network compared with other power allocation algorithms in both link failure and node failure modes, and that the average successful service arrival rate of the constructed network remains above 95% when about 66.7% of links fail, which meets actual combat requirements.
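
    A small sketch of the toughness indicator itself: edge connectivity equals the minimum cut (by the max-flow min-cut theorem) and can be computed with networkx; a PSO outer loop would perturb the power vector and re-score candidate topologies this way. The path-loss/threshold link model and all constants are assumptions.

```python
"""Sketch: scoring a candidate topology's toughness by its edge connectivity."""
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(7)
pos = rng.uniform(0, 1000, (8, 2))        # assumed UAV positions (m)
power = np.full(8, 1.0)                   # candidate transmit powers (one PSO particle)

def topology(power, gamma=2e-7):
    """Link (i, j) exists if received power over a 1/d^2 path loss clears gamma."""
    g = nx.Graph()
    g.add_nodes_from(range(len(power)))
    for i, j in itertools.combinations(range(len(power)), 2):
        d2 = np.sum((pos[i] - pos[j]) ** 2) + 1e-9
        if power[i] / d2 >= gamma and power[j] / d2 >= gamma:
            g.add_edge(i, j)
    return g

def toughness(power):
    g = topology(power)
    if not nx.is_connected(g):
        return 0                          # a disconnected topology has zero toughness
    return nx.edge_connectivity(g)        # equals the min cut (max-flow min-cut)

print(toughness(power))                   # PSO would maximise this under a power budget
```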

    Research on aviation ad hoc network routing protocols in highly dynamic and complex scenarios
    JIANG Laiwei, CHEN Zheng, YANG Hongyu
    Journal of Xidian University    2024, 51 (1): 72-85.   DOI: 10.19665/j.issn1001-2400.20230313

    With the rapid growth of air transportation, aviation ad hoc network (AANET) communication based on civil aviation aircraft has acquired the capacity for communication network coverage. Finding effective means of transmitting the important data of aircraft nodes in highly dynamic and uncertain complex scenarios, and backing them up safely, has become important for improving the reliability and management capabilities of the air-space-ground integrated network. However, characteristics of the AANET such as highly dynamic changes in network topology, large network span and unstable network links pose severe challenges to the design of AANET protocols, especially routing protocols. To facilitate future research on the design of AANET routing protocols, this paper comprehensively analyzes the relevant requirements of AANET routing protocol design and surveys the existing routing protocols. First, according to the characteristics of the AANET, this paper analyzes the factors, challenges and design principles that need to be considered in the design of routing protocols. Then, according to the design characteristics of existing routing protocols, this paper classifies and analyzes the existing routing protocols of the AANET. Finally, the future research focus of routing protocols for the AANET is analyzed, so as to provide a reference for promoting research on the next generation of the air-space-ground integrated network in China.

    Table and Figures | Reference | Related Articles | Metrics
    Medical data privacy protection scheme supporting controlled sharing
    GUO Qing, TIAN Youliang
    Journal of Xidian University    2024, 51 (1): 165-176.   DOI: 10.19665/j.issn1001-2400.20230104
    Abstract98)   HTML5)    PDF(pc) (1588KB)(58)       Save

    The rational use of patients' medical and health data has promoted the development of medical research institutions. Aiming at the current difficulties in sharing medical data between patients and medical research institutions, the ease with which data privacy can leak, and the uncontrollability of medical data use, a medical data privacy protection scheme supporting controlled sharing is proposed. First, the blockchain and a proxy server are combined to design a controlled-sharing model for medical data, in which blockchain miner nodes construct proxy re-encryption keys in a distributed manner, the proxy server stores and converts medical data ciphertext, and proxy re-encryption is used to share medical data securely while protecting patient privacy. Second, a dynamic user-permission adjustment mechanism is designed, in which the patient and the blockchain authorization management nodes update the access permissions of medical data through an authorization list, so that patients retain control over how their medical data are shared. Finally, security analysis shows that the proposed scheme achieves dynamic sharing of medical data while protecting data privacy, and can also resist collusion attacks. Performance analysis shows that the scheme has advantages in communication and computation overhead, and is suitable for controlled data sharing between patients or hospitals and research institutions.
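    For readers unfamiliar with the underlying primitive, the following toy sketch (an editorial example with made-up, insecure parameters, not the paper's construction) shows textbook BBS98-style ElGamal proxy re-encryption: a proxy holding only a re-encryption key transforms a ciphertext for one party into a ciphertext for another without ever seeing the plaintext.

```python
# Insecure, toy-sized sketch of BBS98-style proxy re-encryption.
# All names and values are hypothetical; real deployments use large groups
# and generate the re-encryption key without exposing either secret key.
import secrets

q = 1019                # prime order of the subgroup
p = 2 * q + 1           # safe prime, p = 2039
g = 4                   # generator of the order-q subgroup of Z_p*

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk, m):                     # m must lie in the order-q subgroup
    k = secrets.randbelow(q - 1) + 1
    return (m * pow(g, k, p)) % p, pow(pk, k, p)        # (c1, c2 = pk^k)

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, q), p)     # recover g^k from c2 = g^(sk*k)
    return (c1 * pow(gk, -1, p)) % p

def rekey(sk_from, sk_to):              # rk = sk_to / sk_from mod q
    return (sk_to * pow(sk_from, -1, q)) % q

def reencrypt(rk, ct):                  # proxy: raise c2 to rk, leave c1 alone
    c1, c2 = ct
    return c1, pow(c2, rk, p)

sk_a, pk_a = keygen()                   # patient
sk_b, pk_b = keygen()                   # research institution
m = pow(g, 42, p)                       # toy "medical record" in the subgroup
ct_a = encrypt(pk_a, m)
ct_b = reencrypt(rekey(sk_a, sk_b), ct_a)
assert decrypt(sk_b, ct_b) == m == decrypt(sk_a, ct_a)
```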

    Table and Figures | Reference | Related Articles | Metrics
    Workflow deployment method based on graph segmentation with communication and computation jointly optimized
    MA Yinghong, LIN Liwan, JIAO Yi, LI Qinyao
    Journal of Xidian University    2024, 51 (2): 13-27.   DOI: 10.19665/j.issn1001-2400.20231206
    Abstract97)   HTML14)    PDF(pc) (3074KB)(95)       Save

    To improve computing efficiency, it has become common practice for cloud data centers to cope with the continuous growth of computing and network tasks by decomposing complex large-scale tasks into simple tasks, modeling them as workflows, and completing them on parallel distributed computing clusters. However, the bandwidth consumed by inter-task transmission can easily cause network congestion in the data center, so it is important to deploy workflows scientifically, taking both computing efficiency and communication overhead into account. There are two typical types of workflow deployment algorithms: list-based and cluster-based. The former focuses on improving computing efficiency but ignores inter-task communication cost, so deploying large-scale workflows easily imposes a heavy network load; the latter focuses on minimizing communication cost but sacrifices the parallel computing efficiency of the tasks in the workflow, resulting in a long completion time. This work fully explores the dependency and parallelism between workflow tasks from the perspective of graph theory. By improving a classic graph partitioning approach, the community discovery algorithm, a balance between minimizing communication cost and maximizing computational parallelism is achieved when partitioning workflow tasks. Simulation results show that, under different workflow scales, the proposed algorithm reduces the communication cost by 35%~50% compared with the typical list-based deployment algorithm, and reduces the workflow completion time by 50%~65% compared with the typical cluster-based deployment algorithm. Moreover, its performance remains stable for workflows with different communication-computation ratios.
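    To illustrate the intuition behind the graph-partitioning step (assumptions only, not the paper's improved algorithm), the sketch below clusters a hypothetical task graph with a classic community-detection routine so that heavily communicating tasks land in the same group.

```python
# Minimal sketch: nodes are workflow tasks, edge weights are inter-task data
# volumes; greedy modularity maximization groups the chatty tasks together.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
G.add_weighted_edges_from([
    ("t1", "t2", 10), ("t1", "t3", 9), ("t2", "t3", 8),   # chatty group A
    ("t4", "t5", 12), ("t5", "t6", 11),                   # chatty group B
    ("t3", "t4", 1),                                      # thin cross link
])

parts = community.greedy_modularity_communities(G, weight="weight")
for i, tasks in enumerate(parts):
    print(f"cluster {i}: {sorted(tasks)}")   # candidate co-deployment groups
```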

    Table and Figures | Reference | Related Articles | Metrics
    Random chunks attachment strategy based secure deduplication for cloud data
    LIN Genghao,ZHOU Ziji,TANG Xin,ZHOU Yiteng,ZHONG Yuqi,QI Tianyang
    Journal of Xidian University    2023, 50 (5): 212-228.   DOI: 10.19665/j.issn1001-2400.20230503
    Abstract96)   HTML9)    PDF(pc) (5198KB)(72)       Save

    Source-based deduplication prevents subsequent users from uploading the same file by returning a deterministic response, which greatly saves network bandwidth and storage overhead. However, the deterministic response inevitably introduces side channel attacks: whenever a subsequent upload is not needed, an attacker can easily learn whether a target file already exists in cloud storage. To resist side channel attacks, various defense schemes have been proposed, such as adding trusted gateways, setting trigger thresholds, and obfuscating response values. However, these methods suffer from high deployment costs, high startup costs, and difficulty in resisting the random chunks generation attack and the learn-remaining-information attack. We therefore propose a novel secure deduplication scheme that uses a random chunks attachment strategy to obfuscate the response. Specifically, we first append a certain number of chunks whose existence status is unknown to the end of the request, blurring the existence status of the originally requested chunks, and then reduce the probability of the response returning a lower boundary value through a scrambling strategy. Finally, the deduplication response is generated with the help of a newly designed response table. Security analysis and experimental results show that, compared with existing work, our scheme significantly improves security at the expense of only a little extra overhead.
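    The sketch below is an editorial illustration of the general idea of padding and scrambling a deduplication query; it is not the paper's protocol, and the random fingerprints used here merely stand in for the attached chunks whose existence status is unknown.

```python
# Hypothetical client-side query obfuscation: append decoy chunk fingerprints
# and shuffle the list so per-chunk responses cannot be mapped back by position.
import hashlib
import secrets

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def build_obfuscated_query(real_chunks, n_decoys=4):
    fps = [fingerprint(c) for c in real_chunks]
    decoys = [fingerprint(secrets.token_bytes(32)) for _ in range(n_decoys)]
    query = fps + decoys
    secrets.SystemRandom().shuffle(query)       # scrambling step
    return query, set(fps)                      # client remembers its own fps

query, mine = build_obfuscated_query([b"chunk-A", b"chunk-B"])
print(len(query), "fingerprints sent;", len(mine), "are real")
```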

    Table and Figures | Reference | Related Articles | Metrics
    Multi-scale convolutional attention network for radar behavior recognition
    XIONG Jingwei, PAN Jifei, BI Daping, DU Mingyang
    Journal of Xidian University    2023, 50 (6): 62-74.   DOI: 10.19665/j.issn1001-2400.20231005
    Abstract94)   HTML10)    PDF(pc) (8738KB)(76)       Save

    A radar behavior mode recognition framework based on depth-wise convolution, multi-scale convolution, and the self-attention mechanism is proposed to address the difficulty of feature extraction and the low recognition stability of radar signals under a low signal-to-noise ratio. It improves recognition ability in complex environments without increasing the difficulty of training. The algorithm employs depth-wise convolution to segregate weakly correlated channels in the shallow network, then uses multi-scale convolution instead of conventional convolution for multi-dimensional feature extraction, and finally employs a self-attention mechanism to adjust and optimize the weights of different feature maps, suppressing the influence of low and negative correlations in both the channel and spatial domains. Comparative experiments demonstrate that the proposed MSCANet achieves an average recognition rate of 92.25% under conditions of 0~50% missing pulses and false pulses. Compared to baseline networks such as AlexNet, ConvNet, ResNet, and VGGNet, accuracy is improved by 5% to 20%. The model recognizes various radar patterns stably and demonstrates enhanced generalization and robustness. Ablation experiments further confirm the effectiveness of deep grouped convolution, multi-scale convolution, and the self-attention mechanism for radar behavior recognition.
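    The PyTorch sketch below assembles the three ingredients the abstract names (depth-wise convolution, multi-scale kernels, channel re-weighting) into one block; the layer sizes and arrangement are editorial assumptions, not the published MSCANet.

```python
# Rough sketch of a multi-scale depth-wise block with SE-style channel attention.
import torch
import torch.nn as nn

class MultiScaleDWBlock(nn.Module):
    def __init__(self, channels: int, scales=(3, 5, 7)):
        super().__init__()
        # one depth-wise (grouped) branch per kernel size
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
            for k in scales
        )
        self.fuse = nn.Conv1d(channels * len(scales), channels, 1)
        # squeeze-and-excitation style re-weighting of the fused channels
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv1d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, pulses)
        y = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return y * self.att(y)                  # damp weakly relevant channels

feats = MultiScaleDWBlock(32)(torch.randn(8, 32, 128))
print(feats.shape)                              # torch.Size([8, 32, 128])
```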

    Table and Figures | Reference | Related Articles | Metrics
    Improved short-signature based cloud data audit scheme
    CUI Yuanyou,WANG Xu’an,LANG Xun,TU Zheng,SU Yunxuan
    Journal of Xidian University    2023, 50 (5): 132-141.   DOI: 10.19665/j.issn1001-2400.20230107
    Abstract92)   HTML6)    PDF(pc) (1741KB)(59)       Save

    With the development of the Internet of Things, cloud storage has experienced explosive growth, and effectively verifying the integrity of data stored with cloud storage service providers (CSP) has become an important issue. To address the inefficiency of existing data integrity audit schemes based on the BLS short signature, ZHU et al. designed a data integrity audit scheme based on the ZSS short signature in 2019. However, this paper points out that the proof generated in the challenge phase of ZHU et al.'s scheme is incorrect and can be subjected to replay attacks, or attacked using a bilinear map, so as to pass the audit of the third party auditor (TPA). This paper then proposes an improved short-signature based cloud audit scheme by revising the calculation of the proof in the challenge stage and optimizing the equations the third party auditor uses to verify the proof in the verification stage. The correctness of the improved scheme is proved, the shortcomings of the original scheme are remedied, and its security is analyzed. The improved scheme not only prevents attackers, including the third party auditor, from recovering users' data, but also resists replay attacks and forgery attacks by attackers including malicious cloud storage service providers. Numerical analysis shows that the computational cost changes little while the communication cost decreases, so the improved scheme performs better than the original one.

    Table and Figures | Reference | Related Articles | Metrics
    COLLATE:towards the integrity of control-related data
    DENG Yingchuan,ZHANG Tong,LIU Weijie,WANG Lina
    Journal of Xidian University    2023, 50 (5): 199-211.   DOI: 10.19665/j.issn1001-2400.20230106
    Abstract92)   HTML5)    PDF(pc) (3156KB)(58)       Save

    Programs written in C/C++ may contain bugs that can be exploited to subvert the control flow. Existing control-flow hijacking mitigations validate the targets of indirect control-flow transfers or guarantee the integrity of code pointers. However, attackers can still overwrite the dependencies of function pointers, bending indirect control-flow transfers (ICTs) to valid but unexpected targets. We introduce control-related data integrity (COLLATE) to guarantee the integrity of function pointers and their dependencies. The dependencies determine the potential data flow between function pointer definitions and ICTs. COLLATE identifies function pointers and collects their dependencies with inter-procedural static taint analysis. Moreover, COLLATE allocates control-related data on a hardware-protected memory domain MS to prevent unauthorized modifications. We evaluate the overhead of COLLATE on the SPEC CPU 2006 benchmarks and Nginx, and evaluate its effectiveness on three real-world exploits and one test suite for vtable pointer overwrites. The evaluation results show that COLLATE successfully detects all attacks and introduces a 10.2% performance overhead on average for the C/C++ benchmarks and 6.8% for Nginx, which is acceptable. The experiments prove that COLLATE is effective and practical.

    Table and Figures | Reference | Related Articles | Metrics
    Highly dynamic multi-channel TDMA scheduling algorithm for the UAV ad hoc network in post-disaster
    SUN Yanjing, LI Lin, WANG Bowen, LI Song
    Journal of Xidian University    2024, 51 (2): 56-67.   DOI: 10.19665/j.issn1001-2400.20230414
    Abstract91)   HTML5)    PDF(pc) (1608KB)(75)       Save

    Extreme emergencies, mainly natural disasters and accidents, pose serious challenges to the rapid reorganization of emergency communication networks and the real-time transmission of disaster information, so it is urgent to build an emergency communication network with rapid response capabilities and on-demand dynamic adjustment. To achieve real-time transmission of disaster information under the extreme "three interruptions" conditions of power failure, circuit interruption, and network disconnection, a flying ad hoc network (FANET) formed by many unmanned aerial vehicles can provide communication coverage over the disaster-stricken area. Aiming at the channel collision problem caused by unreasonable scheduling of FANET communication resources in the constrained, complex post-disaster environment, this paper proposes a multi-channel time division multiple access (TDMA) scheduling algorithm based on adaptive Q-learning. According to the link interference relationships between UAVs, a vertex interference graph is established and, combined with graph coloring theory, the multi-channel TDMA scheduling problem is abstracted into a dynamic double coloring problem in highly dynamic scenarios. Considering the high-speed mobility of UAVs, the learning factor of Q-learning is adaptively adjusted according to changes in the network topology, achieving a trade-off between the convergence speed of the algorithm and its ability to explore the optimal solution. Simulation experiments show that the proposed algorithm balances network communication conflicts against convergence speed, and can solve the problems of resource allocation decisions and adaptation to fast-changing topologies in post-disaster, highly dynamic scenarios.
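    As a minimal sketch of the adaptive-learning-factor idea (editorial assumptions throughout, including the hypothetical `topology_change_rate` signal), the snippet below shows a tabular Q-learning update whose step size grows when the topology changes faster, with (slot, channel) pairs playing the role of actions.

```python
# Toy adaptive Q-learning update for slot/channel selection; not the paper's algorithm.
import random
from collections import defaultdict

Q = defaultdict(float)                  # Q[(state, action)]
GAMMA = 0.9

def adaptive_alpha(topology_change_rate, lo=0.1, hi=0.8):
    # faster-changing topology -> larger learning factor, forget stale slots sooner
    return lo + (hi - lo) * topology_change_rate

def q_update(state, action, reward, next_state, actions, change_rate):
    alpha = adaptive_alpha(change_rate)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + GAMMA * best_next - Q[(state, action)])

# one toy step: a UAV in state "s0" picks (slot, channel) and gets +1 for no collision
actions = [(slot, ch) for slot in range(4) for ch in range(2)]
a = random.choice(actions)
q_update("s0", a, reward=1.0, next_state="s1", actions=actions, change_rate=0.6)
print(Q[("s0", a)])
```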

    Table and Figures | Reference | Related Articles | Metrics
    Adaptive density peak clustering algorithm
    ZHANG Qiang, ZHOU Shuisheng, ZHANG Ying
    Journal of Xidian University    2024, 51 (2): 170-181.   DOI: 10.19665/j.issn1001-2400.20230604
    Abstract88)   HTML4)    PDF(pc) (3821KB)(52)       Save

    Density Peak Clustering (DPC) is widely used in many fields because of its simplicity and efficiency. However, it has two disadvantages: ① for data sets with uneven cluster density and imbalanced clusters, it is difficult to identify the real cluster centers in the decision graph provided by DPC; ② there is a "chain effect" in which the misallocation of the highest-density point in a region causes all points in that region to be assigned to the same false cluster. To address these two deficiencies, the concept of the Natural Neighbor (NaN) is introduced, and a density peak clustering algorithm based on natural neighbors (DPC-NaN) is proposed. It uses the new natural-neighborhood density to identify noise points, selects the initial pre-clustering center points, and allocates the non-noise points according to the density peak method to obtain a pre-clustering. By determining the boundary points and merging radius of the pre-clustering, its results can be adaptively merged into the final clustering. The proposed algorithm eliminates the need for manual parameter presetting and alleviates the "chain effect" problem. Experimental results show that, compared with related clustering algorithms, the proposed algorithm obtains better clustering results on typical data sets and performs well in image segmentation.
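    For context, the numpy sketch below computes the two quantities classic DPC is built on (local density rho and distance-to-denser-point delta) on synthetic data; the natural-neighbor density and adaptive merging of DPC-NaN are not reproduced here.

```python
# Classic DPC decision quantities on a toy two-cluster data set.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
dc = np.quantile(D[D > 0], 0.02)                 # cutoff-distance heuristic
rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1     # Gaussian-kernel local density

delta = np.empty(len(X))
for i in range(len(X)):
    denser = np.where(rho > rho[i])[0]           # points with higher density
    delta[i] = D[i].max() if denser.size == 0 else D[i, denser].min()

centers = np.argsort(rho * delta)[-2:]           # decision value gamma = rho * delta
print("candidate cluster centers:", centers)
```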

    Table and Figures | Reference | Related Articles | Metrics
    Superimposed pilots transmission for unsourced random access
    HAO Mengnan, LI Ying, SONG Guanghui
    Journal of Xidian University    2024, 51 (3): 1-8.   DOI: 10.19665/j.issn1001-2400.20230907
    Abstract87)   HTML24)    PDF(pc) (856KB)(132)       Save

    In unsourced random access, the base station (BS) only needs to recover the messages sent by the active devices without identifying them, which allows a large number of active devices to access the BS at any time without requesting resources in advance, greatly reducing the signaling overhead and transmission delay; this has attracted the attention of many researchers. Currently, many works are devoted to designing random access schemes based on preamble sequences. However, these schemes are not robust when the number of active devices changes and cannot make full use of the channel bandwidth, resulting in poor performance when the number of active devices is large. To address this problem, a superimposed pilots transmission scheme is proposed to improve channel utilization, and the performance for different numbers of active devices is further improved by optimal power allocation, giving the system good robustness as the number of active devices changes. In this scheme, the first Bp bits of the message sequence are used as an index to select a pilot sequence and interleaver pair. The message sequence is then encoded, modulated, and interleaved with the selected interleaver, and the selected pilot sequence is superimposed on the interleaved modulated sequence to obtain the transmitted signal. For this transmission scheme, a power optimization method based on minimizing the error probability is proposed to obtain the optimal power allocation ratio for different numbers of active devices, and a two-stage detection scheme consisting of superimposed pilot detection and cancellation followed by multi-user detection and decoding is designed. Simulation results show that the superimposed pilots transmission scheme improves the performance of preamble-sequence based unsourced random access schemes by about 1.6~2.0 dB and 0.2~0.5 dB respectively, flexibly accommodates changes in the number of active devices the system carries, and has lower decoding complexity.
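    The numpy sketch below illustrates only the transmit-side idea (index bits select a pilot/interleaver pair, then the pilot is superimposed on the modulated data with a power split); channel coding, detection, and the optimization of the power ratio are omitted, and all sizes are editorial choices.

```python
# Toy superimposed-pilots transmitter; rho is the fraction of power given to the pilot.
import numpy as np

rng = np.random.default_rng(1)
N, BP = 256, 4                                   # block length, number of index bits
pilots = rng.choice([-1.0, 1.0], size=(2 ** BP, N))       # pilot codebook
interleavers = [rng.permutation(N) for _ in range(2 ** BP)]

def transmit(bits, rho=0.3):
    idx = int("".join(map(str, bits[:BP])), 2)   # first Bp bits pick pilot/interleaver
    data = 1.0 - 2.0 * np.array(bits[BP:BP + N]) # BPSK-modulate the payload bits
    data = data[interleavers[idx]]               # interleave with the selected pattern
    return np.sqrt(rho) * pilots[idx] + np.sqrt(1 - rho) * data

msg = rng.integers(0, 2, BP + N)
x = transmit(list(msg))
print(x.shape, np.round(np.mean(x ** 2), 3))     # roughly unit average power
```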

    Table and Figures | Reference | Related Articles | Metrics
    Drone identification based on the normalized cyclic prefix correlation spectrum
    ZHANG Hanshuo, LI Tao, LI Yongzhao, WEN Zhijin
    Journal of Xidian University    2024, 51 (2): 68-75.   DOI: 10.19665/j.issn1001-2400.20230704
    Abstract84)   HTML4)    PDF(pc) (1621KB)(61)       Save

    Radio-frequency (RF) based drone identification has the advantages of long detection distance and low dependence on the environment, so it has become an indispensable approach to monitoring drones. How to identify a drone effectively in the low signal-to-noise ratio (SNR) regime is a hot topic in current research. To ensure good video transmission quality, drones commonly adopt orthogonal frequency division multiplexing (OFDM) with a cyclic prefix (CP) as the modulation of their video transmission links. Based on this property, we propose a drone identification algorithm based on a convolutional neural network (CNN) and the normalized CP correlation spectrum. Specifically, we first analyze the OFDM symbol durations and CP durations of drone signals, and on that basis calculate the normalized CP correlation spectrum. When the modulation parameters of a drone signal match those used to compute the spectrum, several correlation peaks appear in it; the positions of these peaks reflect protocol characteristics of the drone signal, such as the frame structure and burst rules. Finally, a CNN is trained to extract these characteristics from the normalized CP correlation spectrum and identify the drone. In this work, a universal software radio peripheral (USRP) X310 is used to collect the RF signals of five drones to construct the experimental dataset. Experimental results show that the proposed algorithm performs better than spectrum-based and spectrogram-based algorithms and remains effective at low SNRs.
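    The following numpy sketch computes a normalized CP correlation curve for a synthetic OFDM burst so the peak structure is visible; FFT size, CP length and noise level are arbitrary, and the CNN classification stage of the paper is not included.

```python
# Normalized cyclic-prefix correlation for a toy OFDM burst.
import numpy as np

rng = np.random.default_rng(2)
NFFT, NCP, NSYM = 64, 16, 20
sym_len = NFFT + NCP

# build the burst: random QPSK subcarriers, IFFT, prepend the cyclic prefix
syms = []
for _ in range(NSYM):
    freq = (rng.choice([-1, 1], NFFT) + 1j * rng.choice([-1, 1], NFFT)) / np.sqrt(2)
    t = np.fft.ifft(freq) * np.sqrt(NFFT)
    syms.append(np.concatenate([t[-NCP:], t]))
x = np.concatenate(syms)
x += (rng.normal(size=x.size) + 1j * rng.normal(size=x.size)) * 0.2   # additive noise

# compare each CP-length window with the window NFFT samples later
corr = np.empty(x.size - sym_len)
for t0 in range(corr.size):
    a, b = x[t0:t0 + NCP], x[t0 + NFFT:t0 + NFFT + NCP]
    corr[t0] = np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

peaks = np.argsort(corr)[-NSYM:]                 # strongest NSYM positions
print("peak positions mod symbol length:", sorted(set(peaks % sym_len)))
```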

    Table and Figures | Reference | Related Articles | Metrics
    Document image forgery localization and desensitization localization using the attention mechanism
    ZHENG Kengtao, LI Bin, ZENG Jinhua
    Journal of Xidian University    2023, 50 (6): 207-218.   DOI: 10.19665/j.issn1001-2400.20230105
    Abstract83)   HTML8)    PDF(pc) (6344KB)(47)       Save

    Some important documents such as contracts,certificates and notifications are often stored and disseminated in a digital format.However,due to the inclusion of key text information,such images are often easily illegally tampered with and used,causing serious social impact and harm.Meanwhile,taking personal privacy and security into account,people also tend to remove sensitive information from these digital documents.Malicious tampering and desensitization can both introduce extra traces to the original images,but there are differences in motivation and operations.Therefore,it is necessary to differentiate them to locate the tamper areas more accurately.To address this issue,we propose a convolutional encoder-decoder network,which has multi-level features of the encoder through U-Net connection,effectively learning tampering and desensitization traces.At the same time,several Squeeze-and-Excitation attention mechanism modules are introduced in the decoder to suppress image content and focus on weaker operation traces,to improve the detection ability of the network.To effectively assist network training,we build a document image forensics dataset containing common tampering and desensitization operations.Experimental results show that our model performs effectively both on this dataset and on the public tamper datasets,and outperforms comparison algorithms.At the same time,the proposed method is robust to several common post-processing operations.

    Table and Figures | Reference | Related Articles | Metrics
    Encrypted deduplication scheme with access control and key updates
    HA Guanxiong, JIA Qiaowen, CHEN Hang, JIA Chunfu, LIU Lanqing
    Journal of Xidian University    2023, 50 (6): 195-206.   DOI: 10.19665/j.issn1001-2400.20230306
    Abstract82)   HTML7)    PDF(pc) (2150KB)(53)       Save

    In the scenario of data outsourcing,access control and key update have an important application value.However,it is hard for existing encrypted deduplication schemes to provide flexible and effective access control and key update for outsourcing user data.To solve this problem,an encrypted deduplication scheme with access control and key updates is proposed.First,an efficient access control scheme for encrypted deduplication is designed based on the ciphertext-policy attribute-based encryption and the proof of ownership.It combines access control with proof of ownership and can simultaneously detect whether a client has the correct access right and whole data content only through a round of interaction between the client and the cloud server,effectively preventing unauthorized access and ownership fraud attacks launched by adversaries.The scheme has features such as low computation overhead and few communication rounds.Second,by combining the design ideas of server-aided encryption and random convergent encryption,an updatable encryption scheme suitable for encrypted deduplication is designed.It is combined with the proposed access control scheme to achieve hierarchical and user-transparent key updates.The results of security analysis and performance evaluation show that the proposed scheme can provide confidentiality and integrity for outsourcing user data while achieving efficient data encryption,decryption,and key update.
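    For background only, the sketch below shows plain convergent (message-locked) encryption, the baseline primitive that encrypted deduplication schemes such as this one extend with access control, proof of ownership and key updates; it is not the proposed scheme and, used alone, is known to leak plaintext equality.

```python
# Convergent encryption: identical plaintexts yield identical ciphertexts,
# so the cloud can deduplicate without learning the content.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()          # key derived from the content
    nonce = hashlib.sha256(key).digest()[:12]    # deterministic nonce per content
    ct = AESGCM(key).encrypt(nonce, data, None)
    tag = hashlib.sha256(ct).hexdigest()         # deduplication fingerprint
    return key, ct, tag

k1, c1, t1 = convergent_encrypt(b"same medical scan")
k2, c2, t2 = convergent_encrypt(b"same medical scan")
assert c1 == c2 and t1 == t2                     # identical data -> identical ciphertext
print("dedup tag:", t1[:16], "...")
```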

    Table and Figures | Reference | Related Articles | Metrics
    Several classes of cryptographic Boolean functions with high nonlinearity
    LIU Huan, WU Gaofei
    Journal of Xidian University    2023, 50 (6): 237-250.   DOI: 10.19665/j.issn1001-2400.20230416
    Abstract81)   HTML15)    PDF(pc) (882KB)(54)       Save

    Boolean functions have important applications in cryptography. Bent functions, the Boolean functions with maximum nonlinearity, have long been a hot research topic in symmetric cryptography. From the spectral perspective, bent functions have a flat spectrum under the Walsh-Hadamard transform. Negabent functions are a class of generalized bent functions with a uniform spectrum under the nega-Hadamard transform, and a generalized negabent function is one with a uniform spectrum under the generalized nega-Hadamard transform. Bent functions have been extensively studied since their introduction in 1976, but there has been comparatively little research on negabent and generalized negabent functions. In this paper, the properties of generalized negabent functions and generalized bent-negabent functions are analyzed, and several classes of generalized negabent functions, generalized bent-negabent functions, and generalized semibent-negabent functions are constructed. First, by analyzing the link between the nega-crosscorrelation of generalized Boolean functions and the generalized nega-Hadamard transform, a criterion for generalized negabent functions is presented, and a class of generalized negabent functions is constructed based on this criterion. Second, two classes of generalized negabent functions of the form f(x)=c1f1(x(1))+c2f2(x(2))+…+crfr(x(r)) are constructed using the direct sum construction. Finally, generalized bent-negabent functions and generalized semibent-negabent functions over Z8 are obtained using the direct sum construction. The new construction methods for generalized negabent functions given in this paper enrich the results on negabent functions.
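    As a small numerical illustration of the two spectra mentioned above, the sketch below checks flatness of the Walsh-Hadamard spectrum (bentness) and of the nega-Hadamard spectrum (negabentness) for ordinary Boolean functions on two variables; the generalized, Z8-valued setting of the paper is not reproduced.

```python
# Flat Walsh-Hadamard spectrum <=> bent; flat nega-Hadamard spectrum <=> negabent.
import itertools
import numpy as np

def spectra(f, n):
    pts = list(itertools.product((0, 1), repeat=n))
    wh, nh = [], []
    for u in pts:
        ux = lambda x: sum(a * b for a, b in zip(u, x)) % 2
        wh.append(abs(sum((-1) ** (f(x) ^ ux(x)) for x in pts)))
        nh.append(abs(sum((-1) ** (f(x) ^ ux(x)) * 1j ** sum(x) for x in pts)))
    flat = 2 ** (n / 2)
    return np.allclose(wh, flat), np.allclose(nh, flat)

# expected: x1*x2 is bent but not negabent; the affine x1^x2 is negabent but not bent
print("x1*x2 -> (bent, negabent):", spectra(lambda x: x[0] & x[1], 2))
print("x1^x2 -> (bent, negabent):", spectra(lambda x: x[0] ^ x[1], 2))
```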

    Table and Figures | Reference | Related Articles | Metrics
    Point set registration optimization algorithm using spatial clustering and structural features
    HU Xin,XIANG Diyuan,QIN Hao,XIAO Jian
    Journal of Xidian University    2023, 50 (5): 95-106.   DOI: 10.19665/j.issn1001-2400.20230411
    Abstract81)   HTML7)    PDF(pc) (6289KB)(66)       Save

    Noise, non-rigid deformation, and mismatches in point set registration make it difficult to solve for the nonlinear optimal spatial transformation. This paper introduces local constraints and proposes a point set registration optimization algorithm using spatial distance clustering and local structural features (PR-SDCLS). First, a motion-consistency cluster subset and an outlier cluster subset are constructed using the point-set spatial distance matrix. Then, a Gaussian mixture model is used to fit the motion-consistency cluster subset, and a mixing coefficient considering both global and local features is obtained by fusing the shape context feature descriptor with weighted spatial distances. Finally, the expectation-maximization algorithm is used to complete parameter estimation, realizing a non-rigid point set registration model based on the Gaussian mixture model. To improve efficiency, the transformation is modeled in a reproducing kernel Hilbert space and a kernel approximation strategy is used. Experimental results show that the algorithm achieves good registration and robustness in the presence of a large number of outliers on non-rigid data sets involving different types of data degradation (deformation, noise, outliers, occlusion and rotation), reducing the mean of the average registration error by 42.0538% relative to classic and state-of-the-art algorithms.

    Table and Figures | Reference | Related Articles | Metrics
    Improved double deep Q network algorithm for service function chain deployment
    LIU Daohua, WEI Dinger, XUAN Hejun, YU Changming, KOU Libo
    Journal of Xidian University    2024, 51 (1): 52-59.   DOI: 10.19665/j.issn1001-2400.20230310
    Abstract81)   HTML3)    PDF(pc) (869KB)(57)       Save

    Network Function Virtualization (NFV) has become a key technology of next-generation communication, and Virtual Network Function Service Chain (VNF-SC) mapping is a key issue in NFV. To reduce the energy consumption of communication network servers and improve the quality of service, a Service Function Chain (SFC) deployment algorithm based on an improved Double Deep Q Network (DDQN) is proposed. Because the network state changes dynamically, the service function chain deployment problem is modeled as a Markov Decision Process (MDP). Based on the network state and action rewards, the DDQN is trained online to obtain the optimal deployment strategy for the service function chain. To solve the problem that traditional deep reinforcement learning draws experience samples uniformly from the experience replay pool, which leads to low learning efficiency of the neural network, a prioritized experience replay method based on importance sampling is designed to draw experience samples, avoiding high correlation between training samples and improving learning efficiency. Experimental results show that the proposed SFC deployment algorithm based on the improved DDQN increases the reward value and, compared with the traditional DDQN algorithm, reduces the energy consumption and blocking rate by 19.89%~36.99% and 9.52%~16.37%, respectively.
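    The sketch below shows generic proportional prioritized experience replay with importance-sampling weights, the mechanism the abstract refers to; buffer size, alpha and beta are arbitrary, and the DDQN training loop itself is omitted.

```python
# Proportional prioritized replay: sample transitions with probability ~ |TD error|^alpha,
# and correct the induced bias with importance-sampling weights.
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity=1000, alpha=0.6, eps=1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.prios = [], []

    def push(self, transition, td_error):
        p = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prios.pop(0)
        self.data.append(transition); self.prios.append(p)

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.prios) / sum(self.prios)
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        w = (len(self.data) * probs[idx]) ** (-beta)   # importance-sampling weights
        return [self.data[i] for i in idx], w / w.max(), idx

buf = PrioritizedReplay()
for _ in range(64):
    buf.push(("s", "a", 1.0, "s'"), td_error=np.random.randn())
batch, weights, idx = buf.sample(8)
print(len(batch), weights.round(2))
```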

    Table and Figures | Reference | Related Articles | Metrics
    Self-supervised contrastive representation learning for semantic segmentation
    LIU Bochong, CAI Huaiyu, WANG Yi, CHEN Xiaodong
    Journal of Xidian University    2024, 51 (1): 125-134.   DOI: 10.19665/j.issn1001-2400.20230304
    Abstract80)   HTML3)    PDF(pc) (2895KB)(63)       Save

    To improve the accuracy of the semantic segmentation models and avoid the labor and time costs of pixel-wise image annotation for large-scale semantic segmentation datasets,this paper studies the pre-training methods of self-supervised contrastive representation learning,and designs the Global-Local Cross Contrastive Learning(GLCCL) method based on the characteristics of the semantic segmentation task.This method feeds global images and a series of image patches after local chunking into the network to extract global and local visual representations respectively,and guides the network training by constructing loss function that includes global contrast,local contrast,and global-local cross contrast,enabling the network to learn both global and local visual representations as well as cross-regional semantic correlations.When using this method to pre-train BiSeNet and transfer to the semantic segmentation task,compared with the existing self-supervised contrastive representational learning and supervised pre-training methods,the performance improvement of 0.24% and 0.9% mean intersection over union(MIoU) is achieved.Experimental results show that this method can improve the segmentation results by pre-training the semantic segmentation model with unlabeled data,which has a certain practical value.
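    To make the contrastive objective concrete, the PyTorch sketch below implements a single InfoNCE-style term between global and (pooled) local embeddings; the full GLCCL loss combines global, local and global-local cross terms, and the embeddings here are random placeholders.

```python
# One InfoNCE-style global-local cross-contrast term.
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, tau=0.1):
    """Treat the i-th row of z_b as the positive for the i-th row of z_a."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau                    # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, targets)

B, D = 16, 128
global_emb = torch.randn(B, D)                      # embedding of the full image
local_emb = torch.randn(B, D)                       # pooled embedding of its patches
loss = info_nce(global_emb, local_emb)              # global-local cross term
print(float(loss))
```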

    Table and Figures | Reference | Related Articles | Metrics
    Verifiable traceable electronic license sharing deposit scheme
    WANG Lindong,TIAN Youliang,YANG Kedi,XIAO Man,XIONG Jinbo
    Journal of Xidian University    2023, 50 (5): 142-155.   DOI: 10.19665/j.issn1001-2400.20230408
    Abstract79)   HTML8)    PDF(pc) (4017KB)(65)       Save

    Verifiability and traceability are important challenges to the sharing and retention of electronic licenses.Traditional methods only ensure the verifiability of the issuer through electronic signature technology,but the verifiability of the holder and the depositor and the traceability of the license leakage are difficult to guarantee.Therefore,a verifiable and traceable electronic license sharing deposit scheme is proposed.First,aiming at the problem of unauthorized use of electronic licenses and the inability to trace after leakage,a model of the electronic license sharing and deposit system is constructed.Second,aiming at the problem of watermark information loss in the traditional strong robust watermarking algorithm,the existing strong robust watermarking algorithm is improved based on the BCH code,so as to realize the error correction of watermark information distortion.Finally,in order to realize the verifiability of the issuer,the holder and the depositor as well as the efficient traceability after the leakage of the electronic license,the verifiable and traceable electronic license model is constructed by combining the proposed robust watermark and reversible information hiding technology,on the basis of which the electronic license sharing and deposit protocol is designed to ensure the real authorized use of the license and the efficient traceability after the leakage.The analysis of security and efficiency shows that this scheme can achieve an efficient traceability after license leakage and has a good anti-collusion attack detection ability under the premise of ensuring the verifiability of the three parties,and that its execution time consumption is low enough to meet the needs of practical applications.

    Table and Figures | Reference | Related Articles | Metrics
    Precision jamming waveform design method for spatial and frequency domain joint optimization
    WANG Jing, ZHANG Kedi, ZHANG Jianyun, ZHOU Qinsong, WU Minglin, LI Zhihui
    Journal of Xidian University    2023, 50 (6): 93-104.   DOI: 10.19665/j.issn1001-2400.20230805
    Abstract78)   HTML4)    PDF(pc) (3199KB)(56)       Save

    Precision jamming technology is one of the hot research directions in current new electronic warfare.To solve the problem of accurately adjusting the spatial and frequency domain distribution characteristics of jamming power,a precision jamming waveform design method based on the alternating multiplier method is proposed.First,a mathematical model for the optimization problem of designing constant-modulus precision jamming waveforms under the joint optimization objective of spatial and frequency domain characteristics is presented.Furthermore,by introducing variables,the non-convex quartic term contained in the original objective function is transformed into a quadratic term,enabling the optimization problem to be solved by the alternating direction method of multipliers.Based on theoretical derivation,the optimal closed form solution for each iteration of the alternating direction method of multipliers is obtained,thereby reducing the computational complexity of the algorithm.Simulation experiments show that compared to existing precision jamming waveform design methods that only optimize the spatial jamming power distribution,the precision jamming waveform designed by this algorithm has better power spectrum distribution characteristics in the synthesized signal within the preset jamming area,which is more in line with the requirements of actual jamming tasks;meanwhile,compared to existing joint optimization algorithms in the spatial and frequency domains,the algorithm proposed in this paper considers the waveform constant-modulus constraint,which is more in line with the needs of engineering implementation.Moreover,the proposed algorithm is lower in computational complexity and it can further reduce the computational time through parallel computing.
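    As a minimal illustration of one ingredient only, the sketch below applies the projection onto the constant-modulus constraint set that alternating-optimization waveform designs of this kind typically use inside each iteration; the spatial/frequency objective and the ADMM updates themselves are not shown.

```python
# Projection of an unconstrained waveform update onto the unit-modulus set:
# keep each sample's phase, force its magnitude to a fixed amplitude.
import numpy as np

def project_constant_modulus(w, amplitude=1.0):
    return amplitude * np.exp(1j * np.angle(w))

w = np.random.randn(8) + 1j * np.random.randn(8)   # hypothetical intermediate update
w_cm = project_constant_modulus(w)
print(np.allclose(np.abs(w_cm), 1.0))              # True: constant modulus holds
```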

    Table and Figures | Reference | Related Articles | Metrics
    Real-time smoke segmentation algorithm combining global and local information
    ZHANG Xinyu, LIANG Yu, ZHANG Wei
    Journal of Xidian University    2024, 51 (1): 147-156.   DOI: 10.19665/j.issn1001-2400.20230405
    Abstract76)   HTML3)    PDF(pc) (1887KB)(71)       Save

    Smoke segmentation is challenging because smoke is irregular and translucent and its boundary is fuzzy. A dual-branch real-time smoke segmentation algorithm based on global and local information is proposed to solve this problem. In this algorithm, a lightweight Transformer branch and a convolutional neural network branch are designed to extract the global and local features of smoke respectively, so that the long-distance pixel dependencies of smoke are fully learned while its details are preserved; this allows smoke and background to be distinguished accurately, improves segmentation accuracy, and satisfies the real-time requirement of practical smoke detection tasks. The multilayer perceptron decoder makes full use of multi-scale smoke features and further models the global context of smoke, enhancing the perception of smoke at multiple scales and thus improving segmentation accuracy, while its simple structure reduces the decoder's computation. The algorithm reaches 92.88% mean intersection over union on a self-built smoke segmentation dataset with 2.96M parameters at a speed of 56.94 frames per second, and its overall performance on a public dataset is better than that of other smoke detection algorithms. Experimental results show that the algorithm has high accuracy and fast inference, meeting the accuracy and real-time requirements of practical smoke detection tasks.

    Table and Figures | Reference | Related Articles | Metrics
    Deduplication scheme with data popularity for cloud storage
    HE Xinfeng, YANG Qinqin
    Journal of Xidian University    2024, 51 (1): 187-200.   DOI: 10.19665/j.issn1001-2400.20230205
    Abstract75)   HTML6)    PDF(pc) (2040KB)(56)       Save

    With the development of cloud computing,more enterprises and individuals tend to outsource their data to cloud storage providers to relieve the local storage pressure,and the cloud storage pressure is becoming an increasingly prominent issue.To improve the storage efficiency and reduce the communication cost,data deduplication technology has been widely used.There are identical data deduplication based on the hash table and similar data deduplication based on the bloom filter,but both of them rarely consider the impact of data popularity.In fact,the data outsourced to the cloud storage can be divided into popular and unpopular data according to their popularity.Popular data refer to the data which are frequently accessed,and there are numerous duplicate copies and similar data in the cloud,so high-accuracy deduplication is required.Unpopular data,which are rarely accessed,have fewer duplicate copies and similar data in the cloud,and low-accuracy deduplication can meet the demand.In order to address this problem,a novel bloom filter variant named PDBF(popularity dynamic bloom filter) is proposed,which incorporates data popularity into the bloom filter.Moreover,a PDBF-based deduplication scheme is constructed to perform different degrees of deduplication depending on how popular a datum is.Experiments demonstrate that the scheme makes an excellent tradeoff among the computational time,the memory consumption,and the deduplication efficiency.
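    The toy sketch below conveys the popularity-aware idea in the simplest possible form (two standard Bloom filters of different sizes selected by an access counter); it is an editorial illustration, not the paper's PDBF construction.

```python
# Route fingerprints to a bigger, more accurate filter once a datum becomes popular.
import hashlib
from collections import Counter

class SimpleBloom:
    def __init__(self, m_bits, k_hashes):
        self.m, self.k, self.bits = m_bits, k_hashes, 0

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class PopularityBloom:
    def __init__(self, threshold=3):
        self.hot = SimpleBloom(1 << 16, 7)      # popular data: more bits, more hashes
        self.cold = SimpleBloom(1 << 12, 3)     # unpopular data: cheaper filter
        self.seen, self.threshold = Counter(), threshold

    def add(self, item: bytes):
        self.seen[item] += 1
        (self.hot if self.seen[item] >= self.threshold else self.cold).add(item)

    def __contains__(self, item):
        return item in self.hot or item in self.cold

pb = PopularityBloom()
for _ in range(3):
    pb.add(b"popular-chunk")
pb.add(b"rare-chunk")
print(b"popular-chunk" in pb, b"unknown-chunk" in pb)
```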

    Table and Figures | Reference | Related Articles | Metrics
    Study of EEG classification of depression by multi-scale convolution combined with the Transformer
    ZHAI Fengwen, SUN Fanglin, JIN Jing
    Journal of Xidian University    2024, 51 (2): 182-195.   DOI: 10.19665/j.issn1001-2400.20230211
    Abstract75)   HTML8)    PDF(pc) (2907KB)(59)       Save

    In the process of using the deep learning model to classify the EEG signals of depression,aiming at the problem of insufficient feature extraction in single-scale convolution and the limitation of the convolutional neural network in perceiving the global dependence of EEG signals,a multi-scale dynamic convolution network module and the gated transformer encoder module are designed respectively,which are combined with the temporal convolution network,and a hybrid network model MGTTCNet is proposed to classify the EEG signals of patients with depression and healthy controls.First,multi-scale dynamic convolution is used to capture the multi-scale time-frequency information of EEG signals from spatial and frequency domains.Second,the gated transformer encoder is used to learn global dependencies in EEG signals,which effectively enhances the ability of the network to express relevant EEG signal features using the multi-head attention mechanism.Third,the temporal convolution network is used to extract temporal features available for EEG signals.Finally,the extracted abstract features are fed into the classification module for classification.The proposed model is experimentally validated on the public data set MODMA using the Hold-out method and the 10-Fold Cross Validation method,with the classification accuracy being 98.51% and 98.53%,respectively.Compared with the baseline single-scale model EEGNet,the classification accuracy of the proposed model is increased by 1.89% and 1.93%,the F1 value is increased by 2.05% and 2.08%,and the kappa coefficient values are increased by 0.0381 and 0.0385,respectively.Meanwhile,the ablation experiments verify the effectiveness of each module designed in this paper.

    Table and Figures | Reference | Related Articles | Metrics