
Table of Contents

    20 October 2022, Volume 49, Issue 5
      
    Information and Communications Engineering
    Non-coherent integration constant false alarm rate detectors against the log-normal textured sea clutter
    DUAN Tingyu, SHUI Penglang, FENG Tian
    Journal of Xidian University. 2022, 49(5):  1-8.  doi:10.19665/j.issn1001-2400.2022.05.001

    In a coherent radar system, compared with coherent integration detectors, non-coherent integration detectors with fast implementation can meet the demand of real-time processing and are of great significance in practical applications. However, these detectors do not have a constant false alarm rate (CFAR) with respect to the integrated pulse number, the number of reference cells, the shape parameter of the sea clutter model, and the speckle covariance matrix of the sea clutter. In order to realize fast CFAR detection of moving targets in sea clutter whose amplitude distribution has a log-normal texture, this paper proposes a detection method of speckle whitening followed by non-coherent integration, which is CFAR with respect to the scale parameter and the speckle covariance matrix of the sea clutter. Moreover, how power integration and amplitude integration affect detection performance is also analyzed. By using decision thresholds matched to the clutter shape parameter, the integrated pulse number, and the number of reference cells, the CFAR characteristic of the proposed detection method can be ensured, so that the proposed detector can implement large-scene fast detection for maritime radars. Experiments show that amplitude integration behaves better than power integration, that speckle whitening before integration improves the overall signal-to-clutter ratios of moving targets, and that it brings a significant performance improvement for high-speed moving targets.
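    To make the processing chain concrete, here is a minimal NumPy sketch of speckle whitening followed by non-coherent amplitude integration and a simple cell-averaging style threshold test. The threshold factor alpha, the reference/guard cell counts and the toy covariance are illustrative assumptions, not the authors' detector or the thresholds matched to the shape parameter described in the paper.

```python
import numpy as np

def whiten_then_integrate(pulses, speckle_cov):
    """Whiten a pulse train with the speckle covariance estimate, then
    non-coherently integrate the amplitudes (illustrative sketch)."""
    # Cholesky-based whitening: z = L^{-1} x removes speckle correlation.
    L = np.linalg.cholesky(speckle_cov)
    whitened = np.linalg.solve(L, pulses)          # shape: (n_pulses, n_cells)
    return np.abs(whitened).sum(axis=0)            # amplitude integration

def cfar_detect(statistic, cell_idx, n_ref=16, guard=2, alpha=5.0):
    """Compare the integrated statistic of one cell against a threshold built
    from surrounding reference cells (alpha is a placeholder; in the paper it
    would be matched to the shape parameter, pulse number and cell count)."""
    left = statistic[max(0, cell_idx - guard - n_ref):cell_idx - guard]
    right = statistic[cell_idx + guard + 1:cell_idx + guard + 1 + n_ref]
    noise_level = np.concatenate([left, right]).mean()
    return statistic[cell_idx] > alpha * noise_level

# Toy usage: 8 pulses, 64 range cells of correlated Gaussian speckle.
rng = np.random.default_rng(0)
n_pulses, n_cells = 8, 64
cov = 0.9 ** np.abs(np.subtract.outer(np.arange(n_pulses), np.arange(n_pulses)))
pulses = np.linalg.cholesky(cov) @ rng.standard_normal((n_pulses, n_cells))
stat = whiten_then_integrate(pulses, cov)
print(cfar_detect(stat, cell_idx=32))
```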

    Study of height measurement technology for the vehicle-mounted millimeter wave radar in the time-domain
    SHEN Wenhao, YANG Minglei, GUO Junlei, HU Xiaoyu, LIU Nan
    Journal of Xidian University. 2022, 49(5):  9-17.  doi:10.19665/j.issn1001-2400.2022.05.002

    In response to the current situation that general vehicle-mounted radars either do not have the ability to measure height or do so at a high cost, this paper proposes a single-scattering-point target height measurement method using time-domain information from a single antenna in the millimeter wave band. The method utilizes the multipath propagation generated by the interference between the ground-reflected multipath signal and the direct signal in the vehicle environment, obtains the echo power of the target at different distances while the vehicle is moving, uniformizes the sampling points in the inverse of the distance by interpolation, and then extracts the height information of the target using the fast Fourier transform. In this paper, first, the conditions for multipath propagation in the vehicle environment are derived, the geometric model of specular reflection from the ground in the millimeter wave band is constructed, and the mathematical model of multipath propagation is derived; second, the factors that may affect the measurement of the height of the target, such as the installation height of the radar, the sampling interval and the distance, are analyzed in detail, and various suggestions for optimization are proposed. Finally, the results of simulations and real data obtained from experiments show that it is feasible to use this method for the measurement of the height of single-scattering-point targets. Compared with existing methods, this method has the advantages of a simple structure and cost-effectiveness in measuring the height of a single-scattering-point target in the vehicle environment.
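    The following sketch illustrates the signal-processing idea only: the echo power sampled over range is resampled uniformly in u = 1/R and an FFT locates the multipath ripple frequency. The conversion of that frequency to a height uses the flat-ground two-ray relation h_t = f_u * lambda / (2 * h_r), which, together with the radar height and wavelength values, is an assumption of this sketch rather than the paper's derivation.

```python
import numpy as np

def estimate_target_height(ranges_m, echo_power, radar_h=0.5, lam=3.9e-3):
    """Illustrative two-ray height estimate: resample the echo power
    uniformly in u = 1/R, take an FFT, and convert the dominant ripple
    frequency into a height via h_t = f_u * lam / (2 * h_r)."""
    u = 1.0 / np.asarray(ranges_m, dtype=float)
    order = np.argsort(u)
    u, p = u[order], np.asarray(echo_power, dtype=float)[order]
    n = 4 * len(u)
    u_uniform = np.linspace(u[0], u[-1], n)       # uniform grid in 1/R
    p_uniform = np.interp(u_uniform, u, p)
    p_uniform -= p_uniform.mean()                 # drop the DC component
    spec = np.abs(np.fft.rfft(p_uniform))
    freqs = np.fft.rfftfreq(n, d=u_uniform[1] - u_uniform[0])
    f_u = freqs[np.argmax(spec[1:]) + 1]          # dominant ripple frequency
    return f_u * lam / (2.0 * radar_h)

# Toy usage: simulate the two-ray ripple of a 1.2 m target seen by a
# 77 GHz radar mounted 0.5 m above the ground.
h_t, h_r, lam = 1.2, 0.5, 3e8 / 77e9
ranges = np.linspace(10.0, 40.0, 600)
power = 1.0 + np.cos(4.0 * np.pi * h_r * h_t / (lam * ranges))
print(estimate_target_height(ranges, power, radar_h=h_r, lam=lam))  # ~1.2
```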

    MMSE-HDFE-RISIC equalization for SC-FDE scatter communication systems
    LIU Gang, PANG Xiangyun, CHEN Zhentao, GUO Yi
    Journal of Xidian University. 2022, 49(5):  18-24.  doi:10.19665/j.issn1001-2400.2022.05.003

    Tropospheric scatter communication, as an over-the-horizon communication method, is one of the main communication means on the future battlefield because of its high secrecy and interference resistance. Single-carrier frequency-domain equalization (SC-FDE) technology can effectively solve the problems of amplifier transmit peak power limitation and signal multipath fading, and is widely used in tropospheric scatter communication. However, in SC-FDE systems, the residual inter-symbol interference remaining after frequency-domain equalization can seriously degrade the bit error rate (BER) performance of scatter communication systems. To solve this problem, this paper proposes an MMSE-HDFE-RISIC equalization algorithm based on the minimum mean square error criterion for SC-FDE scatter communication systems. The algorithm builds on the hybrid decision feedback equalization algorithm and further reduces the impact of residual inter-symbol interference on system performance by estimating the residual inter-symbol interference in the frequency domain and then compensating for it in the time domain. Simulation analysis shows that the proposed algorithm has a good BER performance over the scatter channel. Under QPSK modulation, the performance improvement is 2.9 dB compared with the MMSE frequency-domain equalization algorithm and about 1 dB compared with the traditional hybrid decision feedback equalization algorithm. Under 16QAM modulation, the performance improvement is 1.3 dB compared with the MMSE frequency-domain equalization algorithm and about 0.6 dB compared with the traditional hybrid decision feedback equalization algorithm.
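    As background for the baseline being improved, here is a minimal sketch of plain MMSE single-carrier frequency-domain equalization for one block, assuming a known channel impulse response and noise variance. The residual-ISI estimation and time-domain compensation of the proposed MMSE-HDFE-RISIC algorithm are not reproduced; the channel and SNR values are toy examples.

```python
import numpy as np

def mmse_fde(rx_block, channel_taps, noise_var):
    """Baseline MMSE frequency-domain equalizer for one SC-FDE block
    (cyclic prefix already removed). Illustrative only."""
    n = len(rx_block)
    H = np.fft.fft(channel_taps, n)                  # channel frequency response
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)    # MMSE per-tone weights
    return np.fft.ifft(W * np.fft.fft(rx_block))     # back to the time domain

# Toy usage: a QPSK block through a 3-tap channel (circular convolution).
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(128, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
h = np.array([1.0, 0.5, 0.2]) / np.linalg.norm([1.0, 0.5, 0.2])
noise_var = 0.01
rx = np.fft.ifft(np.fft.fft(symbols) * np.fft.fft(h, len(symbols)))
rx += np.sqrt(noise_var / 2) * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
est = mmse_fde(rx, h, noise_var)
print(np.mean(np.sign(est.real) == np.sign(symbols.real)))  # fraction of correct I bits
```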

    Improved time reversal multi-user DCSK system
    ZHANG Gang, HE Ping, ZHANG Tianqi
    Journal of Xidian University. 2022, 49(5):  25-36.  doi:10.19665/j.issn1001-2400.2022.05.004

    To overcome the shortcomings of a high bit error rate and a low transmission rate in the shift differential chaos shift keying system based on time reversal, an improved time reversal multi-user DCSK system is proposed to improve the system BER performance and transmission rate. At the transmitter, the chaotic signal generator adopts delaying and time-reversing structures to send two orthogonal reference signals. Different orthogonal Walsh codes are used to carry the information of several users simultaneously, which increases the transmission rate and eliminates the interference between users, thus reducing the error rate. At the receiver, the reference signal is denoised by a moving average filter, the variance of the decision term is reduced to lower the bit error rate of the system, and then correlation demodulation is carried out. The new BER formula is derived and simulated under the additive white Gaussian noise channel and the multi-path Rayleigh fading channel. The influences of the number of users N, the number of copies P, the sequence length, the SNR and the number of paths L on the BER are analyzed. A comparison with the BER of the TRM-DCSK system shows that in the AWGN channel, when the number of users is 2, the number of replications is 8, and the sequence length is 512, the transmission rate of the system is increased by about 300% compared with the TRM-DCSK system, and the error performance is improved by nearly 3 dB, which gives the system good theoretical significance and practical value for engineering applications.

    Approach to the adjustment of the dynamic threshold of the SPMA protocol
    DING Feng, SHI Yan, ZHAO Xiongwang
    Journal of Xidian University. 2022, 49(5):  37-46.  doi:10.19665/j.issn1001-2400.2022.05.005

    The Statistical Priority-based Multiple Access (SPMA) protocol, which appears in the Tactical Targeting Network Technology (TTNT) data link, is an access mode based on random competition. In this protocol, time-domain and frequency-domain resources are used together, and the upper-layer traffic is divided into priorities. By comparing the threshold with the channel occupancy, the channel access status is evaluated and the access behavior of each priority's traffic is controlled. However, the fixed threshold setting in the SPMA protocol cannot flexibly deal with changes in communication services, resulting in a waste of time-frequency resources and difficulty in guaranteeing the quality of service of heterogeneous services. Therefore, this paper proposes a dynamic threshold setting method based on the frame successful transmission probability. This method uses the frame successful transmission probability to reflect the real-time load of the network. By comparing the current frame successful transmission probability with that of the highest-priority service, it dynamically adjusts the threshold of each priority, so that each priority's service can access the channel on demand. In order to maintain the stability of network throughput, on the premise of improving channel utilization, it is ensured that the access delay of high-priority services is less than 2 ms and the transmission success rate is not less than 99%.
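    The adjustment rule described above can be sketched as a simple control loop: each priority's threshold is nudged according to how the measured frame success probability compares with the success probability required by the highest-priority traffic. The update step, bounds and per-priority scaling below are illustrative placeholders, not the paper's exact formula.

```python
def adjust_thresholds(thresholds, p_success, p_target_highest, step=0.05,
                      low=0.0, high=1.0):
    """Dynamically adjust per-priority channel-occupancy thresholds.

    thresholds       -- list of thresholds, index 0 = highest priority
    p_success        -- currently measured frame success probability
    p_target_highest -- success probability required by the highest priority
    (step/low/high are illustrative tuning parameters, not from the paper)
    """
    updated = []
    for i, t in enumerate(thresholds):
        if p_success >= p_target_highest:
            # Lightly loaded network: relax thresholds so lower priorities
            # can also access the channel and use the spare capacity.
            t = min(high, t + step)
        else:
            # Congested network: tighten thresholds, more aggressively for
            # lower priorities, to protect the high-priority traffic.
            t = max(low, t - step * (i + 1))
        updated.append(t)
    return updated

# Toy usage: four priorities, measured success probability below the target.
print(adjust_thresholds([0.9, 0.7, 0.5, 0.3], p_success=0.92, p_target_highest=0.99))
```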

    WSNs node deployment strategy based on the improved multi-objective ant-lion algorithm
    ZHANG Hao, QIN Tao, XU Linghua, WANG Xiao, YANG Jing
    Journal of Xidian University. 2022, 49(5):  47-59.  doi:10.19665/j.issn1001-2400.2022.05.006

    In order to balance coverage, connectivity, and the number of nodes in the deployment of wireless sensor networks (WSNs), a multi-objective node deployment model with the minimum coverage and the connectivity between nodes as constraint conditions is constructed, and, using the idea of the Pareto optimal solution set, a node deployment strategy based on the improved multi-objective ant-lion algorithm (IMOALO) is proposed. First, the Fuch chaotic map is used to initialize the population and increase its diversity. At the same time, the performance of the IMOALO is improved by introducing an adaptive shrinkage boundary factor, which overcomes the MOALO's shortcoming of easily falling into local optima. Second, a time-varying position disturbance strategy is applied to the ants to improve the optimization ability of the algorithm. Third, a comparison on test functions with other multi-objective algorithms shows that the improved algorithm attains the minimum GD and IGD values on different test functions, which verifies the effectiveness of the proposed strategy. Finally, the IMOALO is applied to multi-objective node deployment in WSNs. Simulation results show that, compared with other multi-objective algorithms, the IMOALO can effectively solve the multi-objective optimization deployment problem of the nodes in WSNs, improve the coverage and connectivity of the monitoring area, and provide more feasible solutions for decision makers.
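    Since the strategy works with a Pareto optimal solution set, a small helper for extracting the non-dominated front may clarify the mechanics (all objectives are treated as minimized, so coverage and connectivity are negated). This is a generic utility; the Fuch chaotic initialization, the adaptive boundary factor and the ant-lion search itself are not reproduced here.

```python
import numpy as np

def pareto_front(objs):
    """Return indices of non-dominated rows of objs (all objectives minimized).
    objs: array of shape (n_solutions, n_objectives)."""
    objs = np.asarray(objs, dtype=float)
    keep = []
    for i in range(len(objs)):
        dominated = False
        for j in range(len(objs)):
            if j != i and np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

# Toy usage: objectives = (-coverage, -connectivity, node count), all minimized.
solutions = np.array([[-0.90, -0.95, 40],
                      [-0.92, -0.90, 38],
                      [-0.85, -0.80, 45]])   # the last row is dominated
print(pareto_front(solutions))               # [0, 1]
```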

    GCN-GRU: a fault detection model for wireless sensor networks
    CHEN Junjie, DENG Honggao, MA Mou, JIANG Junzheng
    Journal of Xidian University. 2022, 49(5):  60-67.  doi:10.19665/j.issn1001-2400.2022.05.007

    Wireless sensor networks (WSNs) have become a real-time environmental monitoring solution and are widely used in various fields. Since the sensors in the network are easily affected by the complex working environment, their own hardware, and other factors, they may fail to work. Therefore, fault detection in wireless sensor networks is an indispensable link in their application fields. To address the problem of detecting faulty sensors in a wireless sensor network, this paper proposes a fault detection model named GCN-GRU, which hybridizes a graph convolutional network (GCN) and a gated recurrent unit (GRU). The model consists of three layers: an input layer, a spatiotemporal processing layer and an output layer. The input layer receives the sensor network data and the graph model constructed from the WSN and transmits them to the spatiotemporal processing layer. In the spatiotemporal processing layer, the spatial distribution features of the WSN and the characteristics of faults in high-dimensional space are extracted by the GCN and assembled into high-dimensional time-series data, which act as the input of the GRU. The GRU then extracts the temporal evolution features of the sensor network data and fuses the temporal and spatial evolution characteristics. Finally, the fault detection results are obtained in the output layer. To evaluate the performance of the GCN-GRU, this paper compares the GCN-GRU model with existing fault detection algorithms for WSNs. Numerical experiments show that, compared with the existing algorithms, the GCN-GRU model can significantly improve the fault detection rate and reduce the false alarm rate, thus effectively identifying faulty sensors.
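    To make the three-layer structure concrete, here is a compact PyTorch sketch of one plausible GCN-GRU arrangement: a graph convolution over the WSN adjacency at each time step, a GRU over the resulting per-node sequences, and a per-node fault/normal output. The layer sizes, the row normalization of the adjacency and the output head are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GCNGRU(nn.Module):
    """Sketch of a GCN + GRU fault-detection model for WSN node readings."""
    def __init__(self, in_dim, gcn_dim=32, gru_dim=64, n_classes=2):
        super().__init__()
        self.gcn = nn.Linear(in_dim, gcn_dim)       # weight of one GCN layer
        self.gru = nn.GRU(gcn_dim, gru_dim, batch_first=True)
        self.head = nn.Linear(gru_dim, n_classes)   # per-node fault / normal

    def forward(self, x, adj):
        # x: (n_nodes, T, in_dim) sensor readings, adj: (n_nodes, n_nodes)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        a_norm = adj / deg                           # simple row normalization
        # Graph convolution applied independently at every time step.
        h = torch.relu(self.gcn(torch.einsum("ij,jtf->itf", a_norm, x)))
        out, _ = self.gru(h)                         # temporal evolution per node
        return self.head(out[:, -1, :])              # logits from the last step

# Toy usage: 10 nodes, 20 time steps, 3 features each, ring topology.
n, T, f = 10, 20, 3
adj = torch.eye(n) + torch.roll(torch.eye(n), 1, dims=1) + torch.roll(torch.eye(n), -1, dims=1)
x = torch.randn(n, T, f)
print(GCNGRU(f)(x, adj).shape)    # torch.Size([10, 2])
```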

    Improved hybrid method for gyro random noise compensation
    TIAN Yi, YAN Yuepeng, ZHONG Yanqing, LI Jixiu, MENG Zhen
    Journal of Xidian University. 2022, 49(5):  68-75.  doi:10.19665/j.issn1001-2400.2022.05.008

    In order to reduce the random error in the measurement data of a micro-electromechanical system (MEMS) gyroscope, an improved hybrid noise reduction method combining complete ensemble empirical mode decomposition with adaptive noise and forward linear predictive filtering (CEEMDAN-FLP) is proposed for the case in which a sudden change of the carrier motion state causes a step change in the gyro sensor data. The improved algorithm applies a soft interval thresholding filter to the low-order noise intrinsic mode functions (IMFs), which avoids the loss of high-frequency signal components caused by the conventional approach of removing the noisy IMFs directly. At the same time, an FLP filter is applied to the mixed IMFs to avoid the excessive filtering caused by threshold elevation. Finally, the filtering results are combined with the signal IMFs to reconstruct the data. Simulation results show that the root-mean-square error of the improved algorithm is reduced by 51.53% compared with the original signal, and by 17.39% compared with the EMD filtering algorithm. On measured data, the gyro data filtered by the improved algorithm and by the CEEMDAN algorithm are used for attitude calculation respectively, and the cumulative attitude error of the improved algorithm is only 20.56% of that of the conventional algorithm, without significantly increasing the computational burden. It can be seen that the improved algorithm can effectively improve the measurement accuracy of the sensor.
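    The soft-thresholding step applied to the noise-dominated IMFs can be illustrated as follows. The threshold rule (a universal threshold from a robust noise estimate), the number of IMFs treated as noise, and the fake IMFs in the usage example are assumptions of this sketch; the CEEMDAN decomposition and the FLP filtering of the mixed IMFs are taken as given and not shown.

```python
import numpy as np

def soft_threshold(x, thr):
    """Soft (interval) thresholding: shrink samples toward zero by thr."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def denoise_imfs(imfs, n_noise=3):
    """Sketch of the hybrid reconstruction: soft-threshold the low-order
    (noise-dominated) IMFs and pass the remaining IMFs through unchanged
    (the FLP step for the mixed IMFs is omitted)."""
    cleaned = []
    for i, imf in enumerate(imfs):
        if i < n_noise:
            # Universal threshold from a robust noise estimate (assumption).
            sigma = np.median(np.abs(imf)) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(imf)))
            cleaned.append(soft_threshold(imf, thr))
        else:
            cleaned.append(imf)
    return np.sum(cleaned, axis=0)  # reconstruct the denoised gyro signal

# Toy usage with fake "IMFs": high-frequency noise plus slower components.
t = np.linspace(0, 1, 1000)
imfs = [0.2 * np.random.default_rng(2).standard_normal(1000),
        0.05 * np.sin(2 * np.pi * 50 * t),
        0.02 * np.sin(2 * np.pi * 20 * t),
        0.5 * np.sin(2 * np.pi * 1 * t)]
print(denoise_imfs(imfs).shape)
```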

    Design of a low complexity information bottleneck quantization decoder for LDPC codes
    HU Jiwen, ZHENG Huijuan, TONG Sheng, BAI Baoming, XU Daren, WANG Zhongli
    Journal of Xidian University. 2022, 49(5):  76-83.  doi:10.19665/j.issn1001-2400.2022.05.009

    Recently, the information bottleneck (IB) method has been successfully applied to the design of quantization decoders for LDPC codes. The resulting high-performance quantization decoder can approach the performance of the floating-point SPA (sum-product algorithm) decoder using only 4 quantization bits. Moreover, the LDPC IB decoder deals only with unsigned integers and replaces complex check node operations with simple lookup tables, and is thus very suitable for practical implementation. However, the number of table lookups is proportional to the square of the node degrees, which is unfavorable for LDPC codes with large node degrees, such as finite-geometry LDPC codes and high-rate LDPC codes. To deal with this issue, an improved scheme for the design of IB decoders is proposed based on the forward-backward algorithm. In the proposed scheme, the node operations based on the forward-backward algorithm are divided into three steps: a forward table lookup pass, a backward table lookup pass, and extrinsic information generation. To reduce the memory space, a careful design can be carried out such that the forward and backward table lookup passes share the same set of lookup tables. When generating extrinsic information, nodes make full use of the intermediate messages generated by the forward and backward table lookups and effectively remove redundant calculations, resulting in a total number of table lookups that is linear in the node degrees. Numerical results are provided to demonstrate the effectiveness of the proposed LDPC IB decoder.
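    The forward-backward reorganization can be shown generically: for a degree-d node, forward prefix lookups and backward suffix lookups are each computed once, and the extrinsic output for every edge combines one forward and one backward intermediate, so the number of two-input table lookups grows linearly in d instead of quadratically. The lookup function below is a stand-in (addition modulo the alphabet size), not an actual IB-designed table, and the scheme is a generic forward-backward sketch rather than the paper's decoder.

```python
def forward_backward_extrinsic(messages, lookup):
    """Per-edge extrinsic outputs of a degree-d node via the forward-backward
    schedule: about 3*(d-2) two-input lookups instead of d*(d-2)."""
    d = len(messages)
    fwd = [messages[0]]               # forward pass: prefixes m0∘m1∘...∘mk
    for m in messages[1:-1]:
        fwd.append(lookup(fwd[-1], m))
    bwd = [messages[-1]]              # backward pass: suffixes mk∘...∘m(d-1)
    for m in reversed(messages[1:-1]):
        bwd.append(lookup(bwd[-1], m))
    bwd.reverse()                     # bwd[k] now combines messages[k+1:]
    ext = [bwd[0]]                    # edge 0: everything except messages[0]
    for k in range(1, d - 1):
        ext.append(lookup(fwd[k - 1], bwd[k]))
    ext.append(fwd[-1])               # last edge: everything except the last
    return ext

def table(a, b):                      # stand-in "lookup table": addition mod 16
    return (a + b) % 16

# Toy usage with 16-level messages, checked against the naive quadratic schedule.
msgs = [3, 7, 12, 5, 9]
print(forward_backward_extrinsic(msgs, table))
print([sum(m for j, m in enumerate(msgs) if j != i) % 16 for i in range(len(msgs))])
```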

    Method for the construction of a shortened kernel matrix of multi-kernel polar codes
    HU Ligang, XU Liqing, TAN Xiaoqing, LIU Ling, LV Shanxiang
    Journal of Xidian University. 2022, 49(5):  84-91.  doi:10.19665/j.issn1001-2400.2022.05.010

    As the first channel codes theoretically proven to achieve channel capacity, polar codes are the coding scheme for control channels in the 5G enhanced mobile broadband scenario. Aiming at the limitation of traditional polar codes in constructing a large-dimensional kernel matrix, this paper proposes an improved method of shortening the kernel based on the Kronecker multi-kernel construction. The method first selects factor matrices with a large exponent in the process of multi-kernel construction to obtain matrices with a better performance. Then the characteristics of the partial distance are used to shorten the matrix and obtain a kernel matrix with more flexible dimensions and a better performance. In order to solve the problem that partial distances may exceed the corresponding upper bounds in the process of constructing the kernel matrix, this paper proposes an elimination algorithm based on the Hamming distance. Exploiting the fact that the partial distance of a matrix row vector cannot exceed its Hamming distance, the algorithm reduces the partial distance by reducing the number of 1s in the row vector. This paper gives the decoding method of the fifth-order kernel matrix, which provides more choices for the construction of multi-kernel polar codes. Experiments show that the shortening method based on column weights can obtain a partial kernel matrix with a larger exponent than the Kronecker multi-kernel construction. The method proposed in this paper is better than other shortening methods with a similar idea in terms of the exponent. In terms of decoding, it follows the general structure of traditional polar codes but has a lower decoding complexity.

    Phase-frequency response expansion and verification of digital coded metamaterials
    WANG Guoqiang, MA Hui, GAO Sizhe
    Journal of Xidian University. 2022, 49(5):  92-99.  doi:10.19665/j.issn1001-2400.2022.05.011

    Digital coded metamaterials have an excellent electromagnetic control ability and can be used to realize low-cost digital arrays by controlling the phase in the form of switches to modulate the spatial radiation field. However, the dispersion effect of digital coded metamaterials in practical applications makes the coherent bandwidth of digital quantization phase modulation too narrow. Taking a 1-bit coded metamaterial unit as an example, the phase bandwidth is proposed to characterize the phase modulation consistency. According to the structural characteristics of digital coded metamaterials and transmission line theory, a transmission line structure for a digital coded metamaterial unit is given. By introducing a non-uniform transmission line and adding a multi-branch transmission line, the tuning range of the phase-frequency response and amplitude-frequency response of a digital coded metamaterial unit is extended, and the design standard can be specified according to the bandwidth requirements under different phase modulation consistencies. The electromagnetic simulation software CST is used to optimize the structure parameters of the non-uniform transmission line and the multi-branch transmission line. Simulation results show that the phase bandwidth is broadened under different unit phase differences. Finally, under the two typical application backgrounds of array beamforming and random radiation field generation, the effectiveness of the proposed method in broadband applications is proved by the two indicators of beam directivity and pattern correlation of digital coded metamaterials.

    Scheduling method for multi-sensor cooperative ground target tracking
    ZHANG Yunpu, SHAN Ganlin, FU Qiang
    Journal of Xidian University. 2022, 49(5):  100-108.  doi:10.19665/j.issn1001-2400.2022.05.012

    To achieve accurate ground target tracking in the presence of the Doppler blind zone, a multi-sensor cooperative scheduling method is proposed. First, by analyzing the basic process of sensor scheduling, a basic model including the scheduling action, the target state transition equation, and the sensor measurement equation in the presence of the Doppler blind zone is established. Second, for the move-stop-move characteristics of a ground target, its motion states are divided into a slow stage and a fast stage according to its velocity, and the calculation method for the state probability is given. Then, the variable-structure interacting multiple-model method is introduced to estimate the target state, and the calculation methods for the estimation likelihood function and the state transition probability are given. Finally, the tracking accuracy is quantified by the posterior Cramér-Rao lower bound, and the objective function of non-myopic scheduling is established with the optimal tracking accuracy as the optimization objective. Simulation results show that the proposed method can obtain the optimal sensor scheduling scheme by predicting the future multi-step scheduling gains, so as to achieve sustained high-precision tracking of the target.

    Design of a 77 GHz broadband low sidelobe millimeter wave microstrip antenna
    FAN Wenying, HOU Qingwen, CHEN Xianzhong
    Journal of Xidian University. 2022, 49(5):  109-116.  doi:10.19665/j.issn1001-2400.2022.05.013

    Aiming at the problem of the narrow bandwidth of microstrip antennas, a wideband low-sidelobe millimeter wave microstrip array antenna for mine robot environment sensing is proposed. The millimeter wave antenna array consists of a main radiation element layer, a parasitic patch layer and an air layer. The main radiation unit adopts the Dolph-Chebyshev array synthesis method to achieve low sidelobe characteristics; the air layer and parasitic unit effectively broaden the antenna bandwidth by reducing the equivalent dielectric constant and adding new resonance points; and the feed network design based on a T-type power divider achieves good impedance matching. HFSS simulation results show that the -10 dB impedance bandwidth of the antenna is 7 GHz, the operating frequency range is 76.6~83.6 GHz, the maximum sidelobe level of the E plane is -15 dB, the maximum sidelobe level of the H plane is -24 dB, and the maximum gain is 17 dBi, which can meet the application requirements of millimeter wave radar for long-range and short-range switching and for echo sensitivity to the complex scattering medium over a wide range. Compared with current 77 GHz millimeter wave antennas, this antenna effectively reduces the sidelobe level, further extends the working bandwidth, and improves the radar's anti-interference ability and resolution. In view of the complex geomagnetic environment of underground coal mines, the strong anti-interference ability and high resolution of the antenna are of great advantage in the perception of the underground tunnel environment.

    Computer Science and Technology & Artificial Intelligence
    Small object detection in remote sensing images using non-local context information
    LI Yangyang, MAO Heting, ZHANG Xiaolong, CHEN Yanqiao, CHAI Xinghua
    Journal of Xidian University. 2022, 49(5):  117-124.  doi:10.19665/j.issn1001-2400.2022.05.014

    In recent years, object detection methods based on deep learning have achieved remarkable results and have been successfully applied to remote sensing. However, because remote sensing images cover wide areas and small objects contain little effective information and are difficult to locate, it is challenging to accurately detect small objects in remote sensing images. To solve this problem, non-local information and context are utilized in this paper to improve the quality of small object detection. First, the detector uses a combination of the Refine Feature Pyramid Network (Refine FPN) and the Cross-layer Attention Network (CA-Net) as the backbone, where the Refine FPN obtains richer feature information on small objects, and the CA-Net extracts non-local information and distributes it evenly to each layer. Second, a context transfer module is proposed to transfer the non-local context information to the corresponding region of interest. Finally, a cascade network is used as the detection network to improve the quality of the bounding boxes of small objects. Experiments are carried out on three remote sensing image datasets: Small-DOTA, DIOR, and OHD-SJTU-S. Experimental results show that the mean average precision (mAP) of the proposed detector is the highest on all three datasets. On the three categories of ships, vehicles, and windmills, which contain more small targets in DIOR, the average precision (AP) of the proposed detector is also the highest. This shows that, compared with existing methods, the proposed method can further improve the detection performance for small objects in remote sensing images.

    New intuitionistic fuzzy least squares support vector machine
    ZHANG Dan, ZHOU Shuisheng, ZHANG Wenmeng
    Journal of Xidian University. 2022, 49(5):  125-136.  doi:10.19665/j.issn1001-2400.2022.05.015

    The least squares support vector machine (LSSVM) only needs to solve a linear system of equations to obtain a closed-form solution, and it is widely used in classification problems because of its fast training speed. However, the least squares support vector machine model is easily affected by outliers and noise, which often reduces the classification accuracy. Fuzzy weighting of sample points is an effective way to solve this problem. Intuitionistic fuzzy sets contain both the membership information and the non-membership information of sample points, which can describe the distribution characteristics of sample points in more detail. Therefore, based on the intuitionistic fuzzy set, this paper obtains a more accurate class center by eliminating outliers, and then calculates the distance between a sample point and the class center to obtain the membership degree of the sample point to its class. At the same time, the kernel k-nearest neighbor method is used to count how many of a sample point's k nearest neighbors belong to the other class, from which the non-membership information of the sample point is obtained. Finally, a new fuzzy value is obtained from the membership degree and the non-membership degree of each sample point. Furthermore, the proposed fuzzy values are used to improve the LSSVM model. By assigning low fuzzy values to outliers and noisy points, their influence on the LSSVM model is reduced and the accuracy of the LSSVM model is improved. Experimental results show that, compared with existing algorithms, the proposed algorithm can reduce the influence of outliers and noise on the LSSVM model and improve the robustness of the model.
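    The fuzzy weighting idea can be sketched as follows: membership from the distance to a trimmed class centre, non-membership from the fraction of a point's k nearest neighbours (here in plain Euclidean space rather than the paper's kernel space) that belong to the other class, and a combined weight such as mu*(1 - nu). The trimming rule and the combination formula are assumptions of this sketch, not the paper's definitions.

```python
import numpy as np

def intuitionistic_fuzzy_values(X, y, k=5, trim=0.1):
    """Per-sample fuzzy weights: high for typical points, low for
    outliers/noise (illustrative version of the idea in the abstract)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n = len(y)
    mu = np.zeros(n)     # membership: closeness to own (trimmed) class centre
    nu = np.zeros(n)     # non-membership: fraction of opposite-class neighbours
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        # Trim the farthest points before recomputing the class centre.
        keep = idx[np.argsort(d)[: max(1, int(len(idx) * (1 - trim)))]]
        centre = X[keep].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        mu[idx] = 1.0 - d / (d.max() + 1e-12)
    for i in range(n):
        nn = np.argsort(dists[i])[1:k + 1]          # k nearest, excluding self
        nu[i] = np.mean(y[nn] != y[i])
    return mu * (1.0 - nu)                           # combined fuzzy value

# Toy usage: two Gaussian blobs plus one mislabeled outlier.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2)), [[5, 5]]])
y = np.array([0] * 20 + [1] * 20 + [0])             # last point is an outlier
print(intuitionistic_fuzzy_values(X, y)[-1])        # small weight for the outlier
```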

    Lightweight object detection algorithm based on the improved CenterNet
    LI Yueyan, CHENG Peitao, DU Shuxing
    Journal of Xidian University. 2022, 49(5):  137-144.  doi:10.19665/j.issn1001-2400.2022.05.016

    Due to its complex structure, the CenterNet has a large number of parameters, which leads to a high computational complexity and a long inference time. To solve this problem, a CenterNet-encoder algorithm is proposed for lightweight object detection. First, the fire module is used in the backbone to reduce the number of parameters and increase the calculation speed. Then, an encoding layer is inserted between the backbone and the head, which increases the receptive field and obtains more accurate corners and center points for the heatmaps. Finally, the MSE loss is employed in bounding box regression, which accelerates convergence and further improves the performance. The proposed algorithm achieves 40.5 AP on the MS-COCO test-dev benchmark with 47M parameters. Under an AMD 5900X/32 GB/RTX 3090 environment, the detection speed reaches 18 FPS. Experimental results show that the proposed method outperforms other lightweight methods in the number of parameters, inference time and detection accuracy. Although the precision of the proposed method is slightly lower than that of the CenterNet, the number of parameters is reduced by 77.6%, and the inference speed is increased by 69.3%.
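    The fire module mentioned above is the main parameter-saving device; a standard SqueezeNet-style version is sketched below in PyTorch. The channel sizes are illustrative, and how many such modules replace the original backbone blocks in the proposed network is not specified here.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet-style fire module: a 1x1 squeeze convolution followed by
    parallel 1x1 and 3x3 expand convolutions, concatenated along channels."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.act(self.squeeze(x))
        return torch.cat([self.act(self.expand1(s)),
                          self.act(self.expand3(s))], dim=1)

# Toy usage: a 256-channel feature map is squeezed to 32 channels internally
# and comes out with 2 * 64 = 128 channels, using far fewer weights than a
# single 256 -> 128 3x3 convolution would.
x = torch.randn(1, 256, 64, 64)
print(Fire(256, 32, 64)(x).shape)     # torch.Size([1, 128, 64, 64])
```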

    Energy-aware virtual machine placement strategy for data centers
    YANG Ao, MA Chunmiao, WU Weiguo, WANG Simin, ZHAO Kun
    Journal of Xidian University. 2022, 49(5):  145-153.  doi:10.19665/j.issn1001-2400.2022.05.017

    With the development of the Internet, the scale of data centers continues to expand, and the prominent problem is how to ensure the safe operation of data centers while reducing the operating energy consumption. Current research focuses only on reducing the energy consumption of the data center and does not consider the ambient temperature of the servers. If the load keeps increasing in a high-temperature area, it may lead to local hot spot problems and cause the refrigeration equipment to enter an over-cooling state, resulting in an overall increase of the energy consumption. To solve this problem, this paper proposes an energy-aware virtual machine placement strategy that avoids hot spots while reducing the energy consumption of the data center. The strategy consists of two parts. The first part is a best-fit algorithm which sorts the physical machines according to the amount of available CPU resources. For the current virtual machine request, the physical machine with the smallest urgency value is selected as the target location according to the temperature urgency calculation method proposed in this paper, and the sequence of target physical machines is binarized as the initial population of the genetic algorithm. In the second part, the genetic algorithm cross-mutates the population, selects the next-generation population through the fitness values calculated by the fitness function, and finally obtains the optimal solution through continuous iterative calculation. To verify the effectiveness of the proposed strategy, corresponding experiments are carried out on the CloudSim simulation platform. The simulation results show that the proposed method reduces not only the operating energy consumption but also the temperature fluctuation between servers, thus avoiding the occurrence of hot spots.
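    A toy version of the first stage (best-fit candidate selection guided by a temperature "urgency" value) is sketched below. The urgency formula, its weighting against CPU fit, and the data structures are placeholders rather than the paper's definitions, and the genetic-algorithm refinement stage is not shown.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free: float       # available CPU capacity (e.g. MIPS or cores)
    inlet_temp: float     # current inlet temperature in degrees Celsius

def urgency(host, temp_limit=27.0):
    """Placeholder temperature urgency: how close the host is to its
    thermal limit (0 = cool, 1 = at the limit)."""
    return min(1.0, max(0.0, host.inlet_temp / temp_limit))

def place_vm(hosts, vm_cpu, w_temp=0.5):
    """Best-fit selection with a thermal penalty: among hosts that can hold
    the VM, pick the one minimizing a mix of leftover CPU and urgency."""
    feasible = [h for h in hosts if h.cpu_free >= vm_cpu]
    if not feasible:
        return None
    def score(h):
        leftover = (h.cpu_free - vm_cpu) / max(x.cpu_free for x in hosts)
        return (1 - w_temp) * leftover + w_temp * urgency(h)
    best = min(feasible, key=score)
    best.cpu_free -= vm_cpu
    return best.name

# Toy usage: the cool host with the tightest fit wins over the hot one.
hosts = [Host("h1", 4.0, 26.5), Host("h2", 4.0, 22.0), Host("h3", 8.0, 21.0)]
print(place_vm(hosts, vm_cpu=3.5))   # expected: "h2"
```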

    Cutting force prediction under the variable machining condition incorporating workpiece geometric features
    CHANG Jiantao, LIU Yao, KONG Xianguang, LI Xinwei, CHEN Qiang, SU Xin
    Journal of Xidian University. 2022, 49(5):  154-165.  doi:10.19665/j.issn1001-2400.2022.05.018

    In a machining process, changes in workpiece geometric features lead to variation in the statistical characteristics of the cutting forces, causing significant degradation in the accuracy of traditional data-driven cutting force prediction models. In addition, different processing conditions make the collected cutting force modeling data exhibit obvious distribution differences, causing significant degradation in the generalization ability of such models. To address these problems, this paper proposes a cutting force prediction method incorporating workpiece geometric features under variable machining conditions. First, data preprocessing is carried out, including encoding the workpiece geometric features and working condition information, removing trend items from the cutting force signal, and removing outliers from the cutting force statistical features. Considering the data distribution differences caused by changes in workpiece geometric features and working conditions, the data set is divided into source domain data and target domain data; the source domain data and the target domain data are then divided into training sets and test sets according to the rules, and a cutting force prediction model incorporating the geometric features of the workpiece under variable conditions is constructed based on transfer learning. Finally, experimental verification is carried out with different data quantities, single processing geometry characteristics, variable working conditions, and different algorithms. Experimental results show that, compared with traditional data-driven cutting force prediction models, the method is more suitable for predicting cutting forces under changing working conditions and workpiece geometric characteristics, maintains a higher prediction accuracy with fewer data samples, and has a better generalization performance, so that it offers better practicality.

    Algorithm for classification of few-shot images by dynamic subspace
    REN Jiaxing, CAO Yudong, CAO Rui, YAN Jia
    Journal of Xidian University. 2022, 49(5):  166-174.  doi:10.19665/j.issn1001-2400.2022.05.019

    Existing few-shot image classification algorithms based on metric learning have a low image classification precision and a weak generalization performance. A few-shot image classification algorithm based on dynamic subspaces is proposed in this paper. First, a residual neural network is used to extract few-shot image features. Dynamically orthogonalized projection subspaces representing the image categories are generated from the decomposed image features of the various categories, so as to enhance the differences among category features in the orthogonalized projection subspaces. Second, a dynamic subspace classifier based on few-shot learning is constructed by fusing the subspace loss function and the cross-entropy loss function, so as to enhance the similarity of samples in the same category. The inter-class distance of the subspace is dynamically updated with the change of the sampling amount and sample similarity. Finally, the feature vector of the target image is input into the dynamic subspace classifier, and the squared Euclidean distance and the softmax function are used to calculate the category probabilities of the target feature and predict its category. Performance testing is performed on few-shot data sets such as mini-ImageNet, CIFAR-100 and Pascal VOC2007. The proposed algorithm is superior to current mainstream few-shot image classification algorithms, and its average classification precision is 2.3% higher than that of DSN, a strong current method, under the 5-way 5-shot setting. Experiments show that the proposed algorithm has a strong generalization performance and anti-interference ability.
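    The subspace classification step can be sketched with plain NumPy: build an orthonormal basis per class from the support features via SVD, measure each query's squared residual after projection onto each class subspace, and turn the negative distances into class probabilities with a softmax. The subspace dimension, the random toy features and the absence of the learned loss terms are simplifications of this sketch.

```python
import numpy as np

def class_subspaces(support_feats, support_labels, dim=4):
    """Orthonormal basis (and mean) of each class's feature subspace."""
    subspaces = {}
    for c in np.unique(support_labels):
        F = support_feats[support_labels == c]
        mean = F.mean(axis=0)
        # Left singular vectors of the centred features span the subspace.
        U, _, _ = np.linalg.svd((F - mean).T, full_matrices=False)
        subspaces[c] = (mean, U[:, :dim])
    return subspaces

def predict(query_feats, subspaces):
    """Class probabilities from squared projection residuals (softmax)."""
    classes = sorted(subspaces)
    dists = []
    for c in classes:
        mean, B = subspaces[c]
        Q = query_feats - mean
        resid = Q - (Q @ B) @ B.T          # component outside the subspace
        dists.append(np.sum(resid ** 2, axis=1))
    logits = -np.stack(dists, axis=1)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True), np.array(classes)

# Toy 5-way 5-shot usage with random 64-dimensional features.
rng = np.random.default_rng(4)
sup = rng.standard_normal((25, 64)) + np.repeat(np.arange(5), 5)[:, None]
lab = np.repeat(np.arange(5), 5)
probs, classes = predict(rng.standard_normal((3, 64)) + 2.0, class_subspaces(sup, lab))
print(classes[probs.argmax(axis=1)])
```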

    Progressive dilation residual network for deep binocular stereo matching
    LIU Shigang, ZHANG Tong, YANG Jiangong, GE Bao
    Journal of Xidian University. 2022, 49(5):  175-180.  doi:10.19665/j.issn1001-2400.2022.05.020

    To realize a lightweight and high-precision binocular stereo matching network, we propose a progressive dilated residual deep binocular stereo matching network, PDR_Net. In the feature extraction module, a progressive dilated residual network structure is proposed. The dilated convolution network replaces pooling-based down-sampling to obtain the multi-scale feature information of the image, which reduces the loss of image feature information caused by the scale transformation in pooling down-sampling. At the same time, a residual network is introduced to alleviate the loss of image feature information due to the characteristics of the dilated convolution network. A progressive cascade method is used to fuse the feature information between the branches of each scale, which promotes the fusion of feature information from each image scale; that is, the strategy reduces the network complexity while retaining more image features. Finally, in the 3D convolutional network module, a stacked hourglass encoder-decoder network structure is adopted, and the feature maps are effectively combined through skip connections. A channel attention mechanism is introduced, which enhances the network's ability to aggregate and learn the feature information of different disparities from different channels, and deepens the connection between feature points of different disparities. Compared with existing networks, the proposed PDR_Net has the advantages of fewer parameters, a faster speed and a higher accuracy.

    Event detection by combining self-attention and CNN-BiGRU
    WANG Kan, WANG Mengyang, LIU Xin, TIAN Guoqiang, LI Chuan, LIU Wei
    Journal of Xidian University. 2022, 49(5):  181-188.  doi:10.19665/j.issn1001-2400.2022.05.021

    Event detection methods based on convolutional neural networks and recurrent neural networks have been widely investigated. However, convolutional neural networks only consider local information within the convolution window and ignore the context of words. Recurrent neural networks suffer from vanishing gradients and short-term memory, and their variant, the gated recurrent unit, cannot obtain the features of each individual word. Therefore, this paper proposes an event detection method based on a self-attention and convolutional bidirectional gated recurrent unit model, which takes both word vectors and position vectors as inputs. It can not only extract vocabulary-level features of different granularities through the convolutional neural network and sentence-level features through the bidirectional gated recurrent units, but also consider global information and attend to the features that matter more for event detection through self-attention. The extracted lexical-level features and sentence-level features are combined into joint features, and the candidate words are classified by a softmax classifier to complete the event detection task. Experimental results show that the F-scores of trigger word identification and classification reach 78.9% and 76.0% respectively on the ACE2005 English corpus, which are better than the results of the benchmark methods. Furthermore, the model converges well. This shows that the proposed model based on self-attention and convolutional bidirectional gated recurrent units possesses a good text feature extraction ability and improves the performance of event detection.
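    A compact PyTorch sketch of the kind of model described is given below: word and position embeddings, a 1-D convolution for lexical features, a bidirectional GRU for sentence features, self-attention over the GRU outputs, and a softmax classifier over the concatenated features. The embedding sizes, kernel width, pooling choices and the way the branches are combined are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AttnCnnBiGRU(nn.Module):
    """Sketch of an event-detection classifier over token sequences."""
    def __init__(self, vocab, max_len, emb=100, pos_emb=20, conv=64,
                 hidden=64, n_types=34):
        super().__init__()
        self.word = nn.Embedding(vocab, emb)
        self.pos = nn.Embedding(max_len, pos_emb)
        self.conv = nn.Conv1d(emb + pos_emb, conv, kernel_size=3, padding=1)
        self.gru = nn.GRU(emb + pos_emb, hidden, batch_first=True,
                          bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                          batch_first=True)
        self.cls = nn.Linear(conv + 2 * hidden, n_types)

    def forward(self, tokens, positions):
        x = torch.cat([self.word(tokens), self.pos(positions)], dim=-1)
        lexical = torch.relu(self.conv(x.transpose(1, 2)))      # (B, conv, L)
        lexical = lexical.max(dim=2).values                     # pool over words
        sent, _ = self.gru(x)                                    # (B, L, 2*hidden)
        attended, _ = self.attn(sent, sent, sent)                # self-attention
        sentence = attended.mean(dim=1)                          # pool over words
        return self.cls(torch.cat([lexical, sentence], dim=-1))  # type logits

# Toy usage: a batch of 2 sentences of length 30.
tokens = torch.randint(0, 5000, (2, 30))
positions = torch.arange(30).expand(2, 30)
print(AttnCnnBiGRU(vocab=5000, max_len=30)(tokens, positions).shape)  # (2, 34)
```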

    Multi-server dynamic searchable encryption scheme supporting result verification
    HE Yu, TIAN Youliang, WAN Liang, YANG Li
    Journal of Xidian University. 2022, 49(5):  189-200.  doi:10.19665/j.issn1001-2400.2022.05.022

    Aiming at the low retrieval efficiency and the single point of failure (SPOF) of traditional single-server searchable encryption schemes, this paper constructs a multi-cloud-server searchable encryption scheme supporting result verification based on Shamir secret sharing and smart contracts. First of all, Shamir secret sharing is used to split the data into multiple different data blocks, which are encrypted and stored on independent servers, and a multi-cloud-server searchable encryption model is constructed to prevent the massive data loss caused by a SPOF and to realize safe distributed storage and efficient querying of the data. Furthermore, using the automatic execution property of smart contracts, a verification method for query results is constructed: the verification of query results is realized by signing a contract, which solves the problem that the correctness of the returned results is hard to guarantee under the semi-trusted cloud server model. In addition, a block matrix is introduced to construct a sub-matrix for the updated data so as to reduce the computational cost of queries after documents are updated, and by adding false keyword information, guessing attacks by cloud servers are prevented and the security of the updated data is guaranteed. Finally, security analysis and experimental analysis show that the scheme can effectively protect data privacy while reducing the index generation time, and achieves a higher retrieval efficiency compared with other schemes.
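    The distributed-storage part relies on standard (t, n) Shamir secret sharing; a minimal field-arithmetic sketch is given below. The prime modulus and share counts are arbitrary example values, and the sketch has no connection to the paper's index structures, false-keyword padding or smart-contract verification.

```python
import random

P = 2**61 - 1    # a Mersenne prime; real deployments use a larger field

def split(secret, n=5, t=3):
    """Split an integer secret into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Toy usage: split a data block (as an integer) across 5 servers, read back 3.
shares = split(123456789, n=5, t=3)
print(reconstruct(random.sample(shares, 3)))   # 123456789
```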

    Impossible differential attack on the encryption algorithm Simpira v2
    LIU Ya, GONG Jiaxin, ZHAO Fengyu
    Journal of Xidian University. 2022, 49(5):  201-212.  doi:10.19665/j.issn1001-2400.2022.05.023

    It is important to evaluate the security of the symmetric encryption algorithms used in various application scenarios for protecting data securely. Simpira v2 is a family of cryptographic permutations with a high throughput proposed at ASIACRYPT 2016, and it is very suitable for protecting the confidentiality of data in information systems. Simpira-6 is the 6-branch member of the Simpira v2 family, and its block length is 768 bits. This paper studies the security of Simpira-6, used as the permutation of an Even-Mansour construction, against impossible differential attacks. First, we propose the longest 9-round impossible differential for Simpira-6 known so far, on the basis of which the adversary could mount an impossible differential attack whose time complexity, however, is higher than that of an exhaustive search. Second, under the security claim of Simpira v2, we present a 7-round impossible differential attack on Simpira-6 to recover the 384-bit master key. The data and time complexities of this attack are 2^57.07 chosen plaintexts and 2^57.07 7-round Simpira-6 encryptions, respectively. Third, under the security claim of Even-Mansour, we present an 8-round impossible differential attack on Simpira-6 to recover all 768 key bits. The data and time complexities are 2^168 chosen plaintexts and 2^168 8-round Simpira-6 encryptions. These are the first analytical results on Simpira-6 against impossible differential attacks and provide an important theoretical foundation for the application of Simpira v2 in the future.

    Impossible differential cryptanalysis of the Gimli authenticated encryption scheme
    TAN Hao, SHEN Bing, MIAO Xudong, ZHANG Wenzheng
    Journal of Xidian University. 2022, 49(5):  213-220.  doi:10.19665/j.issn1001-2400.2022.05.024

    Gimli is a second-round candidate in the lightweight cryptography standardization process initiated by the National Institute of Standards and Technology of the United States. The current security analysis of Gimli focuses mainly on the Gimli permutation, the Gimli hash function, and Gimli authenticated encryption with associated data. The Gimli authenticated encryption scheme adopts a sponge structure, which is suitable for data encryption in constrained environments. At present, the best published state recovery attack on the Gimli authenticated encryption scheme covers 9 rounds, with a time complexity of 2^190 and a data complexity of 2^192. This paper designs a differential propagation system based on the Gimli permutation and finds a 7-round impossible differential suitable for analyzing sponge-based authenticated encryption schemes. This impossible differential only constrains the value of a 1-bit output difference, which significantly reduces the time and data complexity of the state recovery phase. In this paper, the 7-round impossible differential is extended forward by 4 rounds, and a state recovery attack on 11 rounds of the Gimli authenticated encryption scheme is realized. In the state recovery phase, based on the weak diffusion of the first two rounds of the Gimli permutation, the 2^128 key guesses are reduced to two sets of 2^64 key guesses. The time complexity of this state recovery attack is about 2^110 encryptions, and the data complexity is about 2^52.5, which is better than the state recovery attack results on the Gimli authenticated encryption scheme in the existing public literature.
