Funding: Supported by the National Key Research and Development Program of China (2016YFB0500901), the Natural Science Foundation of Shanghai (18ZR1437200), and the Satellite Mapping Technology and Application National Key Laboratory of Geographical Information Bureau (KLSMTA-201709).
Abstract: According to the oversampling imaging characteristics, an infrared small target detection method based on deep learning is proposed. A 7-layer deep convolutional neural network (CNN) is designed to automatically extract small target features and suppress clutter in an end-to-end manner. The input of the CNN is an original oversampling image, while the output is a clutter-suppressed feature map. The CNN contains only convolution and non-linear operations, and the resolution of the output feature map is the same as that of the input image. The L1-norm loss function is used, and a large amount of training data is generated to train the network effectively. Results show that, compared with several baseline methods, the proposed method improves the signal-to-clutter ratio gain and background suppression factor by 3–4 orders of magnitude and has more powerful target detection performance.
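To make the architecture concrete, the following is a minimal sketch, not the authors' code, of a 7-layer fully convolutional network that preserves the input resolution and is trained with an L1 loss; the channel widths, kernel sizes, and tensor shapes are illustrative assumptions.

```python
# Minimal sketch of a 7-layer fully convolutional network that maps an
# oversampled infrared image to a clutter-suppressed map of the same size.
import torch
import torch.nn as nn

class SmallTargetCNN(nn.Module):
    def __init__(self):
        super().__init__()
        layers, ch = [], [1, 32, 64, 64, 64, 32, 16]   # assumed channel widths
        for c_in, c_out in zip(ch[:-1], ch[1:]):
            # "same" padding keeps the spatial resolution unchanged
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(ch[-1], 1, 3, padding=1))  # 7th conv layer, no activation
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = SmallTargetCNN()
image = torch.rand(8, 1, 128, 128)          # batch of oversampled IR patches (placeholder)
target = torch.rand(8, 1, 128, 128)         # clutter-free reference maps (placeholder)
loss = nn.L1Loss()(model(image), target)    # L1-norm loss as stated in the abstract
loss.backward()
```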
Funding: Supported by the 2017 Harbin Application Technology Research and Development Funds Innovation Talent Project (2017RAQXJ079).
Abstract: To accurately identify soybean pests and diseases, in this paper, a deep convolutional network model was used to determine whether or not a soybean crop possessed pests and diseases. The proposed deep convolutional network could learn the high-dimensional feature representation of images by using their depth. An inception module was used to construct the neural network. In the inception module, multiscale convolution kernels were used to extract the distributed characteristics of soybean pests and diseases at different scales and to perform cascade fusion. The model then trained the SoftMax classifier in a unified framework. This yielded a recognition model for soybean pests and diseases, and the effectiveness of the method was then verified. In this study, 800 soybean leaf images were taken as the experimental objects. Of these 800 images, 400 were selected for network training, and the remaining 400 images were used for the network test. Furthermore, the classical convolutional neural network was optimized. The accuracies before and after optimization were 96.25% and 95.81%, respectively, in terms of extracting image features. This type of research might be applied to achieve a degree of automation in agricultural field management.
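For readers unfamiliar with the inception idea mentioned above, here is a hedged sketch of an inception-style block with parallel multiscale convolutions whose outputs are concatenated before a softmax classifier; the branch widths, input size, and two-class head are assumptions rather than the paper's exact configuration.

```python
# Inception-style block: parallel 1x1/3x3/5x5 convolutions plus a pooled
# branch, concatenated ("cascade fusion") and fed to a softmax classifier.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, c_in, c_branch=16):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c_branch, kernel_size=1)
        self.b3 = nn.Conv2d(c_in, c_branch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(c_in, c_branch, kernel_size=5, padding=2)
        self.pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(c_in, c_branch, kernel_size=1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)

net = nn.Sequential(
    InceptionBlock(3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),                    # healthy vs. diseased leaf (assumed labels)
)
logits = net(torch.rand(4, 3, 224, 224))  # placeholder leaf images
probs = torch.softmax(logits, dim=1)      # SoftMax classifier output
```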
Funding: Project (52072412) supported by the National Natural Science Foundation of China; Project (2019CX005) supported by the Innovation Driven Project of Central South University, China.
Abstract: PM2.5 forecasting technology can provide a scientific and effective way to assist environmental governance and protect public health. To forecast PM2.5, an enhanced hybrid ensemble deep learning model is proposed in this research. The whole framework of the proposed model can be summarized as follows: the original PM2.5 series is decomposed into 8 sub-series with different frequency characteristics by variational mode decomposition (VMD); the long short-term memory (LSTM) network, echo state network (ESN), and temporal convolutional network (TCN) are applied for parallel forecasting of the 8 different-frequency PM2.5 sub-series; and the gradient boosting decision tree (GBDT) is applied to assemble and reconstruct the forecasting results of the LSTM, ESN, and TCN. By comparing the forecasting performance of the models on 3 PM2.5 series collected from Shenyang, Changsha, and Shenzhen, it can be concluded that GBDT is a more effective method for integrating the forecasting results than traditional heuristic algorithms. The MAE values of the proposed model on the 3 PM2.5 series are 1.587, 1.718, and 1.327 μg/m³, respectively, and the proposed model achieves more accurate results in all experiments than sixteen alternative forecasting models, including three state-of-the-art models.
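The ensemble step can be illustrated with a small sketch: the per-sub-series forecasts produced by the LSTM, ESN, and TCN are stacked as features, and a gradient boosting regressor reconstructs the final PM2.5 value. The forecasts below are random placeholders standing in for the trained sub-models, and the hyperparameters are assumptions.

```python
# GBDT reconstruction step only: per-mode forecasts in, final PM2.5 out.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

n_samples, n_subseries = 500, 8
sub_forecasts = np.random.rand(n_samples, n_subseries)   # one forecast per VMD mode
true_pm25 = sub_forecasts.sum(axis=1) + 0.1 * np.random.randn(n_samples)

gbdt = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
gbdt.fit(sub_forecasts[:400], true_pm25[:400])            # train on historical data
final_forecast = gbdt.predict(sub_forecasts[400:])        # reconstructed PM2.5 series
mae = np.abs(final_forecast - true_pm25[400:]).mean()
print(f"MAE on held-out samples: {mae:.3f}")
```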
Abstract: The degradation process of lithium-ion batteries is intricately linked to their entire lifecycle as power sources and energy storage devices, encompassing aspects such as performance delivery and cycling utilization. Consequently, the accurate and expedient estimation or prediction of the aging state of lithium-ion batteries has garnered extensive attention. Nonetheless, prevailing research predominantly concentrates on either aging estimation or prediction, neglecting the dynamic fusion of both facets. This paper proposes a hybrid model for capacity aging estimation and prediction based on deep learning, wherein salient features highly pertinent to aging are extracted from charge and discharge relaxation processes. By amalgamating historical capacity decay data, the model dynamically furnishes estimations of the present capacity and forecasts of future capacity for lithium-ion batteries. Our approach is validated against a novel dataset involving charge and discharge cycles at varying rates. Specifically, under a charging condition of 0.25 C, a mean absolute percentage error (MAPE) of 0.29% is achieved. This outcome underscores the model's adeptness in harnessing relaxation processes commonly encountered in the real world and synergizing with historical capacity records within battery management systems (BMS), thereby affording estimations and prognostications of capacity decline with heightened precision.
Abstract: The accurate and efficient prediction of explosive detonation properties has important engineering significance for weapon design. Traditional methods for predicting detonation performance include empirical formulas, equations of state, and quantum chemical calculation methods. In recent years, with the development of computer performance and deep learning methods, researchers have begun to apply deep learning methods to the prediction of explosive detonation performance. The deep learning method has the advantage of simple and rapid prediction of explosive detonation properties. However, some problems remain in the study of detonation properties based on deep learning. For example, there are few studies on the prediction of mixed explosives, on the prediction of the parameters of the equation of state of explosives, and on the use of explosive properties to predict the formulation of explosives. Based on an artificial neural network model and a one-dimensional convolutional neural network model, three improved deep learning models were established in this work with the aim of solving these problems: a detonation parameters prediction model, a JWL equation of state (EOS) prediction model, and an inverse prediction model. The training data for these models were obtained through the KHT thermochemical code. After training, the models were tested for overfitting using a validation set. Through the model-accuracy test, the prediction accuracy for real explosive formulations was evaluated by comparing the predicted values with reference values. The results show that the model errors were within 10% and 3% for the prediction of detonation pressure and detonation velocity, respectively; these accuracies refer to tested explosive formulations consisting of TNT, RDX, and HMX. For the prediction of the equation of state of explosives, the correlation coefficient between the predicted and reference curves was above 0.99. For the inverse prediction model, the prediction error of the explosive formulation was within 9%. This indicates that the models have utility in engineering.
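As a hedged illustration of the forward-prediction idea, the sketch below maps an assumed formulation vector (mass fractions of TNT, RDX, and HMX) to detonation velocity and pressure with a small neural network; the synthetic targets stand in for KHT-generated training data, and the network size is an assumption.

```python
# Small multi-output network: formulation fractions -> (velocity, pressure).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
fractions = rng.dirichlet(np.ones(3), size=1000)       # TNT/RDX/HMX mass fractions
targets = np.column_stack([                            # placeholder D (km/s), P (GPa)
    6.9 + 1.9 * fractions[:, 1] + 2.2 * fractions[:, 2],
    19.0 + 15.0 * fractions[:, 1] + 20.0 * fractions[:, 2],
])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(fractions[:800], targets[:800])              # KHT data would go here
pred = model.predict(fractions[800:])
rel_err = np.abs(pred - targets[800:]) / targets[800:]
print("max relative error:", rel_err.max())
```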
Funding: Project (KJZD-M202000801) supported by the Major Project of Chongqing Municipal Education Commission, China; Project (2016YFE0205600) supported by the National Key Research & Development Program of China; Project (CXQT19023) supported by the Chongqing University Innovation Group Project, China; Projects (KFJJ2018069, 1853061, 1856033) supported by the Key Platform Opening Project of Chongqing Technology and Business University, China.
Abstract: It is generally believed that intelligent management for sewage treatment plants (STPs) is essential to the sustainable engineering of future smart cities. The core of management lies in the precise prediction of daily volumes of sewage. The generation of sewage is the result of multiple factors from the whole social system. Characterized by strong process abstraction ability, data mining techniques have been viewed as promising prediction methods for realizing intelligent STP management. However, existing data mining-based methods for this purpose focus on only a single factor, such as an economic or meteorological factor, and ignore their collaborative effects. To address this challenge, a deep learning-based intelligent management mechanism for STPs is proposed to predict business volume. Specifically, the grey relation algorithm (GRA) and gated recursive unit network (GRU) are combined into a prediction model (GRA-GRU). The GRA is utilized to select the factors that have a significant impact on the sewage business volume, and the GRU is set up to output the prediction results. We conducted a large number of experiments to verify the efficiency of the proposed GRA-GRU model.
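The factor-selection step can be sketched as follows: a grey relational grade is computed between the sewage-volume series and each candidate factor, and only factors above a threshold are passed to the GRU forecaster. The normalization, resolution coefficient, threshold, and placeholder data are assumptions; the GRU itself is omitted.

```python
# Grey relational analysis (GRA) for selecting influential factors.
import numpy as np

def grey_relational_grade(reference, factors, rho=0.5):
    # min-max normalize each series so magnitudes are comparable
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    x0 = norm(reference)
    grades = []
    for xi in factors:
        delta = np.abs(x0 - norm(xi))
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        grades.append(coeff.mean())          # grade = mean relational coefficient
    return np.array(grades)

volume = np.random.rand(365)                 # daily sewage volume (placeholder)
candidates = np.random.rand(6, 365)          # economic / meteorological factors
grades = grey_relational_grade(volume, candidates)
selected = np.where(grades > 0.6)[0]         # assumed selection threshold
print("selected factor indices:", selected)
```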
Funding: The Project of the National Natural Science Foundation of China (Grant No. 62106283), the Project of the National Natural Science Foundation of China (Grant No. 72001214) to provide funds for conducting experiments, and the Project of the Natural Science Foundation of Shaanxi Province (Grant No. 2020JQ-484).
Abstract: The scale of ground-to-air confrontation task assignment is large, and many concurrent task assignments and random events need to be handled. When existing task assignment methods are applied to ground-to-air confrontation, efficiency is low in dealing with complex tasks and there are interactive conflicts in multiagent systems. This study proposes a multiagent architecture based on a one-general agent with multiple narrow agents (OGMN) to reduce task assignment conflicts. Considering the slow speed of traditional dynamic task assignment algorithms, this paper proposes the proximal policy optimization for task assignment of general and narrow agents (PPO-TAGNA) algorithm. The algorithm is based on the idea of the optimal assignment strategy and, combined with the training framework of deep reinforcement learning (DRL), adds a multihead attention mechanism and a stage reward mechanism to the bilateral band clipping PPO algorithm to solve the problem of low training efficiency. Finally, simulation experiments are carried out in the digital battlefield. The multiagent architecture based on OGMN combined with the PPO-TAGNA algorithm can obtain higher rewards faster and has a higher win ratio. By analyzing agent behavior, the efficiency, superiority, and rationality of resource utilization of this method are verified.
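For orientation, the sketch below shows the clipped PPO surrogate objective that PPO-TAGNA builds on; the multihead attention encoder and stage reward mechanism described in the abstract are specific to the paper and are not reproduced here.

```python
# Clipped PPO surrogate loss on placeholder log-probabilities and advantages.
import torch

def ppo_clip_loss(new_logp, old_logp, advantage, eps=0.2):
    ratio = torch.exp(new_logp - old_logp)                 # pi_new / pi_old
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()           # negate to maximize surrogate

new_logp = torch.randn(64, requires_grad=True)
old_logp = new_logp.detach() + 0.1 * torch.randn(64)
advantage = torch.randn(64)
loss = ppo_clip_loss(new_logp, old_logp, advantage)
loss.backward()
```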
Funding: Supported by the National Defence Pre-research Foundation of China.
Abstract: The Nyquist folding receiver (NYFR) is a receiver structure that realizes analog-to-information conversion of low probability of intercept (LPI) signals. Aiming at the problem of receiving LPI radar signals, the time-domain, frequency-domain, and time-frequency-domain properties of signals intercepted by the NYFR structure are studied. Combined with the time-frequency analysis (TFA) method, a radar recognition scheme based on deep learning (DL) is introduced, which can reliably classify common LPI radar signals. First, the structure of the NYFR and its characteristics in the time domain, frequency domain, and time-frequency domain are analyzed. Then, the received signal is converted into a time-frequency image (TFI). Finally, four kinds of DL algorithms are used to classify LPI radar signals. Simulation results demonstrate the correctness of the NYFR structure, and the effectiveness of the proposed recognition method is verified by comparison experiments.
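The TFI preprocessing step can be sketched as follows: the intercepted signal is converted to a time-frequency image with a short-time Fourier transform and normalized before being fed to an image classifier. The LFM test signal, sampling rate, and STFT parameters are illustrative assumptions.

```python
# Signal -> time-frequency image (TFI) for a downstream CNN classifier.
import numpy as np
from scipy.signal import stft

fs = 1e6                                               # sample rate (assumed)
t = np.arange(0, 1e-3, 1 / fs)
signal = np.cos(2 * np.pi * (1e5 * t + 2e8 * t**2))    # simple LFM pulse (placeholder)

f, tau, Zxx = stft(signal, fs=fs, nperseg=128, noverlap=96)
tfi = 20 * np.log10(np.abs(Zxx) + 1e-12)               # dB-scaled spectrogram
tfi = (tfi - tfi.min()) / (tfi.max() - tfi.min())      # normalize to [0, 1]
print("TFI shape (freq bins x time frames):", tfi.shape)
# `tfi` can now be resized and batched as the input image of a DL classifier.
```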
Funding: This work was supported by the National Natural Science Foundation of China (61571388, 61871465, 62071414) and the Project of Introducing Overseas Students in Hebei Province (C20200367).
Abstract: The issue of small-angle maneuvering target inverse synthetic aperture radar (ISAR) imaging has been successfully addressed by popular motion compensation algorithms. However, when the target's rotational velocity is sufficiently high during the dwell time of the radar, such compensation algorithms cannot obtain a high-quality image. This paper proposes an ISAR imaging algorithm based on the keystone transform and a deep learning algorithm. The keystone transform is used to coarsely compensate for the target's rotational and translational motion, and the deep learning algorithm is used to achieve a super-resolution image. Uniformly distributed point-target data are used as the training dataset for the U-Net network. In addition, this method does not require estimating the motion parameters of the target, which simplifies the algorithm steps. Finally, several experiments are performed to demonstrate the effectiveness of the proposed algorithm.
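A hedged sketch of the keystone transform used for coarse compensation is given below: for every range-frequency bin f, the slow-time axis is rescaled by f0/(f0 + f) so that linear range migration becomes independent of frequency. Linear interpolation, the radar parameters, and the random echo data are simplifying assumptions; sinc interpolation is more common in practice.

```python
# Keystone transform: per-frequency resampling of the slow-time axis.
import numpy as np

def keystone_transform(data, f_range, f0, t_slow):
    """data: (num_freq_bins, num_pulses) range-frequency x slow-time samples."""
    out = np.zeros_like(data)
    for i, f in enumerate(f_range):
        scaled_t = (f0 / (f0 + f)) * t_slow      # virtual slow-time sampling points
        out[i] = (np.interp(scaled_t, t_slow, data[i].real)
                  + 1j * np.interp(scaled_t, t_slow, data[i].imag))
    return out

f0 = 10e9                                         # carrier frequency (assumed)
f_range = np.linspace(-250e6, 250e6, 256)         # range-frequency bins (assumed)
t_slow = np.arange(128) * 1e-3                    # pulse times (assumed PRI = 1 ms)
echo = np.random.randn(256, 128) + 1j * np.random.randn(256, 128)  # placeholder echoes
compensated = keystone_transform(echo, f_range, f0, t_slow)
```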
Funding: The National Natural Science Foundation of China (Grant Nos. 61905115, 62105151, 62175109, U21B2033); Leading Technology of the Jiangsu Basic Research Plan (Grant No. BK20192003); the Youth Foundation of Jiangsu Province (Grant Nos. BK20190445, BK20210338); the Fundamental Research Funds for the Central Universities (Grant No. 30920032101); and the Open Research Fund of the Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (Grant No. JSGP202105), which provided funds for conducting experiments.
Abstract: As a representative of flexibility in optical imaging media, fiber bundles have emerged in recent years as a promising architecture in the development of compact visual systems. To tackle the problems of universal honeycomb artifacts and low signal-to-noise ratio (SNR) imaging in fiber bundles, an iterative super-resolution reconstruction network based on a physical model is proposed. Under the constraint of alternately solving the two subproblems of the data fidelity term and the prior regularization term, the network can efficiently "regenerate" the lost spatial resolution with deep learning. By building and calibrating a dual-path imaging system, a real-world dataset is constructed in which paired low-resolution (LR) and high-resolution (HR) images of the same scene can be generated simultaneously. Numerical results on both the United States Air Force (USAF) resolution target and complex target objects demonstrate that the algorithm can restore high-contrast images without pixelated noise. On the basis of super-resolution reconstruction, compound-eye image composition based on fiber bundles is also included in this paper to meet actual imaging requirements. The proposed work is the first to apply a physical model-based deep learning network to fiber bundle imaging in the infrared band, effectively promoting the engineering application of thermal radiation detection.
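The alternating scheme described above can be sketched as a plug-and-play style loop: a gradient step on the data-fidelity term alternates with a prior/regularization update, represented here by a placeholder denoiser. The degradation operator, step size, and denoiser are assumptions, not the network proposed in the paper.

```python
# Alternating data-fidelity / prior-regularization iterations for SR.
import torch
import torch.nn.functional as F

def iterative_sr(y, A, At, denoiser, steps=10, step_size=0.5):
    x = At(y)                                     # initial estimate from the LR input
    for _ in range(steps):
        # data-fidelity subproblem: gradient step on ||A x - y||^2
        x = x - step_size * At(A(x) - y)
        # prior-regularization subproblem: project onto the (learned) image prior
        x = denoiser(x)
    return x

scale = 2
A = lambda x: F.avg_pool2d(x, scale)              # toy downsampling operator
At = lambda y: F.interpolate(y, scale_factor=scale)   # toy upsampling adjoint
denoiser = lambda x: torch.clamp(x, 0.0, 1.0)     # placeholder for the learned prior
y = torch.rand(1, 1, 64, 64)                      # low-resolution fiber-bundle image
x_hr = iterative_sr(y, A, At, denoiser)
print(x_hr.shape)                                 # torch.Size([1, 1, 128, 128])
```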
Funding: Supported by grants from the National Natural Science Foundation of China (Grant Nos. 52109163 and 51979188).
Abstract: Social infrastructures such as dams are likely to be exposed to a high risk of terrorist and military attacks, leading to increasing attention to their vulnerability and the catastrophic consequences of such events. This paper develops advanced deep learning approaches for structural dynamic response prediction and dam health diagnosis. First, improved long short-term memory (LSTM) networks are proposed for data-driven structural dynamic response analysis, with data generated by a single-degree-of-freedom (SDOF) model and finite element numerical simulation, owing to the unavailability of abundant practical structural response data for concrete gravity dams under blast events. Three kinds of LSTM-based models are discussed for various cases of noise-contaminated signals, and the results prove that LSTM-based models have the potential for quick structural response estimation under blast loads. Furthermore, damage indicators (i.e., peak vibration velocity and dominant frequency) are extracted from the predicted velocity histories, and their relationship with the dam damage status from the numerical simulation is established. This study provides a deep learning-based structural health monitoring (SHM) framework for quick assessment of dams subjected to underwater explosions through blast-induced monitoring data.
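As an illustration of the sequence-to-sequence regression described above, here is a minimal sketch, under assumed dimensions, of an LSTM that maps a load time history to the structural velocity response; it is not the improved LSTM variants studied in the paper.

```python
# LSTM mapping a blast-load history to a velocity response history.
import torch
import torch.nn as nn

class ResponseLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, load):                      # load: (batch, time, 1)
        h, _ = self.lstm(load)
        return self.head(h)                       # velocity history: (batch, time, 1)

model = ResponseLSTM()
load = torch.randn(16, 500, 1)                    # simulated blast-load histories (placeholder)
velocity = torch.randn(16, 500, 1)                # SDOF / FE reference responses (placeholder)
loss = nn.MSELoss()(model(load), velocity)
loss.backward()
```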
Funding: Supported by the National Key R&D Program of China (2018YFB1802004) and the 111 Project (B08038).
Abstract: Channel estimation has been considered a key issue in millimeter-wave (mmWave) massive multi-input multi-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL)-based fast channel estimation method for mmWave massive MIMO systems. The proposed method can directly and effectively estimate channel state information (CSI) from received data without performing pilot-based estimation in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN)-based channel estimation network for the case of dimensional mismatch between input and output data, subsequently denoted as the channel (H) neural network (HNN). It can quickly estimate the channel information by learning the inherent characteristics of the received data and the relationship between the received data and the channel, while the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN can achieve better channel estimation accuracy than existing schemes.
Funding: Supported by the Aeronautical Science Foundation (2017ZC53033).
Abstract: Unmanned aerial vehicle (UAV) swarm technology has been one of the research hotspots in recent years. With the continuous improvement of UAV autonomous intelligence, swarm technology will become one of the main trends in UAV development in the future. This paper studies the behavior decision-making process of the UAV swarm rendezvous task based on the double deep Q network (DDQN) algorithm. We design a guided reward function to effectively solve the convergence problem caused by sparse returns in deep reinforcement learning (DRL) for long-duration tasks. We also propose the concept of a temporary storage area, which optimizes the experience replay unit of the traditional DDQN algorithm, improves the convergence speed, and speeds up the training process. Different from a traditional task environment, this paper establishes a continuous state-space task environment model to improve the verification process of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm in different task scenarios are trained. The experimental results validate that the DDQN algorithm is efficient in training the UAV swarm to complete the given collaborative tasks while meeting the requirements of the UAV swarm for centralization and autonomy, and in improving the intelligence of UAV swarm collaborative task execution. The simulation results show that, after training, the proposed UAV swarm can carry out the rendezvous task well, and the success rate of the mission reaches 90%.
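For reference, the sketch below shows the double-DQN target computation that the training loop builds on, in which the online network selects the next action and the target network evaluates it; the guided reward and temporary storage area proposed in the paper are not modeled.

```python
# Double-DQN target: online net selects the action, target net evaluates it.
import torch
import torch.nn as nn

def ddqn_target(reward, next_state, done, online, target, gamma=0.99):
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)    # action selection
        next_q = target(next_state).gather(1, next_action).squeeze(1)   # action evaluation
        return reward + gamma * (1.0 - done) * next_q

state_dim, n_actions = 8, 4                        # assumed UAV state/action sizes
online = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target.load_state_dict(online.state_dict())

batch = 32
y = ddqn_target(torch.rand(batch), torch.randn(batch, state_dim),
                torch.zeros(batch), online, target)
print(y.shape)                                     # torch.Size([32])
```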
Funding: Supported by the National Natural Science Foundation (61601491), the Natural Science Foundation of Hubei Province (2018CFC865), and the China Postdoctoral Science Foundation Funded Project (2016T45686).
Abstract: To solve the path following control problem for unmanned surface vehicles (USVs), a control method based on deep reinforcement learning (DRL) with long short-term memory (LSTM) networks is proposed. A distributed proximal policy optimization (DPPO) algorithm, which is a modified actor-critic-based type of reinforcement learning algorithm, is adapted to improve the controller performance in repeated trials. The LSTM network structure is introduced to address the strong temporal correlation in the USV control problem. In addition, a specially designed path dataset, including straight and curved paths, is established to simulate various sailing scenarios so that the reinforcement learning controller can obtain as much handling experience as possible. Extensive numerical simulation results demonstrate that the proposed method achieves better control performance in missions involving complex maneuvers than controllers trained with limited scenarios, and it can potentially be applied in practice.
Funding: Supported by the National Natural Science Foundation of China (61673381), the National Key R&D Program of China (2018AAA0103103), and the Science and Technology Development Fund (0024/2018/A1).
Abstract: An obstacle perception system for intelligent vehicles is proposed. The proposed system combines the stereo vision technique with a deep learning network model and is applied to obstacle perception tasks in complex environments. In this paper, we provide a complete system design, which includes the hardware parameters, software framework, algorithm principles, and optimization methods. In addition, special experiments are designed to demonstrate that the performance of the proposed system meets the requirements of actual applications. The experimental results show that the proposed system is valid for both standard and non-standard obstacles, and is suitable for different weather and lighting conditions in complex environments. This demonstrates that the proposed system is flexible and robust for intelligent vehicles.
Funding: Project (61702063) supported by the National Natural Science Foundation of China.
Abstract: With the rise and continuous development of machine learning, especially deep learning, research in the field of visual question answering has made significant progress and carries important theoretical significance and practical application value. Therefore, it is necessary to summarize the current research and provide a reference for researchers in this field. This article conducts a detailed and in-depth analysis and summary of relevant research and typical methods in the visual question answering field. First, relevant background knowledge about visual question answering (VQA) is introduced. Second, the issues and challenges of visual question answering are discussed, along with some promising discussion of particular methodologies. Third, the key sub-problems affecting visual question answering are summarized and analyzed. Then, the commonly used datasets and evaluation indicators are summarized. Next, in view of the popular algorithms and models in VQA research, a comparison of these algorithms and models is summarized and listed. Finally, the future development trends and conclusions of visual question answering are discussed.
Abstract: Objective: To observe the value of a deep learning echocardiographic intelligent model for evaluating left ventricular (LV) regional wall motion abnormalities (RWMA). Methods: Two-dimensional echocardiograms in apical two-chamber, three-chamber, and four-chamber views were obtained prospectively from 205 patients with coronary heart disease. The model for evaluating LV regional contractile function was constructed using a five-fold cross-validation method to automatically identify the presence or absence of RWMA, and the performance of this model was assessed taking manual interpretation of RWMA as the reference standard. Results: Among the 205 patients, RWMA was detected in a total of 650 segments in 83 cases. The LV myocardial segmentation model demonstrated good efficacy for delineation of the LV myocardium, accurately segmenting it in the apical two-chamber, three-chamber, and four-chamber views, with average Dice similarity coefficients of 0.85, 0.82, and 0.88, respectively. The mean area under the curve (AUC) of the RWMA identification model was 0.843±0.071, with a sensitivity of (64.19±14.85)%, a specificity of (89.44±7.31)%, and an accuracy of (85.22±4.37)%. Conclusion: The deep learning echocardiographic intelligent model could be used to automatically evaluate LV regional contractile function, thereby rapidly and accurately identifying RWMA.
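The Dice similarity coefficient reported above can be computed as in the following minimal sketch; the random masks are placeholders for the model and manual segmentations.

```python
# Dice similarity coefficient between a predicted and a reference mask.
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-8):
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

pred = np.random.rand(256, 256) > 0.5        # placeholder model segmentation
gt = np.random.rand(256, 256) > 0.5          # placeholder manual reference
print(f"Dice = {dice_coefficient(pred, gt):.3f}")
```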