The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through the free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, where a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, and the horizontal polarization component of the input distorted beam is adopted as the feature for classification through the DDNN. Numerical simulations and experimental results demonstrate that the proposed scheme achieves high accuracy in classification tasks. The energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy remains above 95% for various turbulence strengths. The scheme also converges faster and achieves better accuracy than one based on a convolutional neural network.
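The energy-distribution-percentage figure quoted above can be illustrated with a short sketch. This is not the paper's code: the `energy_distribution_percentage` helper, the detector-region layout, and the toy output plane are all hypothetical, showing only how the fraction of output-plane energy landing on the correct detector region would be computed.

```python
import numpy as np

def energy_distribution_percentage(intensity, regions, target_idx):
    """Fraction of total detector energy that lands in the target region.

    intensity : 2-D array of output-plane intensities
    regions   : list of (row_slice, col_slice) detector windows
    target_idx: index of the region assigned to the true class
    """
    energies = [intensity[r, c].sum() for r, c in regions]
    total = sum(energies)
    return energies[target_idx] / total if total > 0 else 0.0

# Toy output plane: almost all energy focused on detector 0.
plane = np.zeros((8, 8))
plane[0:4, 0:4] = 1.0      # detector 0 window
plane[4:8, 4:8] = 0.04     # weak leakage into detector 1
regions = [(slice(0, 4), slice(0, 4)), (slice(4, 8), slice(4, 8))]
edp = energy_distribution_percentage(plane, regions, 0)
```

A well-trained DDNN would concentrate the output field so that `edp` stays high even for distorted inputs.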
Deep neural networks (DNNs) are widely used in image recognition, image classification, and other fields. However, as model sizes increase, DNN hardware accelerators face the challenge of higher area overhead and energy consumption. In recent years, stochastic computing (SC) has been considered a way to realize deep neural networks while reducing hardware consumption. A probabilistic compensation algorithm is proposed to solve the accuracy problem of stochastic computation, and a fully parallel neural network accelerator based on a deterministic method is designed. Software simulation results show that the accuracy of the probabilistic compensation algorithm on the CIFAR-10 dataset is 95.32%, which is 14.98% higher than that of the traditional SC algorithm. The accuracy of the deterministic algorithm on the CIFAR-10 dataset is 95.06%, which is 14.72% higher than that of the traditional SC algorithm. Very Large Scale Integration (VLSI) hardware test results show that the normalized energy efficiency of the fully parallel neural network accelerator based on the deterministic method is improved by 31% compared with a circuit based on binary computing.
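The classical stochastic-computing primitive that this line of work builds on can be sketched in a few lines. This is a minimal illustration of bipolar SC multiplication (a value in [-1, 1] is encoded as a bitstream, and an XNOR gate multiplies two independent streams), not the paper's compensation algorithm or deterministic design; the helper names are made up.

```python
import random

def encode(x, n, rng):
    """Bipolar SC encoding: each bit is 1 with probability (x + 1) / 2."""
    p = (x + 1) / 2
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(bits):
    """Recover the value from the fraction of ones in the stream."""
    return 2 * sum(bits) / len(bits) - 1

def sc_multiply(sa, sb):
    """In bipolar format, the XNOR of two independent streams multiplies their values."""
    return [1 - (a ^ b) for a, b in zip(sa, sb)]

rng = random.Random(0)
n = 200_000                      # long streams reduce the random encoding error
a, b = 0.5, -0.4
prod = decode(sc_multiply(encode(a, n, rng), encode(b, n, rng)))
```

The residual error of `prod` relative to `a * b` shrinks only as the stream length grows, which is exactly the accuracy problem that compensation and deterministic bitstream methods target.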
Optical neural networks have significant advantages in terms of power consumption, parallelism, and computing speed, and have attracted extensive attention in both the academic and engineering communities. They are considered among the powerful tools for advancing image processing and object recognition. However, existing optical system architectures cannot be reconfigured to realize multi-functional artificial intelligence systems. To address this issue, we propose pluggable diffractive neural networks (P-DNN), a general paradigm based on cascaded metasurfaces, which can be applied to recognize various tasks by switching internal plug-ins. As a proof of principle, the recognition of six types of handwritten digits and six types of fashion items is numerically simulated and experimentally demonstrated at near-infrared wavelengths. Encouragingly, the proposed paradigm not only improves the flexibility of optical neural networks but also paves a new route toward high-speed, low-power, and versatile artificial intelligence systems.
To reduce CO_(2) emissions in response to global climate change, shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes. However, most current CO_(2) sequestration models do not adequately consider multiple transport mechanisms. Moreover, the evaluation of CO_(2) storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making. In this paper, an integrated model involving gas diffusion, adsorption, dissolution, slip flow, and Darcy flow is proposed to accurately characterize CO_(2) storage in depleted shale reservoirs, supporting the establishment of a training database. On this basis, a hybrid physics-informed data-driven neural network (HPDNN) is developed as a deep learning surrogate for prediction and inversion. By incorporating multiple sources of scientific knowledge, the HPDNN can be configured with limited simulation resources, significantly accelerating the forward and inversion processes. Furthermore, the HPDNN can more intelligently predict injection performance, precisely perform reservoir parameter inversion, and reasonably evaluate the CO_(2) storage capacity under complicated scenarios. Validation and test results demonstrate that the HPDNN ensures high accuracy and strong robustness across an extensive applicability range when dealing with field data containing multiple noise sources. This study has tremendous potential to replace traditional modeling tools for prediction and decision-making in CO_(2) storage projects in depleted shale reservoirs.
Icing is an important factor threatening aircraft flight safety. According to the requirements of airworthiness regulations, aircraft icing safety assessment needs to be carried out based on the ice shapes formed under different icing conditions. Due to the complexity of the icing process, rapid assessment of ice shape remains an important challenge. In this paper, an efficient prediction model of aircraft icing is established based on the deep belief network (DBN) and the stacked auto-encoder (SAE), both of which are deep neural networks. The detailed network structures are designed, and the networks are then trained on samples obtained from icing numerical computations. The model is then applied to ice shape evaluation for the NACA0012 airfoil. The results show that the model can accurately capture the nonlinear behavior of aircraft icing and thus produce excellent ice shape predictions. The model provides an important tool for aircraft icing analysis.
The limited labeled sample data in the field of advanced security threat detection seriously restricts the effective development of research work. Learning sample labels from labeled and unlabeled data has received much research attention, and various universal labeling methods have been proposed. However, the task of labeling malicious communication samples associated with advanced threats faces two practical challenges: the difficulty of extracting effective features in advance and the complexity of the actual sample types. To address these problems, we propose a sample labeling method for malicious communication based on a semi-supervised deep neural network. This method supports continuous learning and optimization of the feature representation while labeling samples, and can handle uncertain samples that fall outside the sample types of interest. According to the experimental results, our proposed deep neural network can automatically learn effective feature representations, and the validity of the learned features is close to or even higher than that of features extracted based on expert knowledge. Furthermore, our proposed method achieves a labeling accuracy of 97.64%-98.50%, which is more accurate than the train-then-detect, kNN, and LPA methods under any labeled-sample proportion condition. The problem of insufficient labeled samples arises in many network attack detection scenarios, and our work can serve as a reference for sample labeling tasks in similar real-world scenarios.
With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can well extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of the different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, in which local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms the state-of-the-art models.
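The ARMA graph filter mentioned above is commonly realized as a linear recursion over the graph. The sketch below shows a first-order (ARMA_1-style) recursion on a toy 3-node graph converging to its closed-form fixed point; the coefficients, the tiny graph, and the variable names are illustrative, not taken from the DHMG implementation.

```python
import numpy as np

# Tiny 3-node path graph with symmetrically normalized adjacency.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
d = A.sum(axis=1)
M = A / np.sqrt(np.outer(d, d))

X0 = np.array([[1.0], [0.0], [0.0]])   # one-hot node feature
a, b = 0.5, 0.5                        # recursion coefficients (|a| * spectral radius < 1)

# ARMA_1-style recursion: X <- a*M@X + b*X0, whose fixed point
# is (I - a*M)^{-1} @ (b*X0) -- a rational (ARMA) graph filter.
X = X0.copy()
for _ in range(100):
    X = a * (M @ X) + b * X0
closed_form = np.linalg.solve(np.eye(3) - a * M, b * X0)
err = float(np.abs(X - closed_form).max())
```

Because the filter response is rational rather than polynomial, such recursions can attenuate graph noise more sharply than plain spectral (polynomial) filters, which is the property the second DHMG branch exploits.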
Automatic modulation classification (AMC) aims at identifying the modulation of received signals, which is a significant approach to identifying targets in military and civil applications. In this paper, a novel data-driven framework named the convolutional and transformer-based deep neural network (CTDNN) is proposed to improve classification performance. The CTDNN can be divided into four modules, i.e., a convolutional neural network (CNN) backbone, a transition module, a transformer module, and a final classifier. In the CNN backbone, a wide and deep convolution structure is designed, which consists of 1×15 convolution kernels and intensive cross-layer connections instead of traditional 1×3 kernels and sequential connections. In the transition module, a 1×1 convolution layer is utilized to compress the channels of the preceding multi-scale CNN features. In the transformer module, three self-attention layers are designed for extracting global features and generating the classification vector. In the classifier, the final decision is made based on the maximum a posteriori probability. Extensive simulations are conducted, and the results show that the proposed CTDNN achieves superior classification performance compared with traditional deep models.
This paper presents an innovative data-integration method that uses an iterative-learning approach: a deep neural network (DNN) coupled with a stacked autoencoder (SAE) to solve issues encountered in many-objective history matching. The proposed method consists of a DNN-based inverse model with SAE-encoded static data, and iterative updates of the supervised-learning data based on distance-based clustering schemes. The DNN functions as an inverse model operating on encoded, flattened data, while the SAE, as a pre-trained neural network, successfully reduces dimensionality and reliably reconstructs geomodels. The iterative-learning method improves the training data for the DNN, with the error reduction visible at each iteration step. The proposed workflow achieves a small mean absolute percentage error, below 4% for all objective functions, while a typical multi-objective evolutionary algorithm fails to significantly reduce the initial population uncertainty. Iterative learning-based many-objective history matching estimates the trends in water cuts that are not reliably included in dynamic-data matching. This confirms that the proposed workflow constructs more plausible geomodels. The workflow is a reliable alternative to the less-convergent Pareto-based multi-objective evolutionary algorithm in the presence of geological uncertainty and varying objective functions.
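The headline metric here, mean absolute percentage error (MAPE), is standard and easy to state precisely. The sketch below is a generic definition with made-up numbers, not the paper's objective-function values.

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent: 100/n * sum(|(t - p) / t|)."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical observed vs. predicted values of one objective function.
observed  = [100.0, 250.0, 80.0]
predicted = [ 98.0, 245.0, 82.0]
err = mape(observed, predicted)
```

An error below 4%, as reported for all objective functions, means predictions deviate from observations by less than 4% on average in relative terms.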
Blind image quality assessment (BIQA) is of fundamental importance in the low-level computer vision community. Increasing interest has been drawn to exploiting deep neural networks for BIQA. Despite the notable success achieved, there is a broad consensus that training deep convolutional neural networks (DCNNs) heavily relies on massive annotated data. Unfortunately, BIQA is typically a small-sample problem, which severely restricts the generalization ability of BIQA models. In order to improve the accuracy and generalization ability of BIQA metrics, this work proposes a totally opinion-unaware BIQA approach in which no subjective annotations are involved in the training stage. Multiple full-reference image quality assessment (FR-IQA) metrics are employed to label distorted images as a substitute for subjective quality annotation. A deep neural network (DNN) is trained to blindly predict the multiple FR-IQA scores in the absence of the corresponding pristine image. In the end, a self-supervised FR-IQA score aggregator, implemented by an adversarial auto-encoder, pools the predictions of the multiple FR-IQA scores into the final quality prediction score. Even though no subjective scores are involved in the training stage, experimental results indicate that our proposed full-reference-induced BIQA framework is as competitive as state-of-the-art BIQA metrics.
In this study, we developed a system based on deep space-time neural networks for gesture recognition. When users change or the number of gesture categories increases, the accuracy of gesture recognition decreases considerably because most gesture recognition systems cannot accommodate both user differentiation and gesture diversity. To overcome the limitations of existing methods, we designed a one-dimensional parallel long short-term memory-fully convolutional network (LSTM-FCN) model to extract gesture features of different dimensions. The LSTM can learn complex temporal dynamics, whereas the FCN can predict gestures efficiently by extracting deep, abstract features of gestures in the spatial dimension. In the experiment, 50 types of gestures from five users were collected and evaluated. The experimental results demonstrate the effectiveness of this system and its robustness to various gestures and individual changes. Statistical analysis of the recognition results indicated that an average accuracy of approximately 98.9% was achieved.
Accurate diagnosis of fracture geometry and conductivity is a great challenge due to the complex morphology of volumetric fracture networks. In this study, a DNN (deep neural network) model was proposed to predict fracture parameters for the evaluation of fracturing effects. Field experience and the law of fracture volume conservation were incorporated as physical constraints to improve the prediction accuracy given the small amount of data. A combined neural network was adopted to input both static geological and dynamic fracturing data. The structure of the DNN was optimized, and the model was validated through k-fold cross-validation. Results indicate that this DNN model is capable of predicting the fracture parameters accurately, with a low relative error of under 10% and good generalization ability. The adoption of the combined neural network, physical constraints, and k-fold cross-validation improves model performance. Specifically, the root-mean-square error (RMSE) of the model decreases by 71.9% and 56%, respectively, with the combined neural network as the input model and with the consideration of physical constraints. The mean relative error (MRE) of the fracture parameters is reduced by 75% because k-fold cross-validation improves the rationality of the dataset division. The model based on the DNN with physical constraints proposed in this study provides a foundation for the optimization of fracturing design and improves the efficiency of fracture diagnosis in tight oil and gas reservoirs.
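The k-fold cross-validation used to validate the model follows a standard recipe: partition the samples into k folds, and train on k-1 folds while validating on the held-out fold. A minimal generic sketch (the fold layout and function name are not from the paper):

```python
def k_fold_indices(n_samples, k):
    """Split range(n_samples) into k contiguous folds; yield (train, valid) index lists."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        valid = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, valid
        start += size

folds = list(k_fold_indices(10, 5))   # 10 samples, 5 folds of 2
```

Every sample appears in exactly one validation fold, so each data point contributes to the error estimate, which is what makes the dataset division "rational" under small-sample conditions.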
The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of fifth-generation (5G) mobile networks. The cloud radio access network (CRAN) is a prominent framework in 5G mobile networks that meets these requirements by deploying low-cost and intelligent distributed antennas known as remote radio heads (RRHs). However, achieving optimal resource allocation (RA) in CRAN using traditional approaches is still challenging due to the complex structure. In this paper, we introduce a convolutional neural network-based deep Q-network (CNN-DQN) to balance energy consumption and guarantee the user quality of service (QoS) demand in the downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build a 3-layer CNN to capture the environment features as the input state space. We then use the DQN to turn the RRHs on or off dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraints and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the Nature DQN and the traditional approach.
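The on/off decision logic above is an MDP solved by Q-learning; a DQN replaces the Q-table with a CNN. The sketch below uses a tabular Q-learning stand-in on a toy two-state RRH problem, with made-up rewards (QoS met only when the RRH is on, small energy penalty otherwise); none of the states, actions, or numbers come from the paper.

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Two states (RRH off=0 / on=1), two actions (keep=0 / toggle=1).
Q = [[0.0, 0.0], [0.0, 0.0]]
rng = random.Random(1)
for _ in range(2000):
    s = rng.randrange(2)
    a = rng.randrange(2)
    s_next = s ^ a                      # toggling flips the RRH state
    r = 1.0 if s_next == 1 else -0.1    # toy reward: QoS met only when the RRH is on
    q_update(Q, s, a, r, s_next)

best_action_when_off = max(range(2), key=lambda a: Q[0][a])
```

In this toy setting the learned policy switches the RRH on when it is off, mirroring how the DQN would activate RRHs when QoS demand justifies the energy cost.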
Accurate and fast prediction of aerodynamic noise has always been a research hotspot in fluid mechanics and aeroacoustics. Conventional prediction methods based on numerical simulation often demand huge computational resources and find it difficult to balance accuracy against efficiency. Here, we present a data-driven deep neural network (DNN) method to realize fast aerodynamic noise prediction while maintaining accuracy. The proposed deep learning method can predict the spatial distributions of aerodynamic noise under different working conditions. Based on the large eddy simulation turbulence model and the Ffowcs Williams-Hawkings acoustic analogy, a dataset composed of 1216 samples is established. Following common deep learning practice, a DNN framework is proposed to map the relationship between the spatial coordinates and inlet velocity and the overall sound pressure level. The root-mean-square errors of prediction are below 0.82 dB on the test dataset, and the directivity of the aerodynamic noise predicted by the DNN framework is basically consistent with the numerical simulation. This work paves a novel way for fast prediction of aerodynamic noise with high accuracy and has application potential in acoustic field prediction.
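The 0.82 dB figure is a root-mean-square error (RMSE) over sound pressure levels. A generic definition with invented SPL values (the numbers below are illustrative, not from the 1216-sample dataset):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between reference and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

spl_sim  = [92.0, 95.5, 101.2, 98.4]   # overall SPL from simulation, dB (made-up)
spl_pred = [92.5, 95.0, 100.8, 98.9]   # DNN predictions, dB (made-up)
err_db = rmse(spl_sim, spl_pred)
```

Because SPL is logarithmic, an RMSE below 1 dB corresponds to a very tight match on the underlying acoustic pressures.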
Orbital angular momentum (OAM) has the characteristic of mutual orthogonality between modes, and has been applied to underwater wireless optical communication (UWOC) systems to increase channel capacity. In this work, we propose a diffractive deep neural network (DDNN) based OAM mode recognition scheme, where the DDNN is trained to capture the features of the intensity distributions of the OAM modes and output the corresponding azimuthal and radial indices. The results show that the proposed scheme can recognize the azimuthal and radial indices of the OAM modes accurately and quickly. In addition, the proposed scheme can resist weak oceanic turbulence (OT) and exhibits an excellent ability to recognize OAM modes in strong OT environments. The DDNN-based OAM mode recognition scheme has potential applications in UWOC systems.
Non-blind audio bandwidth extension is a standard technique in contemporary audio codecs for efficiently coding audio signals at low bitrates. In most existing methods, the high-frequency signal is generated by duplicating the corresponding low frequencies together with some high-frequency parameters. However, the perceptual quality of coding degrades significantly if the correlation between high and low frequencies becomes weak. In this paper, we quantitatively analyse this correlation by computing mutual information. The analysis shows that the correlation also exists in the low-frequency signal of context-dependent frames, not only the current frame. In order to improve the perceptual quality of coding, we propose a novel method of coarse high-frequency spectrum generation to improve on the conventional replication method. In the proposed method, the coarse high-frequency spectra are generated by a nonlinear mapping model using a deep recurrent neural network. Experiments confirm that the proposed method performs better than the reference methods.
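One standard way to quantify the low/high-frequency correlation mentioned above is a histogram-based mutual information estimate. The sketch below is such a generic estimator on synthetic Gaussian data; the 16-bin histogram, the toy signals, and the function name are assumptions, not the paper's measurement setup.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits: sum p(x,y) log2(p(x,y) / (p(x)p(y)))."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
low = rng.normal(size=20_000)                       # stand-in low-band feature
high_corr = low + 0.1 * rng.normal(size=20_000)     # band strongly tied to the low band
high_indep = rng.normal(size=20_000)                # unrelated band
mi_corr = mutual_information(low, high_corr)
mi_indep = mutual_information(low, high_indep)
```

A high MI between bands justifies replication-style extension; when MI drops toward zero, a learned nonlinear mapping (as proposed here) has more to offer.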
The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities. In addition, the high mobility of vehicles renders the caching environment more complicated. Therefore, performing efficient cooperative caching becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows the distributed cache nodes to cooperate. We then devise a communication link and content caching model to facilitate timely content delivery. Aiming at minimizing transmission delay and cache cost, an optimization problem is formulated. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives observations and a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach can enhance the hit ratio while reducing transmission delay and cache cost.
Because of computational complexity, the deep neural network (DNN) in embedded devices is usually trained on high-performance computers or graphics processing units (GPUs), and only the inference phase is implemented in the embedded devices. Data processed by embedded devices, such as smartphones and wearables, are usually personalized, so a DNN model trained on public datasets may have poor accuracy when inferring on personalized data. As a result, retraining the DNN with personalized data collected locally on embedded devices is necessary. Nevertheless, retraining needs labeled datasets, while the data collected locally are unlabeled; how to retrain the DNN with unlabeled data is thus a problem to be solved. This paper demonstrates the necessity of retraining a DNN model with personalized data collected on embedded devices after it has been trained on public datasets. It also proposes a label generation method by which a fake label is generated for each unlabeled training case according to users' feedback, so that retraining can be performed with the unlabeled data collected on embedded devices. The experimental results show that our fake label generation method has both good training effects and wide applicability. Advanced neural networks can be trained with unlabeled data from embedded devices, and the individualized accuracy of the DNN model can be gradually improved with personal use.
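A fake-label scheme of the kind described above can be sketched as follows. The accept/correct/no-signal feedback protocol and all names here are hypothetical illustrations of the idea, not the paper's actual method, which derives labels from users' feedback in its own way.

```python
def generate_fake_labels(predictions, feedback):
    """Assign each unlabeled case a fake label: the model's own prediction,
    unless the user's feedback supplies a correction.

    predictions: the model's predicted class per case
    feedback:    True (user accepted), None (no signal), or a corrected class label
    """
    labels = []
    for pred, fb in zip(predictions, feedback):
        if fb is None or fb is True:
            labels.append(pred)   # no correction: trust the model's prediction
        else:
            labels.append(fb)     # user supplied a correction: use it as the fake label
    return labels

preds = ["cat", "dog", "cat"]
fb = [True, "cat", None]          # second case corrected by the user
fake = generate_fake_labels(preds, fb)
```

The resulting fake labels can then drive an ordinary supervised retraining loop on-device.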
A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with an application to message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smartphone applications. Experimental results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance at minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.
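The six-step strategy above is a pipeline in which each stage feeds the next. The sketch below chains trivial rule-based stand-ins (with NER and POS tagging omitted) purely to show the data flow; in the actual solution every stage is a trained neural model, and all function names here are invented.

```python
def normalize(text):
    """Step 1 stand-in: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def segment(text):
    """Segmentation stand-in: whitespace split (real Chinese segmentation is model-based)."""
    return text.split()

def classify_theme(tokens):
    """Theme-classification stand-in: a single keyword rule."""
    return "meeting" if "meeting" in tokens else "other"

def fill_slots(tokens, theme):
    """Slot-filling stand-in: pull a time-like token into a slot."""
    times = [t for t in tokens if t.endswith("am") or t.endswith("pm")]
    return {"theme": theme, "time": times[0] if times else None}

def analyze(message):
    """Chain the stages, mirroring the six-step strategy in spirit."""
    text = normalize(message)
    tokens = segment(text)
    theme = classify_theme(tokens)
    return fill_slots(tokens, theme)

result = analyze("  Project MEETING at 3pm  ")
```

The pipeline shape is the point: each stage's output is exactly the next stage's input, which is what lets each neural model be optimized independently.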
To obtain excellent regression results under the condition of small-sample hyperspectral data, a deep neural network with simulated annealing (SA-DNN) is proposed. According to the characteristics of the data, an attention mechanism is applied to make the network pay more attention to effective features, thereby improving operating efficiency. By introducing an improved activation function, data correlation is reduced while the operation rate is increased, and the problem of over-fitting is alleviated. By introducing simulated annealing, the network chooses the optimal learning rate by itself, which avoids falling into local optima to the greatest extent. To evaluate the performance of the SA-DNN, the coefficient of determination (R^(2)), root-mean-square error (RMSE), and other metrics were used to evaluate the model. The results show that the performance of the SA-DNN is significantly better than that of other traditional methods.
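Simulated annealing for learning-rate selection can be sketched generically: propose a perturbed learning rate, accept worse candidates with probability exp(-delta/T), and cool T over time. The toy validation-loss surface, the cooling schedule, and the `sa_select_lr` helper below are all assumptions for illustration, not the SA-DNN's actual procedure.

```python
import math
import random

def sa_select_lr(loss_fn, lr0, steps=200, t0=1.0, cooling=0.97, rng=None):
    """Simulated-annealing search over the learning rate (in log space).

    Each step proposes a multiplicatively perturbed lr; worse candidates are
    accepted with probability exp(-delta / T), and T cools geometrically, so
    early exploration gives way to late exploitation.
    """
    rng = rng or random.Random(0)
    lr, loss, t = lr0, loss_fn(lr0), t0
    for _ in range(steps):
        cand = max(1e-6, lr * math.exp(rng.uniform(-0.5, 0.5)))
        delta = loss_fn(cand) - loss
        if delta < 0 or rng.random() < math.exp(-delta / t):
            lr, loss = cand, loss_fn(cand)
        t *= cooling
    return lr

# Stand-in validation loss with a single optimum at lr = 0.01 (not real training).
toy_loss = lambda lr: (math.log10(lr) + 2.0) ** 2
best_lr = sa_select_lr(toy_loss, lr0=1.0)
```

The early high-temperature phase is what lets the search escape local optima, which is the property the SA-DNN relies on.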
Funding (VVB classification paper): Project supported by the National Natural Science Foundation of China (Grant Nos. 62375140 and 62001249) and the Open Research Fund of the National Laboratory of Solid State Microstructures (Grant No. M36055).
Funding (P-DNN paper): The authors acknowledge funding provided by the National Key R&D Program of China (2021YFA1401200), the Beijing Outstanding Young Scientist Program (BJJWZYJH01201910007022), the National Natural Science Foundation of China (Nos. U21A20140, 92050117, and 62005017), and the Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (No. Z211100004821009). This work was supported by the Synergetic Extreme Condition User Facility (SECUF).
Abstract: Optical neural networks have significant advantages in terms of power consumption, parallelism, and computing speed, and have attracted extensive attention in both the academic and engineering communities. They are considered one of the powerful tools for advancing image processing and object recognition. However, existing optical system architectures cannot be reconfigured to realize multi-functional artificial intelligence systems simultaneously. To push this issue forward, we propose pluggable diffractive neural networks (P-DNN), a general paradigm based on cascaded metasurfaces that can be applied to recognize various tasks by switching internal plug-ins. As a proof of principle, the recognition of six types of handwritten digits and six types of fashion items is numerically simulated and experimentally demonstrated in the near-infrared regime. Encouragingly, the proposed paradigm not only improves the flexibility of optical neural networks but also paves a new route toward high-speed, low-power, and versatile artificial intelligence systems.
Funding: This work is funded by the National Natural Science Foundation of China (Nos. 42202292, 42141011) and the Program for Jilin University (JLU) Science and Technology Innovative Research Team (No. 2019TD-35). The authors would also like to thank the reviewers and editors, whose critical comments were very helpful in preparing this article.
Abstract: To reduce CO2 emissions in response to global climate change, shale reservoirs could be ideal candidates for long-term carbon geo-sequestration involving multi-scale transport processes. However, most current CO2 sequestration models do not adequately consider multiple transport mechanisms. Moreover, the evaluation of CO2 storage processes usually involves laborious and time-consuming numerical simulations unsuitable for practical prediction and decision-making. In this paper, an integrated model involving gas diffusion, adsorption, dissolution, slip flow, and Darcy flow is proposed to accurately characterize CO2 storage in depleted shale reservoirs, supporting the establishment of a training database. On this basis, a hybrid physics-informed data-driven neural network (HPDNN) is developed as a deep learning surrogate for prediction and inversion. By incorporating multiple sources of scientific knowledge, the HPDNN can be configured with limited simulation resources, significantly accelerating the forward and inversion processes. Furthermore, the HPDNN can more intelligently predict injection performance, precisely perform reservoir parameter inversion, and reasonably evaluate the CO2 storage capacity under complicated scenarios. The validation and test results demonstrate that the HPDNN ensures high accuracy and strong robustness across an extensive applicability range when dealing with field data containing multiple noise sources. This study has tremendous potential to replace traditional modeling tools for prediction and decision-making in CO2 storage projects in depleted shale reservoirs.
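The "physics-informed" part of such a surrogate typically enters as an extra penalty term in the training loss, alongside the data-mismatch term. A generic sketch of that composite loss (the weights and the residual definition are placeholders, not the HPDNN's actual formulation):

```python
def hybrid_loss(pred, obs, physics_residual, w_data=1.0, w_phys=0.1):
    # data term: mean squared mismatch against simulated/observed values
    data = sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
    # physics term: penalize violations of a known constraint
    # (e.g. a mass-balance residual evaluated on the predictions)
    phys = sum(r ** 2 for r in physics_residual) / len(physics_residual)
    return w_data * data + w_phys * phys
```

During training, the gradient of the physics term steers the network toward predictions consistent with the governing equations even where training data are sparse, which is what lets the surrogate be configured with limited simulation resources.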
Funding: Supported in part by the National Natural Science Foundation of China (No. 51606213) and the National Major Science and Technology Projects (No. J2019-III-0010-0054).
Abstract: Icing is an important factor threatening aircraft flight safety. According to the requirements of airworthiness regulations, aircraft icing safety assessment must be carried out based on the ice shapes formed under different icing conditions. Due to the complexity of the icing process, rapid assessment of ice shape remains an important challenge. In this paper, an efficient prediction model of aircraft icing is established based on the deep belief network (DBN) and the stacked auto-encoder (SAE), both of which are deep neural networks. The detailed network structures are designed, and the networks are then trained on samples obtained by icing numerical computation. The model is then applied to ice shape evaluation of the NACA0012 airfoil. The results show that the model accurately captures the nonlinear behavior of aircraft icing and thus makes excellent ice shape predictions. The model provides an important tool for aircraft icing analysis.
Funding: Partially funded by the National Natural Science Foundation of China (Grant No. 61272447), the National Entrepreneurship & Innovation Demonstration Base of China (Grant No. C700011), and the Key Research & Development Project of Sichuan Province of China (Grant No. 2018G20100).
Abstract: The limited labeled sample data in the field of advanced security threat detection seriously restricts the effective development of research work. Learning sample labels from labeled and unlabeled data has received much research attention, and various universal labeling methods have been proposed. However, the task of labeling malicious communication samples targeted at advanced threats faces two practical challenges: the difficulty of extracting effective features in advance and the complexity of the actual sample types. To address these problems, we propose a sample labeling method for malicious communication based on a semi-supervised deep neural network. This method supports continuous learning and optimization of the feature representation while labeling samples, and can handle uncertain samples that fall outside the sample types of concern. According to the experimental results, our proposed deep neural network can automatically learn effective feature representations, and the validity of these features is close to or even higher than that of features extracted based on expert knowledge. Furthermore, our proposed method achieves a labeling accuracy of 97.64%–98.50%, which is more accurate than the train-then-detect, kNN, and LPA methods under any labeled-sample proportion. Insufficient labeled samples are a problem in many network attack detection scenarios, and our work can serve as a reference for sample labeling tasks in similar real-world scenarios.
Abstract: With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of various graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former extracts the spectral features of the nodes well, while the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of different graph filters. In addition, to address the oversmoothing problem, a dense network is proposed in which the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms the state-of-the-art models.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62171045 and 62201090, and in part by the National Key Research and Development Program of China under Grants 2020YFB1807602 and 2019YFB1804404.
Abstract: Automatic modulation classification (AMC) aims at identifying the modulation of received signals, which is a significant approach to identifying targets in military and civil applications. In this paper, a novel data-driven framework named the convolutional and transformer-based deep neural network (CTDNN) is proposed to improve classification performance. The CTDNN can be divided into four modules, i.e., a convolutional neural network (CNN) backbone, a transition module, a transformer module, and a final classifier. In the CNN backbone, a wide and deep convolution structure is designed, which consists of 1×15 convolution kernels and intensive cross-layer connections instead of traditional 1×3 kernels and sequential connections. In the transition module, a 1×1 convolution layer is utilized to compress the channels of the previous multi-scale CNN features. In the transformer module, three self-attention layers are designed for extracting global features and generating the classification vector. In the classifier, the final decision is made based on the maximum a posteriori probability. Extensive simulations are conducted, and the results show that the proposed CTDNN achieves superior classification performance compared with traditional deep models.
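The final-classifier step, picking the class with the maximum a posteriori probability, amounts to a softmax over the network's output logits followed by an argmax. A small sketch (the logit values are made up):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def map_decision(logits):
    # maximum a posteriori: pick the class with the largest posterior probability
    probs = softmax(logits)
    return max(range(len(probs)), key=probs.__getitem__)

# e.g. three modulation classes, class 1 has the largest logit
decision = map_decision([0.1, 2.0, -1.0])
```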
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) (2020R1F1A1073395) and the basic research project of the Korea Institute of Geoscience and Mineral Resources (KIGAM) (GP2021-011, GP2020-031, 21-3117), funded by the Ministry of Science and ICT, Korea.
Abstract: This paper presents an innovative data-integration method that uses iterative learning and a deep neural network (DNN) coupled with a stacked autoencoder (SAE) to solve issues encountered in many-objective history matching. The proposed method consists of a DNN-based inverse model with SAE-encoded static data, and iterative updates of the supervised-learning data based on distance-based clustering schemes. The DNN functions as an inverse model and produces encoded flattened data, while the SAE, as a pre-trained neural network, successfully reduces dimensionality and reliably reconstructs geomodels. The iterative-learning method improves the training data for the DNN, showing the error reduction achieved at each iteration step. The proposed workflow achieves a small mean absolute percentage error, below 4% for all objective functions, while a typical multi-objective evolutionary algorithm fails to significantly reduce the initial population uncertainty. Iterative-learning-based many-objective history matching estimates the trends in water cuts that are not reliably included in dynamic-data matching. This confirms that the proposed workflow constructs more plausible geo-models. The workflow is a reliable alternative to the less convergent Pareto-based multi-objective evolutionary algorithm in the presence of geological uncertainty and varying objective functions.
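The headline "mean absolute percentage error below 4%" metric is straightforward to reproduce; for reference, a minimal implementation (variable names are ours):

```python
def mape(actual, predicted):
    # mean absolute percentage error, in percent; assumes no zero actual values
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# e.g. two objective-function values predicted with 4% error each
error = mape([100.0, 200.0], [96.0, 208.0])  # 4.0
```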
Funding: Supported by the Public Welfare Technology Application Research Project of Zhejiang Province, China (No. LGF21F010001) and the Key Research and Development Program of Zhejiang Province, China (Grant Nos. 2019C01002 and 2021C03138).
Abstract: Blind image quality assessment (BIQA) is of fundamental importance in the low-level computer vision community. Increasing interest has been drawn to exploiting deep neural networks for BIQA. Despite the notable success achieved, there is a broad consensus that training deep convolutional neural networks (DCNNs) heavily relies on massive annotated data. Unfortunately, BIQA is typically a small-sample problem, which severely restricts its generalization ability. To improve the accuracy and generalization ability of BIQA metrics, this work proposes a totally opinion-unaware BIQA in which no subjective annotations are involved in the training stage. Multiple full-reference image quality assessment (FR-IQA) metrics are employed to label the distorted image as a substitute for subjective quality annotation. A deep neural network (DNN) is trained to blindly predict the multiple FR-IQA scores in the absence of the corresponding pristine image. In the end, a self-supervised FR-IQA score aggregator, implemented by an adversarial auto-encoder, pools the predictions of multiple FR-IQA scores into the final quality prediction score. Even though no subjective scores are involved in the training stage, experimental results indicate that our proposed full-reference-induced BIQA framework is as competitive as state-of-the-art BIQA metrics.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61461013, in part by the Natural Science Foundation of Guangxi Province under Grant 2018GXNSFAA281179, and in part by the Dean Project of the Guangxi Key Laboratory of Wireless Broadband Communication and Signal Processing under Grant GXKL06160103.
Abstract: In this study, we developed a system based on deep space–time neural networks for gesture recognition. When users change or the number of gesture categories increases, the accuracy of gesture recognition decreases considerably, because most gesture recognition systems cannot accommodate both user differentiation and gesture diversity. To overcome the limitations of existing methods, we designed a one-dimensional parallel long short-term memory–fully convolutional network (LSTM–FCN) model to extract gesture features of different dimensions. The LSTM can learn complex temporal dynamics, whereas the FCN can predict gestures efficiently by extracting deep, abstract gesture features in the spatial dimension. In the experiment, 50 types of gestures from five users were collected and evaluated. The experimental results demonstrate the effectiveness of the system and its robustness to various gestures and individual changes. Statistical analysis of the recognition results indicated that an average accuracy of approximately 98.9% was achieved.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52174044, 52004302), the Science Foundation of China University of Petroleum, Beijing (Nos. ZX20200134, 2462021YXZZ012), and the Strategic Cooperation Technology Projects of CNPC and CUPB (ZLZX 2020-01-07).
Abstract: Accurate diagnosis of fracture geometry and conductivity is a great challenge due to the complex morphology of the volumetric fracture network. In this study, a DNN (deep neural network) model was proposed to predict fracture parameters for the evaluation of fracturing effects. Field experience and the law of fracture volume conservation were incorporated as physical constraints to improve the prediction accuracy given the small amount of data. A combined neural network was adopted to input both static geological and dynamic fracturing data. The structure of the DNN was optimized, and the model was validated through k-fold cross-validation. Results indicate that this DNN model is capable of predicting the fracture parameters accurately, with a low relative error of under 10% and good generalization ability. The adoption of the combined neural network, physical constraints, and k-fold cross-validation improves the model performance. Specifically, the root-mean-square error (RMSE) of the model decreases by 71.9% and 56%, respectively, with the combined neural network as the input model and with the consideration of physical constraints. The mean relative error (MRE) of the fracture parameters is reduced by 75% because k-fold cross-validation improves the rationality of the dataset division. The model based on the DNN with physical constraints proposed in this study provides a foundation for the optimization of fracturing design and improves the efficiency of fracture diagnosis in tight oil and gas reservoirs.
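k-fold cross-validation, used above to validate the DNN, rotates which slice of the dataset serves as the validation set. A minimal index-splitting sketch (contiguous folds for simplicity; the paper does not specify its splitting scheme):

```python
def k_fold_indices(n, k):
    # split indices 0..n-1 into k near-equal validation folds;
    # each fold pairs with the remaining indices as its training set
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for s in sizes:
        val = list(range(start, start + s))
        train = [i for i in range(n) if i < start or i >= start + s]
        splits.append((val, train))
        start += s
    return splits
```

Every sample appears in exactly one validation fold, so the averaged validation error uses the whole dataset, which is what makes the split "rational" for small-sample problems like this one.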
Funding: Supported by Universiti Tunku Abdul Rahman (UTAR), Malaysia, under UTARRF (IPSR/RMC/UTARRF/2021-C1/T05).
Abstract: The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector toward the adoption of fifth-generation (5G) mobile networks. The cloud radio access network (CRAN) is a prominent framework in the 5G mobile network that meets the above requirements by deploying low-cost, intelligent, multiple distributed antennas known as remote radio heads (RRHs). However, achieving optimal resource allocation (RA) in CRAN using the traditional approach is still challenging due to its complex structure. In this paper, we introduce a convolutional neural network-based deep Q-network (CNN-DQN) to balance energy consumption and guarantee the user quality of service (QoS) demand in the downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build a 3-layer CNN to capture the environment features as the input state space. We then use the DQN to turn the RRHs on and off dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraints and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the nature DQN and the traditional approach.
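The trade-off the paper targets, meeting QoS demand with as few active RRHs as possible, can be illustrated by a simple greedy baseline before any learning is involved (capacities and demand below are toy numbers; the actual scheme learns the on/off policy with a CNN-DQN instead of using this heuristic):

```python
def min_active_rrhs(capacities, demand):
    # greedily switch on the highest-capacity RRHs until the QoS demand is met;
    # returns the active set, or None if even all RRHs together cannot serve it
    order = sorted(range(len(capacities)), key=lambda i: -capacities[i])
    active, served = [], 0.0
    for i in order:
        if served >= demand:
            break
        active.append(i)
        served += capacities[i]
    return active if served >= demand else None
```

A learned policy improves on this by anticipating demand changes over time rather than reacting to a single snapshot.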
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2017YFA0303700), the National Natural Science Foundation of China (Grant Nos. 12174190, 11634006, 12074286, and 81127901), the Innovation Special Zone of the National Defense Science and Technology, the High-Performance Computing Center of the Collaborative Innovation Center of Advanced Microstructures, and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Abstract: Accurate and fast prediction of aerodynamic noise has always been a research hotspot in fluid mechanics and aeroacoustics. Conventional prediction methods based on numerical simulation often demand huge computational resources and struggle to balance accuracy and efficiency. Here, we present a data-driven deep neural network (DNN) method that realizes fast aerodynamic noise prediction while maintaining accuracy. The proposed deep learning method can predict the spatial distribution of aerodynamic noise under different working conditions. Based on the large eddy simulation turbulence model and the Ffowcs Williams–Hawkings acoustic analogy theory, a dataset composed of 1216 samples is established. Following deep learning practice, a DNN framework is proposed to map the relationship between spatial coordinates, inlet velocity, and overall sound pressure level. The root-mean-square errors of prediction are below 0.82 dB on the test dataset, and the directivity of the aerodynamic noise predicted by the DNN framework is basically consistent with the numerical simulation. This work paves a novel way for fast prediction of aerodynamic noise with high accuracy and has application potential in acoustic field prediction.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61871234 and 62001249) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (Grant No. KYCX200718).
Abstract: Orbital angular momentum (OAM) modes are mutually orthogonal and have been applied to underwater wireless optical communication (UWOC) systems to increase channel capacity. In this work, we propose a diffractive deep neural network (DDNN) based OAM mode recognition scheme, where the DDNN is trained to capture the features of the intensity distribution of OAM modes and output the corresponding azimuthal and radial indices. The results show that the proposed scheme recognizes the azimuthal and radial indices of OAM modes accurately and quickly. In addition, the proposed scheme can resist weak oceanic turbulence (OT) and exhibits excellent ability to recognize OAM modes in a strong OT environment. The DDNN-based OAM mode recognition scheme has potential applications in UWOC systems.
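The intensity patterns the DDNN learns from are the doughnut-shaped rings of Laguerre–Gaussian beams, whose dark central core grows with the azimuthal index. A simplified radial profile for the radial index p = 0 case (normalization constants dropped, beam waist w assumed; this is a textbook form, not the paper's beam model):

```python
import math

def lg_intensity(l, x, y, w=1.0):
    # simplified Laguerre-Gaussian intensity, radial index p = 0:
    # I proportional to (r^2/w^2)^|l| * exp(-2 r^2/w^2); l = 0 is a plain Gaussian
    r2 = (x * x + y * y) / (w * w)
    return (r2 ** abs(l)) * math.exp(-2.0 * r2)

# an l != 0 mode is dark on axis and peaks on a ring around it
center = lg_intensity(2, 0.0, 0.0)
ring = max(lg_intensity(2, x / 10.0, 0.0) for x in range(1, 50))
```

The ring radius and the number of azimuthal phase windings are what distinguish modes, which is why the intensity distribution alone carries enough information for classification.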
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61762005, 61231015, 61671335, 61702472, 61701194, 61761044, and 61471271; the National High Technology Research and Development Program of China (863 Program) under Grant No. 2015AA016306; the Hubei Province Technological Innovation Major Project under Grant No. 2016AAA015; the Science Project of the Education Department of Jiangxi Province under No. GJJ150585; and the Opening Project of the Collaborative Innovation Center for Economics Crime Investigation and Prevention Technology, Jiangxi Province, under Grant No. JXJZXTCX-025.
Abstract: Non-blind audio bandwidth extension is a standard technique within contemporary audio codecs to efficiently code audio signals at low bitrates. In most existing methods, the high-frequency signal is generated by duplicating the corresponding low frequencies together with some high-frequency parameters. However, the perceptual quality of coding degrades significantly if the correlation between high and low frequencies becomes weak. In this paper, we quantitatively analyze this correlation by computing mutual information values. The analysis shows that the correlation also exists in the low-frequency signal of context-dependent frames, beyond the current frame alone. To improve the perceptual quality of coding, we propose a novel method of coarse high-frequency spectrum generation that improves on the conventional replication method. In the proposed method, the coarse high-frequency spectra are generated by a nonlinear mapping model using a deep recurrent neural network. Experiments confirm that the proposed method outperforms the reference methods.
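Mutual information, used above to quantify the high/low-frequency correlation, can be estimated with a plug-in estimator over the empirical joint distribution. A small sketch for discrete (e.g. quantized spectral) values; this is the generic estimator, not necessarily the paper's exact procedure:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # plug-in MI estimate in nats from empirical joint and marginal frequencies:
    # sum over (x, y) of p(x, y) * log(p(x, y) / (p(x) p(y)))
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

Identical sequences give MI equal to the entropy of one of them (maximal correlation), while independent sequences give zero, which is the scale on which the paper's cross-frame correlation claim rests.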
Funding: Supported by the National Natural Science Foundation of China (62231020, 62101401) and the Youth Innovation Team of Shaanxi Universities.
Abstract: The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities. In addition, the high mobility of vehicles makes the caching environment more complicated. Therefore, performing efficient cooperative caching becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows the distributed cache nodes to cooperate. We then devise a communication link and content caching model to facilitate timely content delivery. Aiming at minimizing transmission delay and cache cost, an optimization problem is formulated. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives observations and a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach can enhance the hit ratio while reducing transmission delay and cache cost.
Funding: Supported by the National Natural Science Foundation of China under Grants No. 61534002, No. 61761136015, and No. 61701095.
Abstract: Because of computational complexity, the deep neural network (DNN) in embedded devices is usually trained on high-performance computers or graphics processing units (GPUs), and only the inference phase is implemented in embedded devices. Data processed by embedded devices, such as smartphones and wearables, are usually personalized, so a DNN model trained on public datasets may have poor accuracy when inferring on personalized data. As a result, retraining the DNN with personalized data collected locally in embedded devices is necessary. Nevertheless, retraining needs labeled datasets, while the data collected locally are unlabeled; how to retrain the DNN with unlabeled data is thus a problem to be solved. This paper proves the necessity of retraining a DNN model with personalized data collected in embedded devices after it is trained with public datasets. It also proposes a label generation method by which a fake label is generated for each unlabeled training case according to users' feedback, so that retraining can be performed with unlabeled data collected in embedded devices. The experimental results show that our fake label generation method has both good training effects and wide applicability. Advanced neural networks can be trained with unlabeled data from embedded devices, and the individualized accuracy of the DNN model can be gradually improved with personal use.
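The fake-label idea can be sketched as: trust the model's top prediction when the user's feedback confirms it, and otherwise fall back to an alternative class. A toy version (the paper's actual rule for turning users' feedback into labels may differ):

```python
def fake_label(probs, feedback_ok):
    # rank classes by predicted probability; accept the top class on positive
    # feedback, otherwise use the runner-up as the fake label for retraining
    ranked = sorted(range(len(probs)), key=lambda i: -probs[i])
    return ranked[0] if feedback_ok else ranked[1]

# e.g. the model predicts class 1; negative feedback demotes it to class 2
accepted = fake_label([0.1, 0.7, 0.2], True)
corrected = fake_label([0.1, 0.7, 0.2], False)
```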
Abstract: A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with an application to message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smartphone applications. Experimental results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance at minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62001023, 61922013) and the Beijing Natural Science Foundation (No. 4232013).
Abstract: To obtain excellent regression results under the condition of small-sample hyperspectral data, a deep neural network with simulated annealing (SA-DNN) is proposed. According to the characteristics of the data, an attention mechanism was applied to make the network pay more attention to effective features, thereby improving operating efficiency. By introducing an improved activation function, data correlation was reduced while the operation rate increased, and the problem of over-fitting was alleviated. By introducing simulated annealing, the network chooses the optimal learning rate by itself, which avoids falling into a local optimum to the greatest extent. To evaluate the performance of the SA-DNN, the coefficient of determination (R²), root mean square error (RMSE), and other metrics were used. The results show that the performance of the SA-DNN is significantly better than that of other traditional methods.
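Simulated annealing's role here, letting the network pick its own learning rate, can be illustrated with a tiny annealing loop over candidate rates (the loss function, cooling schedule, and candidate set below are placeholders, not the SA-DNN's actual configuration):

```python
import math, random

def anneal_learning_rate(loss_fn, candidates, steps=200, t0=1.0, seed=0):
    # simulated annealing over a discrete set of candidate learning rates:
    # always accept improvements; accept worse moves with probability exp(-delta/T),
    # which shrinks as the temperature T cools, avoiding premature local optima
    rng = random.Random(seed)
    current = rng.choice(candidates)
    best = current
    for step in range(steps):
        temp = t0 * (0.95 ** step)  # geometric cooling schedule
        proposal = rng.choice(candidates)
        delta = loss_fn(proposal) - loss_fn(current)
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            current = proposal
        if loss_fn(current) < loss_fn(best):
            best = current
    return best

# toy example: validation loss is minimized at a learning rate of 0.01
chosen = anneal_learning_rate(lambda lr: (lr - 0.01) ** 2, [0.1, 0.01, 0.001])
```

Early on, the high temperature lets the search jump between rates freely; as it cools, the search settles on the rate with the lowest loss.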