Abstract: With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are used in two branches. The former extracts the spectral features of the nodes well, and the latter effectively suppresses graph noise. The network realizes information interaction between the two branches and thus exploits the strengths of the different graph filters. In addition, to address oversmoothing, a dense network is proposed in which local graph features are preserved. The dense structure accommodates classification targets that present different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets demonstrate that the DHMG dramatically outperforms state-of-the-art models.
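To make the two-branch idea concrete, the following is a minimal sketch, not the authors' DHMG code: a block that combines a first-order spectral (GCN-style) filter with a one-stack ARMA filter, assuming a dense normalized adjacency `adj` (N x N) and node features `x` (N x F); all module names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class SpectralBranch(nn.Module):
    """First-order spectral filter: H = sigma(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return torch.relu(adj @ self.lin(x))

class ARMABranch(nn.Module):
    """One ARMA_1 stack: X_{t+1} = sigma(M @ X_t @ W + X_0 @ V), run for T iterations."""
    def __init__(self, in_dim, out_dim, T=2):
        super().__init__()
        self.T = T
        self.w = nn.Linear(out_dim, out_dim, bias=False)
        self.v = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        h = torch.relu(self.v(x))               # initial state from the skip term
        for _ in range(self.T):
            h = torch.relu(adj @ self.w(h) + self.v(x))
        return h

class HybridBlock(nn.Module):
    """Two filters in parallel; fusing their outputs lets the branches interact."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.spec = SpectralBranch(in_dim, out_dim)
        self.arma = ARMABranch(in_dim, out_dim)

    def forward(self, x, adj):
        return 0.5 * (self.spec(x, adj) + self.arma(x, adj))

# Toy usage: 6 nodes, 4 features, identity adjacency as a stand-in for A_hat.
x = torch.randn(6, 4)
adj = torch.eye(6)
print(HybridBlock(4, 8)(x, adj).shape)  # torch.Size([6, 8])
```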
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) (11772093) and the ARC (FT140101152).
Abstract: Background: Coronary artery calcification is a well-known marker of atherosclerotic plaque burden. High-resolution intravascular optical coherence tomography (OCT) imaging has shown the potential to characterize the details of coronary calcification in vivo. In routine clinical practice, reviewing the more than 250 images in a single pullback is a time-consuming and laborious task for clinicians. In addition, the imbalanced label distribution within entire pullbacks is another problem, which can cause a classifier model to fail. Given the success of deep learning methods with other imaging modalities, a thorough understanding of calcified plaque detection within pullbacks using convolutional neural networks (CNNs) was required for future clinical decision-making. Methods: All 33 IVOCT clinical pullbacks of 33 patients were acquired at the Affiliated Drum Tower Hospital, Nanjing University between December 2017 and December 2018. For ground-truth annotation, three trained experts determined the type of plaque present in each B-scan, assigning the label 'no calcified plaque' or 'calcified plaque' to each OCT image. All experts were provided with all the images for labeling. The final label was determined by consensus among the experts; differing opinions on the plaque type were resolved by asking the experts to repeat their evaluation. Before the algorithm was implemented, all OCT images were resized to a resolution of 300×300, which matched the range used with standard architectures in the natural image domain. In this study, we randomly selected 26 pullbacks for training; the remaining data were used for testing. However, the imbalanced label distribution within entire pullbacks was a great challenge for the various CNN architectures. To resolve this problem, we designed the following experiment. First, we fine-tuned twenty different CNN architectures, including customized CNN architectures and pretrained CNN architectures. Considering the nature of OCT images, the customized CNN architectures were designed with fewer than 25 layers. Then, three architectures with good performance were selected and further fine-tuned to train three different models. The CNNs differed mainly in model architecture, such as depth-based residual networks and width-based inception networks. Finally, the three CNN models were combined by majority voting, with the predicted label taken from the most votes. The area under the receiver operating characteristic curve (ROC AUC) was used as the evaluation metric under the imbalanced label distribution. Results: The imbalanced label distribution within pullbacks affected both convergence during the training phase and generalization of a CNN model. Different labels of OCT images could be classified with excellent performance by fine-tuning the parameters of the CNN architectures. Overall, our final result performed best, with an accuracy of 90% for the 'calcified plaque' class, whose samples were fewer than those of the 'no calcified plaque' class in one pullback. Conclusions: The obtained results showed that the method is fast and effective in classifying calcified plaques under an imbalanced label distribution in each pullback. The results suggest that the proposed method could facilitate our understanding of coronary artery calcification in the process of atherosclerosis and help guide complex interventional strategies in coronary arteries with superficial calcification.
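A minimal sketch of the two evaluation ingredients named above, not the study's code: hard majority voting over three per-model predictions, and ROC AUC as the imbalance-aware metric. The model names and the toy labels (0 = 'no calcified plaque', 1 = 'calcified plaque') are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def majority_vote(pred_a, pred_b, pred_c):
    """Per-frame hard vote over three binary label arrays."""
    votes = np.stack([pred_a, pred_b, pred_c])   # shape (3, n_frames)
    return (votes.sum(axis=0) >= 2).astype(int)  # the label with >= 2 of 3 votes wins

# Toy pullback of 8 frames: ground truth and three fine-tuned models' predictions.
y_true = np.array([0, 0, 0, 0, 0, 1, 1, 0])
resnet = np.array([0, 0, 1, 0, 0, 1, 1, 0])
incept = np.array([0, 0, 0, 0, 0, 1, 0, 0])
custom = np.array([0, 1, 0, 0, 0, 1, 1, 0])

print("voted labels:", majority_vote(resnet, incept, custom))
# ROC AUC needs scores; with hard votes, the vote fraction serves as a score.
score = np.stack([resnet, incept, custom]).mean(axis=0)
print("ROC AUC:", roc_auc_score(y_true, score))
```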
Funding: Supported by the National Natural Science Foundation of China (61471021).
Abstract: Non-orthogonal multiple access (NOMA), featuring high spectrum efficiency, massive connectivity, and low latency, holds immense potential as a novel multiple-access technique in fifth-generation (5G) communication. Successive interference cancellation (SIC) has proven to be an effective method for detecting the NOMA signal by ordering the power of received signals and then decoding them. However, the error accumulation effect, referred to as error propagation, is an inevitable problem. In this paper, we propose a convolutional neural network (CNN) approach to restore the desired signal impaired by the multiple-input multiple-output (MIMO) channel. Especially in the uplink NOMA scenario, the proposed method can decode multiple users' information in a cluster instantaneously, without any traditional communication signal processing steps. Simulation experiments are conducted in the Rayleigh channel, and the results demonstrate that the error performance of the proposed learning system outperforms that of classic SIC detection. Consequently, deep learning has disruptive potential to replace conventional signal detection methods.
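As an illustrative sketch of the idea (assumptions throughout, not the paper's network): a small CNN that maps a received uplink NOMA block, with real and imaginary parts stacked as two channels, directly to the bits of two superimposed users, skipping explicit SIC ordering and per-user decoding. The block length, layer sizes, and class `NomaCnnDetector` are hypothetical.

```python
import torch
import torch.nn as nn

class NomaCnnDetector(nn.Module):
    def __init__(self, block_len=64, bits_per_user=64, n_users=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2),  # 2 input channels: I and Q
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # One sigmoid output per bit, for all users jointly.
        self.head = nn.Linear(64 * block_len, n_users * bits_per_user)

    def forward(self, y):
        h = self.features(y).flatten(1)
        return torch.sigmoid(self.head(h))  # per-bit probabilities

# Toy batch: 8 received blocks of 64 complex samples -> a (8, 2, 64) real tensor.
y = torch.randn(8, 2, 64)
bits = NomaCnnDetector()(y)
print(bits.shape)  # torch.Size([8, 128]): 64 bits for each of 2 users
```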
Funding: Supported by the Natural Science Foundation of Jiangsu Province (Grant No. BK20210347).
Abstract: The open-circuit fault is one of the most common faults of the automatic ramming drive system (ARDS), and it can be categorized into open-phase faults of the Permanent Magnet Synchronous Motor (PMSM) and open-circuit faults of the Voltage Source Inverter (VSI). The stator current serves as a common indicator for detecting open-circuit faults. Because the stator current changes identically for open-phase faults in the PMSM and for failures of both switches within the same leg of the VSI, this paper uses the zero-sequence voltage component as an additional diagnostic criterion to differentiate them. Considering the variable conditions and substantial noise of the ARDS, a novel Multi-resolution Network (MrNet) is proposed, which can extract multi-resolution perceptual information and enhance robustness to noise. Meanwhile, a feature-weighted layer is introduced to allocate higher weights to features situated near the feature frequency. Both simulation and experimental results validate that the proposed fault diagnosis method can diagnose 25 types of open-circuit faults and achieve more than 98.28% diagnostic accuracy. In addition, the experimental results also demonstrate that MrNet can diagnose the fault types accurately under the interference of noise signals (Laplace noise and Gaussian noise).
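A hedged sketch of my reading of these two ingredients, not the authors' MrNet: parallel 1-D convolutions with different kernel sizes extract multi-resolution features from measured signals, and a learnable per-channel weight vector stands in for the feature-weighted layer. The input layout (three phase currents plus the zero-sequence voltage), kernel sizes, and class names are assumptions.

```python
import torch
import torch.nn as nn

class MultiResolutionBlock(nn.Module):
    def __init__(self, in_ch=4, out_ch=16, kernel_sizes=(3, 9, 27)):
        super().__init__()
        # One branch per resolution; same-length outputs via symmetric padding.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        total = out_ch * len(kernel_sizes)
        # Feature-weighted layer: one learnable weight per concatenated channel.
        self.feature_weights = nn.Parameter(torch.ones(total))
        self.classifier = nn.Linear(total, 25)  # 25 open-circuit fault types

    def forward(self, x):
        h = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        h = h * self.feature_weights[None, :, None]  # re-weight feature channels
        return self.classifier(h.mean(dim=-1))       # global average pool -> logits

# Toy input: batch of 8 records, 4 channels (3 stator currents + zero-seq voltage).
x = torch.randn(8, 4, 256)
print(MultiResolutionBlock()(x).shape)  # torch.Size([8, 25])
```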
Funding: Supported by the National Natural Science Foundation of China (61877067), the Foundation of Science and Technology on Near-Surface Detection Laboratory (TCGZ2019A002, TCGZ2021C003, 6142414200511), and the Natural Science Basic Research Program of Shaanxi (2021JZ-19).
Abstract: Acoustic source localization (ASL) and sound event detection (SED) are two widely pursued independent research fields. In recent years, to achieve a more complete spatial and temporal representation of the sound field, sound event localization and detection (SELD) has become a very active research topic. This paper presents a deep learning-based algorithm for localizing and detecting multiple overlapping sound events in three-dimensional space. The log-Mel spectrum and the generalized cross-correlation spectrum are joined along the channel dimension as input features. A trained neural network classifies and regresses these features in parallel to obtain sound recognition and localization results, respectively. A channel attention mechanism is also introduced into the network to selectively enhance the features containing essential information and suppress useless features. Finally, a thorough comparison confirms the efficiency and effectiveness of the proposed SELD algorithm. Field experiments show that the proposed algorithm is robust to reverberation and the environment and achieves higher recognition and localization accuracy than the baseline method.
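A minimal sketch (assumed squeeze-and-excitation-style gating, not the paper's exact layer): channel attention over stacked log-Mel and GCC feature maps, followed by parallel heads for event classification (SED) and direction regression (localization). Channel counts, event count, and class names are illustrative.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weight feature channels based on their global averages."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, C, time, freq)
        w = self.gate(x.mean(dim=(2, 3)))  # squeeze to (batch, C), then gate
        return x * w[:, :, None, None]     # excite: scale each channel

class SeldHeads(nn.Module):
    """Parallel classification (events) and regression (3-D direction) heads."""
    def __init__(self, channels=10, n_events=11):
        super().__init__()
        self.att = ChannelAttention(channels)
        self.sed = nn.Linear(channels, n_events)      # event activity
        self.doa = nn.Linear(channels, n_events * 3)  # (x, y, z) per event

    def forward(self, x):
        h = self.att(x).mean(dim=(2, 3))  # pooled descriptor after attention
        return torch.sigmoid(self.sed(h)), torch.tanh(self.doa(h))

# Toy input: 4 log-Mel + 6 GCC channels stacked, 100 frames x 64 bins.
x = torch.randn(2, 10, 100, 64)
sed, doa = SeldHeads()(x)
print(sed.shape, doa.shape)  # torch.Size([2, 11]) torch.Size([2, 33])
```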
Abstract: [Objective] Urban floods are occurring more frequently because of global climate change and urbanization. Accordingly, urban rainstorm and flood forecasting has become a priority in urban hydrology research. However, two-dimensional hydrodynamic models execute calculations slowly, hindering the rapid simulation and forecasting of urban floods. To overcome this limitation and improve both the speed and the accuracy of urban flood simulation and forecasting, numerical simulation and deep learning were combined to develop a more effective urban flood forecasting method. [Methods] Specifically, a cellular automata model was used to simulate the urban flood process and to supply the large datasets needed for deep learning. Meanwhile, to shorten the time required for urban flood forecasting, a convolutional neural network model was used to establish the mapping relationship between rainfall and inundation depth. [Results] The results show that the relative error in forecasting the maximum inundation depth at flood-prone locations is less than 10%, and the Nash efficiency coefficient of the forecast inundation depth series at flood-prone locations is greater than 0.75. [Conclusion] The results demonstrate that the proposed method executes highly accurate simulations and quickly produces forecasts, illustrating its superiority as an urban flood forecasting technique.
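A hedged sketch of the two quoted evaluation metrics: the relative error of the maximum inundation depth and the Nash-Sutcliffe efficiency (NSE) of the depth series. The toy arrays are illustrative; the paper's own series are not reproduced here.

```python
import numpy as np

def max_depth_relative_error(simulated, forecast):
    """Relative error of the peak inundation depth: |dh_max| / h_max."""
    return abs(forecast.max() - simulated.max()) / simulated.max()

def nash_sutcliffe(simulated, forecast):
    """NSE = 1 - sum((sim - fc)^2) / sum((sim - mean(sim))^2); 1.0 is a perfect fit."""
    return 1.0 - np.sum((simulated - forecast) ** 2) / np.sum(
        (simulated - simulated.mean()) ** 2
    )

# Toy depth series (m) at one flood-prone location: hydrodynamic sim vs CNN forecast.
sim = np.array([0.00, 0.12, 0.35, 0.61, 0.48, 0.30, 0.15])
fc  = np.array([0.00, 0.10, 0.33, 0.58, 0.50, 0.28, 0.14])
print(f"relative error of max depth: {max_depth_relative_error(sim, fc):.2%}")
print(f"NSE: {nash_sutcliffe(sim, fc):.3f}")
```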
Funding: Supported by the National Natural Science Foundation of China (61571453, 61806218).
Abstract: Deep learning has achieved excellent results in various computer vision tasks, especially in fine-grained visual categorization, which aims to distinguish the subordinate categories of label-level categories. Because of high intra-class variance and high inter-class similarity, fine-grained visual categorization is extremely challenging. This paper first briefly introduces and analyzes the related public datasets. After that, some of the latest methods are reviewed. Based on the feature types, the feature processing methods, and the overall structure used in the models, we divide them into three types: methods based on general convolutional neural networks (CNNs) and strong part-level supervision, methods based on single feature processing, and methods based on multiple feature processing. Most methods of the first type have a relatively simple structure, reflecting the initial stage of the research. Methods of the other two types include models with special structures and training processes that help obtain discriminative features. We conduct a specific analysis of several methods that achieve high accuracy on public datasets. In addition, we argue that future research should focus on reducing existing methods' demand for large amounts of data and computing power. In terms of technology, extracting subtle feature information with the burgeoning vision transformer (ViT) network is also an important research direction.