Automatic modulation classification (AMC) aims at identifying the modulation of received signals, which is a significant approach to identifying targets in military and civil applications. In this paper, a novel data-driven framework named convolutional and transformer-based deep neural network (CTDNN) is proposed to improve classification performance. CTDNN can be divided into four modules, i.e., the convolutional neural network (CNN) backbone, the transition module, the transformer module, and the final classifier. In the CNN backbone, a wide and deep convolution structure is designed, which consists of 1×15 convolution kernels and intensive cross-layer connections instead of traditional 1×3 kernels and sequential connections. In the transition module, a 1×1 convolution layer is utilized to compress the channels of the preceding multi-scale CNN features. In the transformer module, three self-attention layers are designed for extracting global features and generating the classification vector. In the classifier, the final decision is made based on the maximum a posteriori probability. Extensive simulations are conducted, and the results show that the proposed CTDNN achieves classification performance superior to that of traditional deep models.
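The final decision step described above, choosing the class with the maximum a posteriori probability, can be sketched in plain Python; the modulation class names and logit values below are hypothetical illustrations, not taken from the paper:

```python
import math

def softmax(logits):
    """Convert raw classifier scores into a posterior-like distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def map_decision(logits, classes):
    """Pick the class with the maximum a posteriori probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return classes[best], probs[best]

# Hypothetical logits over four modulation classes.
label, p = map_decision([2.0, 0.5, -1.0, 0.1],
                        ["BPSK", "QPSK", "8PSK", "16QAM"])
```

In a real CTDNN the logits would be the output of the transformer module's classification vector; here they are hard-coded for illustration.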
The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of fifth-generation (5G) mobile networks. The cloud radio access network (CRAN) is a prominent framework in the 5G mobile network that meets this surging demand by deploying low-cost, intelligent, multiple distributed antennas known as remote radio heads (RRHs). However, achieving optimal resource allocation (RA) in CRAN using traditional approaches remains challenging due to the complex structure. In this paper, we introduce a convolutional neural network-based deep Q-network (CNN-DQN) to balance energy consumption and guarantee the user quality of service (QoS) demand in the downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build a 3-layer CNN to capture environment features as the input state space. We then use the DQN to turn the RRHs on and off dynamically based on user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on user constraints and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the Nature DQN and the traditional approach.
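The DQN's control of RRH on/off states rests on an epsilon-greedy choice over Q-values. A minimal sketch, assuming each action index stands for one RRH on/off configuration (the Q-values below are hypothetical):

```python
import random

def select_action(q_values, epsilon, rng=None):
    """Epsilon-greedy policy: each action indexes an RRH on/off pattern."""
    rng = rng or random.Random(0)  # fixed seed keeps the sketch reproducible
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # explore a random configuration
    # Exploit: pick the configuration with the highest estimated Q-value.
    return max(range(len(q_values)), key=q_values.__getitem__)

# Hypothetical Q-values for four on/off configurations of two RRHs.
q = [1.2, 0.4, 2.7, -0.3]
greedy = select_action(q, epsilon=0.0)    # pure exploitation
explored = select_action(q, epsilon=1.0)  # pure exploration
```

In the paper's setting, the Q-values would come from the 3-layer CNN fed with the environment state; here they are fixed numbers for illustration.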
Numerical Weather Prediction (NWP) is a necessary input for short-term wind power forecasting. Existing NWP models are all based on purely physical models, which require mainframe computers to perform large-scale numerical calculations; the technical threshold of the assimilation process is also high. There is a need to further improve the timeliness and accuracy of the assimilation process. To solve these problems, an NWP method based on artificial intelligence is proposed in this paper. It uses a convolutional neural network algorithm and a downscaling model to obtain predictions at a given wind turbine hub-height position from the global background field. The actual data of a wind farm in north China are considered as a calculation example. The results show that the prediction accuracy of the proposed method is equivalent to that of the traditional purely physical model, the prediction accuracy in some months is better than that of the purely physical model, and the calculation efficiency is considerably improved. The validity and advantages of the proposed method are verified by these results, and it can replace the traditional NWP method to a certain extent.
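The core of downscaling a coarse background field to a specific hub location is spatial interpolation between grid points. A minimal sketch using bilinear interpolation as a stand-in for the paper's learned downscaling model (the field values are hypothetical):

```python
def bilinear(grid, x, y):
    """Interpolate a coarse field at a fractional grid position (x, y),
    e.g. a wind turbine hub located between model grid points.
    (x, y) must lie strictly inside one grid cell."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * grid[y0][x0]
            + dx * (1 - dy) * grid[y0][x0 + 1]
            + (1 - dx) * dy * grid[y0 + 1][x0]
            + dx * dy * grid[y0 + 1][x0 + 1])

# Hypothetical 2x2 coarse wind-speed field (m/s); the cell centre
# interpolates to the mean of the four corners.
field = [[4.0, 8.0], [8.0, 12.0]]
v = bilinear(field, 0.5, 0.5)
```

The paper's CNN-based downscaling would replace this fixed-weight scheme with learned, terrain-aware weights; bilinear interpolation is only the simplest baseline.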
A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with an application to message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smartphone applications. Experimental results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications and applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.
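The six-step strategy is a pipeline in which each stage consumes the previous stage's output. A toy sketch of that control flow, with trivial stand-ins (whitespace tokenization instead of Chinese word segmentation, a keyword rule instead of the neural theme classifier; all names hypothetical):

```python
def normalize(text):
    """Step 1 stand-in: text normalization (trim and lower-case)."""
    return text.strip().lower()

def segment(text):
    """Stand-in for Chinese word segmentation: whitespace tokenization."""
    return text.split()

def classify_theme(tokens):
    """Toy keyword rule standing in for the neural theme classifier."""
    return "weather" if "rain" in tokens else "other"

def run_pipeline(text):
    """Feed each step's output to the next, as in the six-step strategy."""
    return classify_theme(segment(normalize(text)))

theme = run_pipeline("  Rain expected tomorrow ")
```

In the actual solution every stage is a separate optimized neural model (NER, POS tagging, SRL, and slot filling would follow in the chain); this sketch only shows the pipelined structure.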
In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10^5 parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10^6 training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. In addition, the existing fitting, step-type, and peak erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity error distribution and the smallest mean activity error of 1.5% among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, which is considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling in the proposed method establish a novel approach for accurate and convenient continuum estimation in a low-resolution gamma-ray spectrum. Thus, the proposed method is promising for accurate quantitative radioactivity analysis in practical applications.
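One of the comparison baselines, peak erosion, can be sketched in a few lines: repeatedly clip each channel to the mean of its neighbours so that peaks are shaved off and the smooth continuum remains. This is a minimal illustration, not the exact algorithm evaluated in the paper:

```python
def erode_continuum(spectrum, iterations=20):
    """Iterative peak erosion: clip each channel to the average of its
    two neighbours, leaving the smooth underlying continuum."""
    b = list(spectrum)
    for _ in range(iterations):
        nxt = b[:]
        for i in range(1, len(b) - 1):
            nxt[i] = min(b[i], 0.5 * (b[i - 1] + b[i + 1]))
        b = nxt
    return b

# A flat continuum of 1.0 counts with a single photopeak of 5.0 on top.
spec = [1.0] * 9
spec[4] = 5.0
cont = erode_continuum(spec)
```

Real implementations widen the clipping window as iterations proceed (as in SNIP-style algorithms); the proposed CNN instead learns the continuum shape directly from the simulated spectra.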
A lightweight multi-layer residual temporal convolutional network (RTCN) model is proposed to target the highly complex kinematics and temporal correlation of human motion. RTCN uses 1-D convolution to efficiently obtain the spatial structure information of human motion and extract the correlation in the time series of human motion. The residual structure is applied to the proposed network model to alleviate the problem of vanishing gradients in the deep network. Experiments on the Human3.6M dataset demonstrate that the proposed method effectively reduces the errors of motion prediction compared with previous methods, especially for long-term prediction.
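The two ingredients named above, 1-D convolution and a residual (skip) connection, can be sketched in plain Python; kernel and input values below are illustrative only:

```python
def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def residual_block(x, kernel):
    """y = x + conv(x): the skip connection lets gradients bypass the
    convolution, alleviating vanishing gradients in deep stacks."""
    pad = len(kernel) // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad  # 'same' padding
    conv = conv1d(padded, kernel)
    return [a + b for a, b in zip(x, conv)]

x = [1.0, 2.0, 3.0, 4.0]
y = residual_block(x, [0.0, 1.0, 0.0])  # identity kernel, so y == 2*x
```

An RTCN stacks many such residual 1-D convolution layers with learned kernels over the joint-coordinate time series; this sketch shows only one block with a fixed kernel.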
The open-circuit fault is one of the most common faults of the automatic ramming drive system (ARDS), and it can be categorized into the open-phase faults of the Permanent Magnet Synchronous Motor (PMSM) and the open-circuit faults of the Voltage Source Inverter (VSI). The stator current serves as a common indicator for detecting open-circuit faults. Because the stator current changes identically for open-phase faults in the PMSM and failures of double switches within the same leg of the VSI, this paper utilizes the zero-sequence voltage component as an additional diagnostic criterion to differentiate them. Considering the variable conditions and substantial noise of the ARDS, a novel Multi-resolution Network (MrNet) is proposed, which can extract multi-resolution perceptual information and enhance robustness to noise. Meanwhile, a feature-weighted layer is introduced to allocate higher weights to characteristics situated near the feature frequency. Both simulation and experimental results validate that the proposed fault diagnosis method can diagnose 25 types of open-circuit faults and achieve more than 98.28% diagnostic accuracy. In addition, the experimental results also demonstrate that MrNet can diagnose the fault types accurately under the interference of noise signals (Laplace noise and Gaussian noise).
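The zero-sequence voltage criterion rests on a simple fact: for a balanced three-phase set the mean of the phase voltages vanishes, while a fault that unbalances the phases makes it nonzero. A minimal numeric sketch (the open-phase model here, forcing one phase to zero, is a simplification for illustration):

```python
import math

def zero_sequence(va, vb, vc):
    """Zero-sequence component: the mean of the three phase voltages."""
    return (va + vb + vc) / 3.0

# Balanced three-phase voltages at one time instant: zero-sequence vanishes.
t = 0.3
va = math.sin(2 * math.pi * t)
vb = math.sin(2 * math.pi * t - 2 * math.pi / 3)
vc = math.sin(2 * math.pi * t + 2 * math.pi / 3)
v0_healthy = zero_sequence(va, vb, vc)

# Crude open-phase-A model (va forced to 0): a nonzero zero-sequence appears.
v0_fault = zero_sequence(0.0, vb, vc)
```

The paper uses this component only to separate fault classes whose stator-current signatures coincide; the 25-class diagnosis itself is done by MrNet.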
The High Altitude Detection of Astronomical Radiation (HADAR) experiment, which was constructed in Tibet, China, combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of focused Cherenkov detectors. Its objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves. This study aims to utilize the latest AI technology to enhance the sensitivity of the HADAR experiment. Training datasets and models with distinctive creativity were constructed by incorporating the relevant physical theories for various applications. After careful design, these models can determine the type, energy, and direction of the incident particles. We obtained a background identification accuracy of 98.6%, a relative energy reconstruction error of 10.0%, and an angular resolution of 0.22° on a test dataset at 10 TeV. These findings demonstrate the significant potential for enhancing the precision and dependability of detector data analysis in astrophysical research. By using deep learning techniques, the HADAR experiment's observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S. at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies. In addition, our experiment offers a new approach for dealing with strongly connected, scattered data.
In traditional well log depth matching tasks, manual adjustments are required, which is significantly labor-intensive for multiple wells and leads to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. This method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences on well logs, and it establishes an interaction mechanism between the agents and the environment to control the depth matching process. Specifically, an agent selects an action to translate or scale a feature sequence based on the double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for well logs in multiple wells and reduce manual intervention. In an oil field application, a comparative analysis of dynamic time warping (DTW), the deep Q-learning network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well log feature sequences, thus achieving higher depth matching accuracy.
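The "dual-network evaluation mechanism" of DDQN is a one-line change to the Q-learning target: the online network chooses the next action, but the target network scores it, which reduces the overestimation bias of plain DQN. A minimal sketch with hypothetical Q-values:

```python
def ddqn_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN target: the online network picks the best next action,
    the target network supplies its value estimate."""
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[a_star]

# Hypothetical Q-values for three actions on a feature sequence
# (translate up, translate down, scale).
y = ddqn_target(reward=1.0, gamma=0.9,
                q_online_next=[0.2, 1.5, 0.7],
                q_target_next=[0.1, 1.0, 2.0])
```

Note how the online network selects action 1 even though the target network rates action 2 higher; plain DQN would have used the target network's own maximum (2.0) and produced a larger, more optimistic target.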
In recent years, deep learning has gradually been used in communication physical-layer receivers and has achieved excellent performance. In this paper, we employ deep learning to establish covert communication systems, enabling the transmission of signals through high-power signals present in the prevailing environment while maintaining covertness, and propose a convolutional neural network (CNN)-based model for covert communication receivers, namely DeepCCR. This model leverages a CNN to execute the signal separation and recovery tasks commonly performed by traditional receivers, enabling the direct recovery of covert information from the received signal. The simulation results show that the proposed DeepCCR exhibits significant advantages in bit error rate (BER) over traditional receivers in the face of noise and multipath fading. We verify the covert performance of the method proposed in this paper using the maximum-minimum eigenvalue ratio-based method and the frequency-domain entropy-based method; the results indicate that this method has excellent covert performance. We also evaluate the mutual influence between covert signals and opportunity signals, finding that using opportunity signals as cover can cause certain performance losses to the covert signals. When the interference-to-signal power ratio (ISR) is large, the impact of covert signals on opportunity signals is minimal.
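One of the two covertness checks mentioned, frequency-domain entropy detection, measures how evenly the received signal's power is spread across DFT bins: a structured transmission concentrates power in few bins (low entropy), while noise spreads it (high entropy). A minimal stdlib sketch of the detector's statistic, with an illustrative pure tone as input:

```python
import cmath
import math

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized DFT power spectrum.
    Naive O(n^2) DFT; fine for short illustrative signals."""
    n = len(signal)
    power = []
    for k in range(n):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        power.append(abs(s) ** 2)
    total = sum(power)
    probs = [p / total for p in power if p > 1e-12]  # drop numerical dust
    return -sum(p * math.log2(p) for p in probs)

# A pure tone concentrates power in two conjugate bins: entropy = 1 bit.
n = 32
tone = [math.cos(2 * math.pi * 4 * t / n) for t in range(n)]
h_tone = spectral_entropy(tone)
```

A covert scheme passes this test when the composite received signal's entropy stays close to that of the cover-plus-noise baseline; the threshold choice is detector-specific and not shown here.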
Conventional machine learning (CML) methods have been successfully applied for gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to directly extract features from sensitive multicomponent seismic attributes, considerably simplifying the feature optimization. The CNN was used for feature optimization to highlight sensitive gas reservoir information, and APSO-LSSVM was used to fully learn the relationships among the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of DL and CML methods. The prediction results obtained are better than those of a single CNN model or APSO-LSSVM model. In the feature optimization process of multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction capabilities than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model can learn the gas reservoir characteristics better than the LSSVM model and has a higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This method proves the effectiveness of DL technology for the feature extraction of gas reservoirs and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
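The APSO part tunes the LSSVM hyperparameters by iterating a swarm of candidate solutions. A minimal sketch of one standard PSO update step (the adaptive weight scheduling of APSO is omitted; all positions and bests below are hypothetical 1-D hyperparameter values):

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=None):
    """One particle swarm update: inertia plus pulls toward each particle's
    personal best and the swarm's global best."""
    rng = rng or random.Random(42)  # fixed seed for a reproducible sketch
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel

# Hypothetical 1-D search for an LSSVM regularization parameter:
# the swarm drifts toward the global best at 2.0.
pos, vel = pso_step(positions=[0.0, 4.0, 1.0],
                    velocities=[0.0, 0.0, 0.0],
                    pbest=[0.5, 3.0, 1.5], gbest=2.0)
```

In APSO the inertia weight `w` and acceleration coefficients are adapted over iterations based on swarm state; this fixed-coefficient step shows only the core update.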
[Objective] Urban floods are occurring more frequently because of global climate change and urbanization. Accordingly, urban rainstorm and flood forecasting has become a priority in urban hydrology research. However, two-dimensional hydrodynamic models execute calculations slowly, hindering the rapid simulation and forecasting of urban floods. To overcome this limitation and to accelerate and improve the accuracy of urban flood simulation and forecasting, numerical simulation and deep learning were combined to develop a more effective urban flood forecasting method. [Methods] Specifically, a cellular automata model was used to simulate the urban flood process and address the need for a large number of datasets in the deep learning process. Meanwhile, to shorten the time required for urban flood forecasting, a convolutional neural network model was used to establish the mapping relationship between rainfall and inundation depth. [Results] The results show that the relative error of forecasting the maximum inundation depth at flood-prone locations is less than 10%, and the Nash efficiency coefficient of forecasting the inundation depth series at flood-prone locations is greater than 0.75. [Conclusion] The results demonstrate that the proposed method executes highly accurate simulations and quickly produces forecasts, illustrating its superiority as an urban flood forecasting technique.
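A cellular automata flood model updates each cell from local rules: water flows to neighbours with a lower water surface, at a rate limited by the head difference and the available depth. A heavily simplified 1-D sketch (real models are 2-D with Manning-type flow rules; terrain and rate here are hypothetical):

```python
def ca_flood_step(depth, elevation, rate=0.25):
    """One cellular-automaton step on a 1-D terrain: move water toward
    any neighbour with a lower water surface (elevation + depth).
    Flows are computed from the pre-step state, so mass is conserved."""
    n = len(depth)
    new = list(depth)
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                head = (elevation[i] + depth[i]) - (elevation[j] + depth[j])
                if head > 0:
                    flow = min(depth[i], rate * head)
                    new[i] -= flow
                    new[j] += flow
    return new

# Water ponded on a high cell drains toward its two lower neighbours.
elev = [0.0, 1.0, 0.0]
depth = [0.0, 1.0, 0.0]
after = ca_flood_step(depth, elev)
```

Stepping such a model over a rainfall series generates the many inundation-depth samples the CNN is then trained on, which is exactly the role the cellular automata model plays in the method above.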
Funding: supported in part by the National Natural Science Foundation of China under Grants 62171045 and 62201090, and in part by the National Key Research and Development Program of China under Grants 2020YFB1807602 and 2019YFB1804404.
Funding: supported by the Universiti Tunku Abdul Rahman (UTAR), Malaysia, under UTARRF (IPSR/RMC/UTARRF/2021-C1/T05).
Funding: supported by the Science and Technology Project of State Grid Corporation of China: Key technology for high-resolution and centralized wind power forecasting for deep-offshore wind power bases (No. SGSXDK00YJJS2000879).
Funding: supported by the National Natural Science Foundation of China (No. 12005198).
Funding: supported by the Natural Science Foundation of Jiangsu Province (Grant No. BK20210347).
Funding: supported by the China National Petroleum Corporation Limited-China University of Petroleum (Beijing) Strategic Cooperation Science and Technology Project (ZLZX2020-03).
Funding: supported in part by the National Natural Science Foundation of China under Grants U19B2016, 62271447, and 61871348.
Funding: Funded by the Natural Science Foundation of Shandong Province (ZR2021MD061, ZR2023QD025), the China Postdoctoral Science Foundation (2022M721972), the National Natural Science Foundation of China (41174098), the Young Talents Foundation of Inner Mongolia University (10000-23112101/055), and the Qingdao Postdoctoral Science Foundation (QDBSH20230102094).
Abstract: Conventional machine learning (CML) methods have been successfully applied to gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of the sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) with an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). The model adopts an end-to-end structure that extracts features directly from sensitive multicomponent seismic attributes, considerably simplifying feature optimization. The CNN is used for feature optimization to highlight sensitive gas reservoir information, and the APSO-LSSVM fully learns the relationships among the CNN-extracted features to produce the prediction. The hybrid fusion model improves gas-bearing prediction accuracy through the two stages of feature optimization and intelligent prediction, exploiting the complementary advantages of DL and CML methods, and its predictions are better than those of a single CNN model or a single APSO-LSSVM model. In the feature optimization of multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction than commonly used attribute optimization methods. In the prediction stage, the APSO-LSSVM model learned the gas reservoir characteristics better than the plain LSSVM model and achieved higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This method demonstrates the effectiveness of DL for gas reservoir feature extraction and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
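The LSSVM predictor that follows the CNN feature extractor can be sketched in a few lines: LSSVM replaces the SVM's inequality constraints with equality constraints, so training reduces to one linear system. The kernel width `sigma` and regularization `gamma` below are illustrative fixed values; in the paper they are tuned by adaptive particle swarm optimization (APSO), which is omitted here.

```python
import numpy as np

# Minimal LSSVM regression with an RBF kernel. Training solves the dual
# KKT system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].

def rbf(A, B, sigma=1.0):
    """RBF (Gaussian) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]           # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha + b

# Toy regression target standing in for CNN-extracted seismic features.
X = np.linspace(-2, 2, 40)[:, None]
y = np.sin(2 * X[:, 0])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
print(np.abs(pred - y).max())        # small training residual
```

APSO's role in the full method is to search over `(gamma, sigma)` so that this system generalizes well, rather than using the fixed values shown here.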
Abstract: [Objective] Urban floods are occurring more frequently because of global climate change and urbanization. Accordingly, urban rainstorm and flood forecasting has become a priority in urban hydrology research. However, two-dimensional hydrodynamic models execute calculations slowly, hindering the rapid simulation and forecasting of urban floods. To overcome this limitation and to accelerate and improve the accuracy of urban flood simulation and forecasting, numerical simulation and deep learning were combined to develop a more effective urban flood forecasting method. [Methods] Specifically, a cellular automata model was used to simulate the urban flood process and to generate the large datasets required for deep learning. Meanwhile, to shorten the time required for urban flood forecasting, a convolutional neural network model was used to establish the mapping relationship between rainfall and inundation depth. [Results] The results show that the relative error of forecasting the maximum inundation depth at flood-prone locations is less than 10%, and the Nash efficiency coefficient of the forecast inundation depth series at flood-prone locations is greater than 0.75. [Conclusion] The results demonstrate that the proposed method can execute highly accurate simulations and quickly produce forecasts, illustrating its superiority as an urban flood forecasting technique.
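The two accuracy criteria quoted in the results above (relative error of the maximum inundation depth below 10%, Nash efficiency above 0.75) can be computed as follows; the depth series here are synthetic values for illustration only.

```python
import numpy as np

# Relative error of the peak depth and the Nash-Sutcliffe efficiency (NSE)
# of a forecast inundation-depth series against observations.

def max_depth_relative_error(observed, simulated):
    """Relative error of the maximum inundation depth."""
    return abs(simulated.max() - observed.max()) / observed.max()

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - SSE / variance of observations; 1.0 is a perfect forecast."""
    sse = ((observed - simulated) ** 2).sum()
    var = ((observed - observed.mean()) ** 2).sum()
    return 1.0 - sse / var

obs = np.array([0.0, 0.2, 0.5, 0.9, 1.2, 1.0, 0.6, 0.3])    # depth series (m)
sim = np.array([0.0, 0.25, 0.45, 0.95, 1.15, 1.05, 0.55, 0.35])
print(max_depth_relative_error(obs, sim))  # < 0.10
print(nash_sutcliffe(obs, sim))            # > 0.75
```

A CNN surrogate that meets both thresholds at flood-prone locations can replace the slow two-dimensional hydrodynamic run at forecast time, which is the speedup the abstract claims.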