Journal Articles
1,524 articles found
1. A Convolutional and Transformer Based Deep Neural Network for Automatic Modulation Classification (Cited by: 2)
Authors: Shanchuan Ying, Sai Huang, Shuo Chang, Zheng Yang, Zhiyong Feng, Ningyan Guo. China Communications (SCIE, CSCD), 2023, No. 5, pp. 135-147.
Automatic modulation classification (AMC) aims at identifying the modulation of received signals, which is a significant approach to identifying targets in military and civil applications. In this paper, a novel data-driven framework named the convolutional and transformer-based deep neural network (CTDNN) is proposed to improve classification performance. CTDNN can be divided into four modules: a convolutional neural network (CNN) backbone, a transition module, a transformer module, and a final classifier. In the CNN backbone, a wide and deep convolution structure is designed, which consists of 1×15 convolution kernels and intensive cross-layer connections instead of traditional 1×3 kernels and sequential connections. In the transition module, a 1×1 convolution layer is utilized to compress the channels of the previous multi-scale CNN features. In the transformer module, three self-attention layers are designed for extracting global features and generating the classification vector. In the classifier, the final decision is made based on the maximum a posteriori probability. Extensive simulations are conducted, and the results show that the proposed CTDNN achieves superior classification performance compared with traditional deep models.
Keywords: automatic modulation classification; deep neural network; convolutional neural network; transformer
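For illustration, a minimal PyTorch sketch of the four-module structure this abstract describes (1×15 convolutions with cross-layer connections, a 1×1 transition convolution, three self-attention layers, and an argmax classifier). The layer widths, block count, and the 2×128 I/Q input shape are assumptions for the sketch, not values taken from the paper.

```python
import torch
import torch.nn as nn

class CTDNN(nn.Module):
    def __init__(self, num_classes=11, channels=64):
        super().__init__()
        # CNN backbone: wide 1x15 kernels; dense cross-layer connections are
        # approximated here by concatenating each block's output (DenseNet style).
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(2 + i * channels, channels, kernel_size=(1, 15), padding=(0, 7)),
                nn.BatchNorm2d(channels),
                nn.ReLU(),
            )
            for i in range(3)
        ])
        # Transition module: 1x1 convolution compresses the concatenated channels.
        self.transition = nn.Conv2d(2 + 3 * channels, channels, kernel_size=1)
        # Transformer module: three self-attention layers over the time axis.
        encoder_layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=3)
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, x):                         # x: (batch, 2, 1, 128) I/Q signal
        feats = x
        for block in self.blocks:
            feats = torch.cat([feats, block(feats)], dim=1)
        feats = self.transition(feats)            # (batch, C, 1, 128)
        feats = feats.squeeze(2).transpose(1, 2)  # (batch, 128, C) tokens
        feats = self.transformer(feats).mean(dim=1)
        return self.classifier(feats)             # decision = argmax of posteriors

logits = CTDNN()(torch.randn(4, 2, 1, 128))
print(logits.shape)  # torch.Size([4, 11])
```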
2. Convolutional Neural Network-Based Deep Q-Network (CNN-DQN) Resource Management in Cloud Radio Access Network (Cited by: 2)
Authors: Amjad Iqbal, Mau-Luen Tham, Yoong Choon Chang. China Communications (SCIE, CSCD), 2022, No. 10, pp. 129-142.
The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of the fifth-generation (5G) mobile networks. Cloud radio access network (CRAN) is a prominent framework in the 5G mobile network that meets these demands by deploying low-cost and intelligent multiple distributed antennas known as remote radio heads (RRHs). However, achieving the optimal resource allocation (RA) in CRAN using the traditional approach is still challenging due to the complex structure. In this paper, we introduce the convolutional neural network-based deep Q-network (CNN-DQN) to balance the energy consumption and guarantee the user quality of service (QoS) demand in downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build up a 3-layer CNN to capture the environment feature as an input state space. We then use DQN to turn the RRHs on/off dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraint and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the nature DQN and the traditional approach.
Keywords: energy efficiency (EE); Markov decision process (MDP); convolutional neural network (CNN); cloud RAN; deep Q-network (DQN)
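As a hedged illustration of the scheme sketched in this abstract, the snippet below encodes a CRAN state with a 3-layer CNN and lets a DQN head score on/off actions for the RRHs. The grid-shaped state, the number of RRHs, and the epsilon-greedy selection are assumptions, not the paper's exact formulation.

```python
import random
import torch
import torch.nn as nn

NUM_RRH = 8  # assumed number of remote radio heads

class CNNDQN(nn.Module):
    def __init__(self, num_actions=2 * NUM_RRH):
        super().__init__()
        self.encoder = nn.Sequential(              # 3-layer CNN state encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.q_head = nn.Linear(32, num_actions)   # Q(s, a): toggle each RRH on/off

    def forward(self, state):
        return self.q_head(self.encoder(state))

def select_action(net, state, epsilon=0.1):
    """Epsilon-greedy action selection over the RRH on/off actions."""
    if random.random() < epsilon:
        return random.randrange(2 * NUM_RRH)
    with torch.no_grad():
        return int(net(state).argmax(dim=1))

net = CNNDQN()
state = torch.randn(1, 1, NUM_RRH, 10)   # e.g. per-RRH measurements over 10 slots
print(select_action(net, state))
```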
3. Bearing Fault Diagnosis Based on VMD-1DCNN-GRU
Authors: 宋金波, 刘锦玲, 闫荣喜, 王鹏, 路敬祎. 《吉林大学学报(信息科学版)》, 2025, No. 1, pp. 34-42.
To address the difficulty of training diagnostic models on noisy rolling-bearing signals, a bearing fault diagnosis model combining variational mode decomposition (VMD) and deep learning is proposed. First, the method decomposes the bearing signal into modes with VMD and denoises it using the Hausdorff distance (HD), preserving the characteristics of the original signal as far as possible. Second, the selected effective signals are fed into a network that combines a one-dimensional convolutional neural network (1DCNN) with a gated recurrent unit (GRU), i.e. 1DCNN-GRU, to classify the data and diagnose bearing faults. Compared with common bearing fault diagnosis methods, the proposed VMD-1DCNN-GRU model achieves the highest accuracy. The experimental results verify the feasibility of the model for effective classification of bearing faults.
Keywords: fault diagnosis; deep learning; variational mode decomposition; one-dimensional convolutional neural network; gated recurrent unit
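A minimal PyTorch sketch of the 1DCNN-GRU classifier this abstract describes, assuming the inputs are VMD-denoised vibration segments; the channel sizes, kernel sizes, input length, and number of fault classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNN1DGRU(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(                 # local feature extraction
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):                 # x: (batch, 1, signal_length)
        feats = self.cnn(x)               # (batch, 32, T)
        feats = feats.transpose(1, 2)     # (batch, T, 32) for the GRU
        _, h_n = self.gru(feats)          # h_n: (1, batch, 64)
        return self.fc(h_n[-1])           # class scores per fault type

print(CNN1DGRU()(torch.randn(4, 1, 2048)).shape)  # torch.Size([4, 10])
```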
4. Statistical downscaling of numerical weather prediction based on convolutional neural networks (Cited by: 1)
Authors: Hongwei Yang, Jie Yan, Yongqian Liu, Zongpeng Song. Global Energy Interconnection (EI, CAS, CSCD), 2022, No. 2, pp. 217-225.
Numerical weather prediction (NWP) is a necessary input for short-term wind power forecasting. Existing NWP models are all based on purely physical models, which require mainframe computers to perform large-scale numerical calculations, and the technical threshold of the assimilation process is high. There is a need to further improve the timeliness and accuracy of the assimilation process. To solve these problems, an NWP method based on artificial intelligence is proposed in this paper. It uses a convolutional neural network algorithm to establish a downscaling model from the global background field to a given wind-turbine hub-height position. Actual data from a wind farm in northern China are used as a case study. The results show that the prediction accuracy of the proposed method is equivalent to that of the traditional purely physical model, the prediction accuracy in some months is better than that of the purely physical model, and the calculation efficiency is considerably improved. The validity and advantages of the proposed method are verified by the results, and it can replace the traditional NWP method to a certain extent.
Keywords: convolutional neural network; deep learning; numerical weather prediction
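For illustration only, a small CNN that maps a coarse background-field patch to a hub-height wind speed, in the spirit of the downscaling model described above; the input variables, patch size, and network depth are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DownscalingCNN(nn.Module):
    def __init__(self, in_vars=4, patch=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_vars, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),                      # hub-height wind speed (m/s)
        )

    def forward(self, background_patch):           # (batch, in_vars, patch, patch)
        return self.net(background_patch).squeeze(1)

# Example: u/v wind, temperature, pressure on an assumed 9x9 coarse grid around the site.
speed = DownscalingCNN()(torch.randn(16, 4, 9, 9))
print(speed.shape)                                 # torch.Size([16])
```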
5. Deep Neural Network-Based Chinese Semantic Role Labeling
Authors: ZHENG Xiaoqing, CHEN Jun, SHANG Guoqiang. ZTE Communications, 2017, No. B12, pp. 58-64.
A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with its application to message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smartphone applications. Experimental results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.
Keywords: deep learning; sequence labeling; natural language understanding; convolutional neural network; recurrent neural network
6. Continuum estimation in low-resolution gamma-ray spectra based on deep learning
Authors: Ri Zhao, Li-Ye Liu, Xin Liu, Zhao-Xing Liu, Run-Cheng Liang, Ren-Jing Ling-Hu, Jing Zhang, Fa-Guo Chen. Nuclear Science and Techniques, 2025, No. 2, pp. 5-17.
In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10^5 parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10^6 training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. In addition, the existing fitting, step-type, and peak erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity error distribution and the smallest mean activity error of 1.5% among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, which is considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling in the proposed method establish a novel approach for accurate and convenient continuum estimation in low-resolution gamma-ray spectra. Thus, the proposed method is promising for accurate quantitative radioactivity analysis in practical applications.
Keywords: gamma-ray spectrum; continuum estimation; deep learning; convolutional neural network; end-to-end prediction
7. A Lightweight Temporal Convolutional Network for Human Motion Prediction (Cited by: 1)
Authors: WANG You, QIAO Bing. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2022, No. S01, pp. 150-157.
A lightweight multi-layer residual temporal convolutional network model (RTCN) is proposed to target the highly complex kinematics and temporal correlation of human motion. RTCN uses 1-D convolution to efficiently obtain the spatial structure information of human motion and extract the correlation in the time series of human motion. A residual structure is applied to the proposed network model to alleviate the vanishing-gradient problem in the deep network. Experiments on the Human 3.6M dataset demonstrate that the proposed method effectively reduces the errors of motion prediction compared with previous methods, especially for long-term prediction.
Keywords: human motion prediction; temporal convolutional network; short-term prediction; long-term prediction; deep neural network
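A sketch of one residual 1-D temporal convolution block in the spirit of the RTCN described above; the pose dimension (e.g. 66 = 22 joints × 3 coordinates), channel width, and dilation schedule are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResidualTCNBlock(nn.Module):
    def __init__(self, channels=66, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad, dilation=dilation),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad, dilation=dilation),
        )

    def forward(self, x):                        # x: (batch, pose_dim, time)
        return torch.relu(x + self.body(x))      # residual connection eases gradient flow

# Stack a few blocks over an observed motion history (sketch only).
model = nn.Sequential(ResidualTCNBlock(), ResidualTCNBlock(dilation=2), ResidualTCNBlock(dilation=4))
history = torch.randn(8, 66, 50)                 # 50 observed frames of a 66-D pose vector
print(model(history).shape)                      # torch.Size([8, 66, 50])
```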
8. Leaf Recognition Based on GAN-DCNN (Cited by: 1)
Authors: 徐竞怡, 张志, 闫飞, 张雯悦. 《林业科学》 (EI, CAS, CSCD, PKU Core), 2024, No. 4, pp. 40-51.
[Objective] Leaf recognition with deep learning requires a large number of training samples; when samples are scarce and the image style is uniform, recognition accuracy becomes unstable. Using a small number of samples to augment leaf images and transform their style can greatly reduce the burden of data collection and provides an effective technical means and theoretical support for making forestry surveys more informatized and intelligent. [Methods] Leaf images of six tree species were collected to build a dataset, and a light-weight GAN was introduced to augment the images and transform their style, expanding the manually photographed leaf dataset. Four deep convolutional neural networks (AlexNet, GoogLeNet, ResNet34, and ShuffleNetV2) were trained on the expanded dataset and on the original dataset to analyze the role of GAN-based image augmentation in leaf recognition. The best model was selected by combining accuracy, training time, and other performance indicators, and its learning rate was tuned. The parameter-optimized model was then validated on test samples to analyze the feasibility and practical significance of the method. [Results] The samples generated by the GAN are sharp and faithful, effectively supporting the training of the neural network models while enriching the sample categories with leaf images covering more seasons, shapes, and health conditions. Compared with the original dataset, AlexNet, GoogLeNet, ResNet34, and ShuffleNetV2 all achieved smaller training errors and higher validation accuracy on the new dataset; the ShuffleNetV2 model with a learning rate of 0.01 trained best, with a maximum validation accuracy of 99.7%. When validated on test samples not used in training, the model recognized each leaf type well, with an overall recognition accuracy of 99.8%. Compared with an ordinary deep convolutional neural network without GAN augmentation, the proposed model clearly improves leaf recognition accuracy. [Conclusion] Generative adversarial networks can effectively expand the number of images and transform image styles; combined with deep convolutional neural networks, they significantly improve leaf recognition accuracy and are well suited to leaf recognition in forestry.
Keywords: leaf recognition; generative adversarial network; deep convolutional neural network
9. A novel multi-resolution network for the open-circuit faults diagnosis of automatic ramming drive system (Cited by: 1)
Authors: Liuxuan Wei, Linfang Qian, Manyi Wang, Minghao Tong, Yilin Jiang, Ming Li. Defence Technology (防务技术) (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 225-237.
The open-circuit fault is one of the most common faults of the automatic ramming drive system (ARDS), and it can be categorized into the open-phase faults of the permanent magnet synchronous motor (PMSM) and the open-circuit faults of the voltage source inverter (VSI). The stator current serves as a common indicator for detecting open-circuit faults. Because the stator current changes identically between open-phase faults in the PMSM and failures of both switches within the same leg of the VSI, this paper utilizes the zero-sequence voltage component as an additional diagnostic criterion to differentiate them. Considering the variable conditions and substantial noise of the ARDS, a novel multi-resolution network (MrNet) is proposed, which can extract multi-resolution perceptual information and enhance robustness to noise. Meanwhile, a feature-weighted layer is introduced to allocate higher weights to characteristics situated near the feature frequency. Both simulation and experimental results validate that the proposed fault diagnosis method can diagnose 25 types of open-circuit faults and achieve more than 98.28% diagnostic accuracy. In addition, the experimental results also demonstrate that MrNet is capable of diagnosing the fault types accurately under the interference of noise signals (Laplace noise and Gaussian noise).
Keywords: fault diagnosis; deep learning; multi-scale convolution; open circuit; convolutional neural network
10. Fault Diagnosis of Hydropower Units Based on SSA-VMD-WDCNN
Authors: 欧阳慧泉, 杨峰, 单定军, 肖龙, 周迪, 李超顺. 《水电能源科学》 (PKU Core), 2024, No. 12, pp. 147-151.
To improve the accuracy and speed of fault diagnosis for hydropower units, a diagnosis method is proposed that combines adaptive variational mode decomposition with a deep convolutional neural network whose first layer uses wide convolution kernels (WDCNN). First, the sparrow search algorithm is used to optimize the VMD decomposition parameters, and the vibration signals of the hydropower unit are decomposed with the optimal parameters, achieving an optimal adaptive decomposition. The decomposed IMF components are then normalized, and finally the processed components are fed into the WDCNN model for training and testing to obtain the diagnosis results. Comparative experiments on measured hydropower-unit vibration signals show that the proposed method achieves the best diagnostic accuracy together with good training speed and noise suppression, providing a useful reference for practical fault diagnosis of hydropower units.
Keywords: hydropower unit; fault diagnosis; sparrow search algorithm; adaptive variational mode decomposition; deep convolutional neural network
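A minimal sketch of a WDCNN-style classifier whose first layer uses a wide 1-D kernel, as the abstract describes; the kernel sizes, channel counts, and number of fault classes are assumptions, and the inputs are taken to be the normalized IMF components from the SSA-optimized VMD.

```python
import torch
import torch.nn as nn

class WDCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            # Wide first kernel acts like a learned band-limiting filter on the raw signal.
            nn.Conv1d(in_channels, 16, kernel_size=64, stride=16, padding=24),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                          # x: (batch, 1, signal_length)
        return self.classifier(self.features(x))

print(WDCNN()(torch.randn(4, 1, 4096)).shape)      # torch.Size([4, 6])
```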
11. A DCNN-Based Denoising Method for LFM Signals under Impulsive Noise
Authors: 卢景琳, 郭勇, 杨立东. 《现代雷达》 (CSCD, PKU Core), 2024, No. 10, pp. 104-114.
Because impulsive noise exhibits pronounced spike characteristics, traditional denoising methods based on the Gaussian assumption cannot filter it out effectively. To address this problem, this paper proposes a deep convolutional neural network (DCNN) based denoising method for linear frequency modulation (LFM) signals under impulsive noise. First, LFM signals and random impulsive noise are generated to construct datasets at different generalized signal-to-noise ratios, which are fed into the DCNN for training and testing. The denoising ability of the model is then verified in terms of the time-domain waveform, the fractional spectrum, and the time-frequency distribution. Finally, the denoised LFM signal is transformed with the fractional Fourier transform, and the LFM parameters are estimated by searching for the peak in the fractional spectrum. Simulation results show that the proposed method not only removes random impulsive noise from the noisy signal effectively, but also keeps the time-domain, fractional-spectrum, and time-frequency characteristics of the LFM signal essentially unchanged, thereby improving the noise robustness of parameter estimation. Compared with traditional methods based on nonlinear transforms, the proposed method still preserves the fractional-spectrum and time-frequency characteristics of the signal at low signal-to-noise ratios, and shows better denoising performance and generalization ability.
Keywords: impulsive noise; deep convolutional neural network; linear frequency modulation signal; fractional Fourier transform
12. Application of deep learning methods combined with physical background in wide field of view imaging atmospheric Cherenkov telescopes
Authors: Ao-Yan Cheng, Hao Cai, Shi Chen, Tian-Lu Chen, Xiang Dong, You-Liang Feng, Qi Gao, Quan-Bu Gou, Yi-Qing Guo, Hong-Bo Hu, Ming-Ming Kang, Hai-Jin Li, Chen Liu, Mao-Yuan Liu, Wei Liu, Fang-Sheng Min, Chu-Cheng Pan, Bing-Qiang Qiao, Xiang-Li Qian, Hui-Ying Sun, Yu-Chang Sun, Ao-Bo Wang, Xu Wang, Zhen Wang, Guang-Guang Xin, Yu-Hua Yao, Qiang Yuan, Yi Zhang. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 208-220.
The High Altitude Detection of Astronomical Radiation (HADAR) experiment, which was constructed in Tibet, China, combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of focused Cherenkov detectors. Its objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves. This study aims to utilize the latest AI technology to enhance the sensitivity of the HADAR experiment. Training datasets and models with distinctive creativity were constructed by incorporating the relevant physical theories for various applications. These models can determine the type, energy, and direction of the incident particles after careful design. We obtained a background identification accuracy of 98.6%, a relative energy reconstruction error of 10.0%, and an angular resolution of 0.22° on a test dataset at 10 TeV. These findings demonstrate the significant potential for enhancing the precision and dependability of detector data analysis in astrophysical research. By using deep learning techniques, the HADAR experiment's observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S. at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies. In addition, our experiment offers a new approach for dealing with strongly connected, scattered data.
Keywords: VHE gamma-ray astronomy; HADAR; deep learning; convolutional neural networks
13. Automatic depth matching method of well log based on deep reinforcement learning
Authors: XIONG Wenjun, XIAO Lizhi, YUAN Jiangru, YUE Wenzheng. Petroleum Exploration and Development (SCIE), 2024, No. 3, pp. 634-646.
In traditional well log depth matching tasks, manual adjustments are required, which is significantly labor-intensive for multiple wells and leads to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. The method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences on well logs, and it establishes an interaction mechanism between agents and the environment to control the depth matching process. Specifically, the agent selects an action to translate or scale the feature sequence based on the double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for well logs in multiple wells and reduce manual intervention. In the field application, a comparative analysis of the dynamic time warping (DTW), deep Q-learning network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well log feature sequences, thus achieving higher depth matching accuracy.
Keywords: artificial intelligence; machine learning; depth matching; well log; multi-agent deep reinforcement learning; convolutional neural network; double deep Q-network
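A hedged sketch of the double-DQN update underlying the translate/scale action selection described above: the online network picks the best next action and the target network evaluates it, which reduces Q-value overestimation. The CNN encoder, the five-action set, and all sizes are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

NUM_ACTIONS = 5  # e.g. shift up, shift down, stretch, compress, stop

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # CNN over the two log windows
            nn.Conv1d(2, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, NUM_ACTIONS)

    def forward(self, x):                          # x: (batch, 2, window_length)
        return self.head(self.encoder(x))

def ddqn_target(reward, next_state, online, target, gamma=0.99):
    """DDQN target: argmax from the online net, value from the target net."""
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, best_action).squeeze(1)
    return reward + gamma * next_q

online, target_net = QNet(), QNet()
batch = torch.randn(8, 2, 256)                     # reference + candidate log windows
y = ddqn_target(torch.zeros(8), batch, online, target_net)
print(y.shape)                                     # torch.Size([8])
```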
14. Deep Learning for Covert Communication
Authors: Shen Weiguo, Chen Jiepeng, Zheng Shilian, Zhang Luxin, Pei Zhangbin, Lu Weidang, Yang Xiaoniu. China Communications (SCIE, CSCD), 2024, No. 9, pp. 40-59.
In recent years, deep learning has gradually been used in communication physical-layer receivers and has achieved excellent performance. In this paper, we employ deep learning to establish covert communication systems, enabling the transmission of signals through high-power signals present in the prevailing environment while maintaining covertness, and propose a convolutional neural network (CNN) based model for covert communication receivers, namely Deep CCR. This model leverages a CNN to execute the signal separation and recovery tasks commonly performed by traditional receivers, enabling the direct recovery of covert information from the received signal. The simulation results show that the proposed Deep CCR exhibits significant advantages in bit error rate (BER) compared with traditional receivers in the face of noise and multipath fading. We verify the covert performance of the proposed method using the maximum-minimum eigenvalue ratio-based method and the frequency-domain entropy-based method; the results indicate that the method has excellent covert performance. We also evaluate the mutual influence between covert signals and opportunity signals, indicating that using opportunity signals as cover can cause certain performance losses to covert signals. When the interference-to-signal power ratio (ISR) is large, the impact of covert signals on opportunity signals is minimal.
Keywords: convolutional neural network; covert communication; deep learning
15. Deep learning CNN-APSO-LSSVM hybrid fusion model for feature optimization and gas-bearing prediction
Authors: Jiu-Qiang Yang, Nian-Tian Lin, Kai Zhang, Yan Cui, Chao Fu, Dong Zhang. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 2329-2344.
Conventional machine learning (CML) methods have been successfully applied for gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to directly extract features from sensitive multicomponent seismic attributes, considerably simplifying the feature optimization. A CNN was used for feature optimization to highlight sensitive gas reservoir information. APSO-LSSVM was used to fully learn the relationship between the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of DL and CML methods. The prediction results obtained are better than those of a single CNN model or APSO-LSSVM model. In the feature optimization process of multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction capabilities than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model can learn the gas reservoir characteristics better than the LSSVM model and has a higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This method proves the effectiveness of DL technology for the feature extraction of gas reservoirs and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
Keywords: multicomponent seismic data; deep learning; adaptive particle swarm optimization; convolutional neural network; least squares support vector machine; feature optimization; gas-bearing distribution prediction
16. Rapid urban flood forecasting based on cellular automata and deep learning
Authors: BAI Bing, DONG Fei, LI Chuanqi, WANG Wei. 《水利水电技术(中英文)》 (PKU Core), 2024, No. 12, pp. 17-28.
[Objective] Urban floods are occurring more frequently because of global climate change and urbanization. Accordingly, urban rainstorm and flood forecasting has become a priority in urban hydrology research. However, two-dimensional hydrodynamic models execute calculations slowly, hindering the rapid simulation and forecasting of urban floods. To overcome this limitation and accelerate the speed and improve the accuracy of urban flood simulations and forecasting, numerical simulations and deep learning were combined to develop a more effective urban flood forecasting method. [Methods] Specifically, a cellular automata model was used to simulate the urban flood process and address the need for a large number of datasets in the deep learning process. Meanwhile, to shorten the time required for urban flood forecasting, a convolutional neural network model was used to establish the mapping relationship between rainfall and inundation depth. [Results] The results show that the relative error of forecasting the maximum inundation depth at flood-prone locations is less than 10%, and the Nash efficiency coefficient of forecasting the inundation-depth series at flood-prone locations is greater than 0.75. [Conclusion] The results demonstrate that the proposed method can execute highly accurate simulations and quickly produce forecasts, illustrating its superiority as an urban flood forecasting technique.
Keywords: urban flooding; flood-prone location; cellular automata; deep learning; convolutional neural network; rapid forecasting
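As an illustration of the rainfall-to-inundation mapping described above, the sketch below uses a small CNN decoder as a surrogate for the 2-D hydrodynamic (cellular-automata) simulation, which would only be run offline to generate training pairs. Treating rainfall as a fixed-length hyetograph and the 64×64 output grid are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class RainfallToDepthCNN(nn.Module):
    def __init__(self, rain_steps=24, grid=64):
        super().__init__()
        self.grid = grid
        # Encode the rainfall hyetograph into a coarse spatial feature map.
        self.fc = nn.Linear(rain_steps, 32 * (grid // 4) * (grid // 4))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.ReLU(),   # inundation depth is non-negative
        )

    def forward(self, rain):                   # rain: (batch, rain_steps)
        x = self.fc(rain).view(-1, 32, self.grid // 4, self.grid // 4)
        return self.decoder(x)                 # (batch, 1, grid, grid) depth map

# Training pairs (rainfall series, simulated depth map) would come from the CA model.
depth = RainfallToDepthCNN()(torch.randn(2, 24))
print(depth.shape)                             # torch.Size([2, 1, 64, 64])
```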
17. Research on an Improved Image Segmentation Method Based on the DeepLabV3+ Model
Authors: 李武攀, 梁玉琦. 《现代信息科技》, 2024, No. 19, pp. 39-43.
In recent years computer vision has developed rapidly, and image segmentation plays an important role in it, with wide applications in urban modernization, intelligent driving, and geographic surveying. However, most segmentation methods focus only on a simple fusion of deep and shallow features along the depth of the network, while ignoring the lateral long-range relationships within features at the same level. To address this problem, based on the DeepLabV3+ framework, a Swin-Transformer block is added and its self-attention mechanism is used for feature extraction, improving both the global and the detail quality of the segmentation. In addition, the upsampling method in DeepLabV3+ is improved by replacing simple bilinear interpolation with the CARAFE upsampling module. Experiments show that, compared with the baseline model, the improved model raises MIoU by 2% and ACC by 1%.
Keywords: image segmentation; deep learning; self-attention mechanism; upsampling method; convolutional neural network
18. Authenticity Detection of Egg White Powder by Near-Infrared Spectroscopy Based on an Improved One-Dimensional Convolutional Neural Network
Authors: 祝志慧, 李沃霖, 韩雨彤, 金永涛, 叶文杰, 王巧华, 马美湖. 《食品科学》 (PKU Core), 2025, No. 6, pp. 245-253.
Near-infrared spectroscopy is introduced to build an improved one-dimensional convolutional neural network (1D-CNN) model for authenticity detection of egg white powder. The model is based on a 1D-CNN and requires no preprocessing of the spectral data; an efficient channel attention module and a one-dimensional global average pooling layer are added to the network to strengthen its ability to extract spectral features and to reduce noise interference. The results show that the improved EG-1D-CNN model can distinguish genuine from adulterated egg white powder, with a detection rate of 97.80% for adulterated samples and an overall accuracy (AAR) of 98.93%. The lowest detectable level (LLRC) reaches 1%, 5%, 0.1%, 1%, and 5% for the five single adulterants (starch, soy protein isolate, melamine, urea, and glycine) and 0.1%-1% for multiple adulterants, and the average detection time (AATS) is 0.0044 s. Compared with the traditional 1D-CNN structure and other improved algorithms, the improved EG-1D-CNN model achieves higher accuracy and faster detection of egg white powder authenticity, occupies less storage space, and is better suited for deployment on embedded devices. This study provides a theoretical basis for the future development of portable near-infrared spectrometers for egg powder quality detection.
Keywords: egg white powder; near-infrared spectroscopy; authenticity detection; one-dimensional convolutional neural network; deep learning
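A minimal sketch of the kind of improvement the abstract describes: a 1-D CNN over raw NIR spectra with an efficient channel attention (ECA) block and 1-D global average pooling before the classifier. The spectrum length, channel counts, ECA kernel size, and the binary genuine/adulterated output are assumptions for illustration, not the paper's exact EG-1D-CNN configuration.

```python
import torch
import torch.nn as nn

class ECA1d(nn.Module):
    """Efficient channel attention: per-channel weights from a small 1-D conv."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                      # x: (batch, channels, length)
        w = x.mean(dim=2, keepdim=True)        # (batch, channels, 1) global pooling
        w = self.conv(w.transpose(1, 2))       # convolve across the channel axis
        w = torch.sigmoid(w.transpose(1, 2))   # (batch, channels, 1) attention weights
        return x * w

class EG1DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            ECA1d(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # 1-D global average pooling
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, spectrum):               # spectrum: (batch, 1, wavelengths)
        return self.fc(self.net(spectrum))

print(EG1DCNN()(torch.randn(4, 1, 700)).shape)  # torch.Size([4, 2])
```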
19. Research on a CNN Network Coding Scheme for High-Resolution Image Transmission
Authors: 刘娜, 杨颜博, 张嘉伟, 李宝山, 马建峰. 《西安电子科技大学学报》 (PKU Core), 2025, No. 2, pp. 225-238.
Network coding can effectively improve network throughput. However, traditional network coding has high encoding and decoding complexity and cannot adapt to dynamic factors such as ambient noise, which easily leads to decoding distortion. In recent years researchers have introduced neural networks to optimize the network coding process, but for high-resolution image transmission the existing neural network coding schemes capture high-dimensional spatial information poorly and incur large communication and computation overheads. This paper therefore proposes a joint-source deep learning network coding scheme in which the encoders and decoders at each network node are parameterized by two-dimensional convolutional neural networks (CNNs); the CNNs capture deep spatial structure and reduce the computational complexity of the network nodes. At the source nodes, convolutional layers reduce the dimensionality of the transmitted data to raise the transmission rate; at the intermediate node, the data received from the two sources are compressed by CNN coding into a single channel for transmission; at the sink node, the received data are decoded by a CNN that restores the dimensionality and recovers the original images. Experiments show that, under different channel bandwidth ratios and channel noise levels, the scheme achieves good decoding performance in terms of peak signal-to-noise ratio and structural similarity.
Keywords: network coding; deep learning; convolutional neural network; high-resolution image; image communication
20. Damage Diagnosis of Complex Structures Based on Convolutional Neural Networks and Multi-Label Classification
Authors: 李书进, 杨繁繁, 张远进. 《建筑科学与工程学报》 (PKU Core), 2025, No. 1, pp. 101-111.
To study joint damage identification in complex space frames, two convolutional neural network models, a multi-label single-output model and a multi-label multi-output model, are built by exploiting the advantages of multi-label classification and are used to locate damaged joints and diagnose their damage severity in frame structures. To deal with the large number of damage scenarios and the low identification accuracy encountered when locating damage in complex structures, a multi-label multi-output convolutional neural network model is proposed that partitions the structure into stories (or zones) and completes the damage diagnosis at the same time. Shallow, deep, and deep-residual multi-output convolutional network models suitable for multi-label classification are constructed, and their generalization performance is studied. The results show that the proposed models achieve high diagnostic accuracy and a degree of noise robustness; in particular, the multi-label multi-output model with story (zone) partitioning is more efficient, converging faster and diagnosing more accurately. The multi-label multi-output residual convolutional model can extract enough damage information from the training scenarios to judge the damage level of each joint fairly accurately even for scenarios not seen during training.
Keywords: damage diagnosis; convolutional neural network; multi-label classification; frame structure; deep learning
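An illustrative sketch of a multi-label, multi-output CNN of the type described above: a shared convolutional trunk over the measured structural responses, with one output head per story (zone), each head predicting a damage level for every joint in that zone. The numbers of stories, joints per story, damage levels, sensors, and the sequence length are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class MultiOutputDamageCNN(nn.Module):
    def __init__(self, stories=3, joints_per_story=4, damage_levels=5, sensors=8):
        super().__init__()
        self.trunk = nn.Sequential(                # shared feature extractor
            nn.Conv1d(sensors, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # One head per story; each head scores damage levels for all its joints.
        self.heads = nn.ModuleList(
            [nn.Linear(64, joints_per_story * damage_levels) for _ in range(stories)]
        )
        self.joints_per_story = joints_per_story
        self.damage_levels = damage_levels

    def forward(self, x):                          # x: (batch, sensors, time_steps)
        feats = self.trunk(x)
        # Each output: (batch, joints_per_story, damage_levels) class scores.
        return [h(feats).view(-1, self.joints_per_story, self.damage_levels)
                for h in self.heads]

outputs = MultiOutputDamageCNN()(torch.randn(2, 8, 1024))
print(len(outputs), outputs[0].shape)              # 3 torch.Size([2, 4, 5])
```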