Funding: National Key Research and Development Program of China (Grant No. 2022YFE0102700); National Natural Science Foundation of China (Grant No. 52102420); research project "Safe Da Batt" (03EMF0409A) funded by the German Federal Ministry of Digital and Transport (BMDV); China Postdoctoral Science Foundation (Grant No. 2023T160085); Sichuan Science and Technology Program (Grant No. 2024NSFSC0938).
Abstract: A fast-charging policy is widely employed to alleviate the inconvenience caused by the extended charging time of electric vehicles. However, fast charging exacerbates battery degradation and shortens battery lifespan. In addition, tailored health estimation for fast-charging batteries is still lacking; most existing methods are applicable only at lower charging rates. This paper proposes a novel method for estimating the health of lithium-ion batteries, tailored to multi-stage constant current-constant voltage fast-charging policies. First, short charging segments are extracted by monitoring current switches, and voltage sequences are then derived using interpolation. Next, a graph generation layer transforms each voltage sequence into graph data. A graph convolution network integrated with a long short-term memory network then extracts inter-node message-passing information, capturing the key local and temporal features of the battery degradation process. Finally, the method is validated on aging data from 185 cells under 81 distinct fast-charging policies. A 4-minute charging segment balances high state-of-health estimation accuracy against low data requirements, with a mean absolute error of 0.34% and a root mean square error of 0.66%.
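To make the pipeline concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: a graph-convolution layer over voltage-derived node features, an LSTM over charging cycles, and a regression head for state of health. All names, dimensions, and the pre-normalized adjacency are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (batch, nodes, in_dim); a_hat: (nodes, nodes) normalized adjacency
        return torch.relu(torch.matmul(a_hat, self.linear(x)))

class GCNLSTMSoH(nn.Module):
    """Hypothetical GCN -> LSTM -> linear head for SOH regression."""
    def __init__(self, num_nodes, node_feat, hidden=64):
        super().__init__()
        self.gcn = GCNLayer(node_feat, hidden)
        self.lstm = nn.LSTM(num_nodes * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq, a_hat):
        # seq: (batch, cycles, nodes, node_feat) -- one voltage graph per segment
        b, t, n, f = seq.shape
        g = self.gcn(seq.reshape(b * t, n, f), a_hat).reshape(b, t, -1)
        out, _ = self.lstm(g)           # temporal degradation features
        return self.head(out[:, -1])    # SOH estimate per cell
```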
Abstract: Load forecasting is of great significance to the development of new power systems. With the advancement of smart grids, the integration and distribution of distributed renewable energy sources and power electronics devices have made power load data increasingly complex and volatile, placing higher demands on load prediction and analysis. To improve short-term load prediction accuracy, a CNN-BiLSTM-TPA short-term power prediction model based on an Improved Whale Optimization Algorithm (IWOA) with mixed strategies is proposed. First, the model combines a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory network (BiLSTM) to fully extract the spatio-temporal characteristics of the load data. A Temporal Pattern Attention (TPA) mechanism is then introduced into the CNN-BiLSTM model to automatically assign weights to the BiLSTM's hidden states, allowing the model to differentiate the importance of load sequences at different time intervals. Meanwhile, to address the difficulty of selecting the temporal model's parameters and the whale optimization algorithm's poor global search ability, which makes it prone to local optima, the algorithm is improved (IWOA) with a hybrid strategy combining Tent chaotic mapping and Lévy flight, enabling a better search of the model's parameters. In experiments on real load data from a region of Zhejiang, the proposed method reached a prediction accuracy (R²) of 98.83%. Compared with BP, WOA-CNN-BiLSTM, SSA-CNN-BiLSTM, CNN-BiGRU-Attention, and other baseline models, the proposed model achieved higher prediction accuracy.
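The two IWOA ingredients named above, Tent chaotic mapping and Lévy flight, are standard and easy to sketch. Below is a minimal NumPy illustration under assumed parameter names; the tent coefficient of 2 and the Lévy exponent beta = 1.5 are common defaults, and the full whale-update loop is omitted.

```python
import math
import numpy as np

def tent_init(n_agents, dim, lo, hi, seed=0):
    """Tent-chaotic initialization: spread the population over [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.random(dim)
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        x = np.where(x < 0.5, 2.0 * x, 2.0 * (1.0 - x))  # tent-map iterate
        pop[i] = lo + x * (hi - lo)
    return pop

def levy_step(dim, beta=1.5, rng=None):
    """Levy-flight step via Mantegna's algorithm (occasional long jumps
    that help the search escape local optima)."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```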
Abstract: To address the low sensitivity and low accuracy of existing energy-consumption models under dynamic workload fluctuations, this paper proposes an intelligent server energy consumption model (IECM) for mobile edge computing, based on a convolutional long short-term memory (ConvLSTM) neural network, to predict and optimize server energy consumption. Server runtime parameters are collected, and the entropy-weight method is used to filter and retain those that significantly affect server energy consumption. Based on the selected parameters, a ConvLSTM network is trained as the deep network of the energy model. Compared with existing energy-consumption models, IECM adapts to dynamic changes in server workload across CPU-intensive, I/O-intensive, memory-intensive, and mixed tasks, and achieves better accuracy in energy-consumption prediction.
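The entropy-weight screening step translates directly into code. The NumPy sketch below scores each runtime parameter by its information entropy: columns with more dispersion (lower entropy) receive higher weights and are the ones worth retaining. The min-max normalization and the top-k selection rule are assumptions; the ConvLSTM network itself is omitted.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method over a (samples, parameters) matrix X."""
    span = X.max(axis=0) - X.min(axis=0)
    Xn = (X - X.min(axis=0)) / (span + 1e-12)          # min-max normalize columns
    P = (Xn + 1e-12) / (Xn + 1e-12).sum(axis=0)        # per-column proportions
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy of each parameter
    d = 1.0 - e                                        # degree of diversification
    return d / d.sum()                                 # normalized weights

# Hypothetical usage: keep the k most informative runtime parameters.
# w = entropy_weights(runtime_params)
# keep = np.argsort(w)[-k:]
```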
Abstract: When employing penetration ammunition to strike multi-story buildings, detection methods using acceleration sensors suffer from signal aliasing, while magnetic detection methods are susceptible to interference from ferromagnetic materials, making it difficult to determine the number of penetrated layers accurately. To address this issue, this research proposes a layer-counting method for penetration fuzes that fuses multi-source information using a temporal convolutional network (TCN) and a long short-term memory (LSTM) recurrent network. By leveraging the strengths of these two network structures, the method extracts temporal and high-dimensional features from the multi-source physical field during penetration, establishing a relationship between the multi-source physical field and the distance between the fuze and the target plate. A simulation model is developed to reproduce the overload and magnetic field of a projectile penetrating multiple target plates, capturing the multi-source physical-field signals and their patterns during penetration. The analysis reveals that the proposed multi-source fusion layer-counting method reduces errors by 60% and 50% compared with layer counting from the overload signal alone and from the magnetic anomaly signal alone, respectively. The model's predictive performance is evaluated under various operating conditions, including different ratios of noise added at random sample positions, penetration speeds, and spacings between target plates. The maximum errors in fuze penetration time predicted by the three modes are 0.08 ms, 0.12 ms, and 0.16 ms, respectively, confirming the robustness of the proposed model. Moreover, the predictions fit large interlayer spacings better than small ones, owing to the influence of stress waves.
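As a rough illustration of the fusion network, the PyTorch sketch below stacks causal dilated 1-D convolutions (the TCN part) over the two physical-field channels (overload and magnetic) and feeds the result to an LSTM that regresses the fuze-to-target distance. Channel counts, depths, and kernel sizes are guesses, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """Causal dilated 1-D convolution with a residual connection."""
    def __init__(self, ch, k=3, dilation=1):
        super().__init__()
        self.conv = nn.Conv1d(ch, ch, k, padding=(k - 1) * dilation,
                              dilation=dilation)

    def forward(self, x):                    # x: (batch, ch, time)
        y = self.conv(x)[..., :x.size(-1)]   # trim right padding -> causal
        return torch.relu(y) + x

class TCNLSTMLayerCounter(nn.Module):
    """Hypothetical TCN + LSTM fusion over overload/magnetic signals."""
    def __init__(self, in_ch=2, ch=32, hidden=64, levels=4):
        super().__init__()
        self.inp = nn.Conv1d(in_ch, ch, 1)
        self.tcn = nn.Sequential(*[TCNBlock(ch, dilation=2 ** i)
                                   for i in range(levels)])
        self.lstm = nn.LSTM(ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # fuze-to-target-plate distance

    def forward(self, x):                    # x: (batch, time, in_ch)
        z = self.tcn(self.inp(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])
```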
Funding: supported by the International Research Center of Big Data for Sustainable Development Goals under Grant No. CBAS2022GSP05; the Open Fund of State Key Laboratory of Remote Sensing Science under Grant No. 6142A01210404; and the Hubei Key Laboratory of Intelligent Geo-Information Processing under Grant No. KLIGIP-2022-B03.
Abstract: Named entity recognition (NER) is an important part of knowledge extraction and one of the main tasks in constructing knowledge graphs. In today's Chinese named entity recognition (CNER) task, the BERT-BiLSTM-CRF model is widely used and often yields notable results. However, recognizing each entity with high accuracy remains challenging. Many entities do not appear as single words but as parts of complex phrases, making accurate recognition difficult using word-embedding information alone, because the intricate lexical structure often impacts performance. To address this issue, we propose an improved Bidirectional Encoder Representations from Transformers (BERT) character-word conditional random field (CRF) model (BCWC). It incorporates a word-embedding model pre-trained with skip-gram with negative sampling (SGNS), alongside traditional BERT embeddings. By comparing datasets segmented with different word-segmentation tools, we obtain enhanced word-embedding features for the segmented data. These features are then processed using multi-scale convolutions and iterated dilated convolutional neural networks (IDCNNs) with varying dilation rates to capture features at multiple scales and extract diverse contextual information. Additionally, a multi-attention mechanism is employed to fuse word and character embeddings. Finally, CRFs are applied to learn sequence constraints and optimize entity-label annotations. Experiments on three public datasets demonstrate that the proposed method outperforms recent advanced baselines. BCWC addresses the challenge of recognizing complex entities by combining character-level and word-level embedding information, thereby improving CNER accuracy. Such a model holds potential for more precise knowledge extraction, such as knowledge graph construction and information retrieval, particularly in domain-specific natural language processing tasks that require high entity-recognition precision.
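Among the listed components, the iterated dilated CNN is the most mechanical, so a short sketch may help. The PyTorch block below applies 1-D convolutions with growing dilation over token embeddings, widening the receptive field while preserving sequence length; the embedding dimension and dilation rates are illustrative, and the BERT/SGNS embedding fusion is omitted.

```python
import torch
import torch.nn as nn

class IDCNN(nn.Module):
    """Iterated dilated convolutions over token embeddings (sketch)."""
    def __init__(self, dim, k=3, dilations=(1, 2, 4)):
        super().__init__()
        # padding = dilation * (k - 1) // 2 keeps the output length fixed for odd k
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=d * (k - 1) // 2, dilation=d)
            for d in dilations)

    def forward(self, x):            # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)        # Conv1d expects (batch, dim, seq_len)
        for conv in self.convs:
            h = torch.relu(conv(h))  # receptive field grows with each layer
        return h.transpose(1, 2)     # back to (batch, seq_len, dim)
```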
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 51627811, 51725702) and the Science and Technology Project of State Grid Corporation of Beijing (Grant No. SGBJDK00DWJS2100164).
Abstract: Owing to the expansion of grid interconnection, the spatiotemporal distribution characteristics of the frequency response of power systems after disturbances have become increasingly important, as they can effectively support coordinated security control. However, traditional model-based frequency-prediction methods cannot satisfactorily meet online application requirements, owing to their long calculation times and dependence on accurate power-system models. Therefore, this study presents a rolling frequency-prediction model based on a graph convolutional network (GCN) and a long short-term memory (LSTM) spatiotemporal network, named STGCN-LSTM. In the proposed method, post-disturbance measurement data from phasor measurement units are used to construct the spatiotemporal input. An improved GCN embedded with topology information extracts the spatial features, while the LSTM network extracts the temporal features. The spatiotemporal network-regression model is then trained, and asynchronous frequency-sequence prediction is realized through rolling updates of the measurement information. The proposed model achieves accurate frequency prediction by considering the spatiotemporal distribution characteristics of the frequency response. Its noise immunity and robustness are verified on the IEEE 39-bus and IEEE 118-bus systems.
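Two pieces of the method are easy to make concrete: the topology-aware normalization a GCN typically applies to the bus adjacency matrix, and the rolling update of the measurement window. The NumPy sketch below shows both under assumed shapes; `model` stands in for the trained STGCN-LSTM.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric GCN normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(len(A))                   # self-loop on every bus
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def rolling_predict(model, window, horizon):
    """Roll the PMU window forward, predicting one frequency frame at a time."""
    preds = []
    for _ in range(horizon):
        y = model(window)                    # (buses,) next-step frequencies
        preds.append(y)
        window = np.vstack([window[1:], y])  # drop oldest frame, append newest
    return np.stack(preds)                   # (horizon, buses)
```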
Funding: supported by the Open Project of the Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology (No. NLK2022-05); Central Government Guidance Funds for Local Scientific and Technological Development, China (No. Guike ZY22096024); Sichuan Natural Science Youth Fund Project (No. 2023NSFSC1366); Open Research Fund of the National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University (No. AE202209); Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (MIMS22-04); and National Natural Science Youth Foundation of China (No. 12305214).
Abstract: To correct spectral peak drift and obtain more reliable net counts, this study proposes a long short-term memory (LSTM) model fused with a convolutional neural network (CNN) that accurately estimates the parameters of nuclear pulse signals by learning from samples. A predefined mathematical model was used to generate a dataset of distorted pulse sequences, with which the CNN-LSTM model was trained; the trained model was then validated on simulated pulses. The relative errors in amplitude estimation for pulse sequences with different degrees of distortion were obtained using triangular shaping, the CNN-LSTM model, and an LSTM model. For severely distorted pulses, the relative error of the CNN-LSTM model in estimating the pulse parameters was 14.35% lower than that of the triangular-shaping algorithm; for slightly distorted pulses, it was 0.33% lower. The model was then evaluated using two performance indicators: the correction ratio, the proportion of the increase in peak area of the two characteristic-peak regions of interest (ROIs) to the peak area of the corrected characteristic-peak ROI; and the efficiency ratio, the proportion of the increase in peak area of the two characteristic-peak ROIs to the peak areas of the two shadow-peak ROIs. Ten measurements of iron ore samples indicate that approximately 86.27% of the decreased peak area of the shadow-peak ROI was corrected to the characteristic-peak ROI, and the corrected peak area amounted to approximately 1.72% of the characteristic-peak ROI's peak area. The proposed CNN-LSTM model can be applied to X-ray energy-spectrum correction, which is of great significance for X-ray spectroscopy and elemental content analyses.
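For intuition, the sketch below pairs a double-exponential pulse generator, a common stand-in for the paper's predefined mathematical model (not necessarily the one used), with a small PyTorch CNN-LSTM amplitude regressor; all shapes, time constants, and noise levels are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def synth_pulse(n=256, amp=1.0, tau_rise=5.0, tau_fall=40.0, noise=0.02, rng=None):
    """Distorted double-exponential pulse: amp * (e^{-t/tf} - e^{-t/tr}) + noise."""
    rng = rng or np.random.default_rng()
    t = np.arange(n, dtype=np.float64)
    v = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    v *= amp / v.max()                       # scale the peak to the target amplitude
    return v + rng.normal(0.0, noise, n)

class PulseCNNLSTM(nn.Module):
    """CNN front end for local pulse shape, LSTM over the trace, amplitude head."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
                                 nn.Conv1d(16, 16, 7, padding=3), nn.ReLU())
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, 1, samples)
        z = self.cnn(x).transpose(1, 2)      # (batch, samples, 16)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])         # estimated pulse amplitude
```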
Funding: supported in part by the National Science Fund under Grant No. 61273365 and the 111 Project under Grant No. B08004.
Abstract: Video description aims to automatically generate descriptive natural language for videos. Owing to the large volume of multi-modal data and the success of Deep Neural Networks (DNNs), a wide range of models have been proposed. However, previous models learn insufficient linguistic information or insufficient correlation between the visual and textual modalities. To address these problems, this paper proposes an integrated model, VD-ivt, built on Long Short-Term Memory (LSTM). It consists of three parallel channels: a primary video-description channel, a sentence-to-sentence channel for language learning, and a channel that integrates visual and textual information. The three parallel channels are connected through LSTM weight matrices during training. The VD-ivt model is evaluated on two publicly available datasets, YouTube2Text and LSMDC. Experimental results demonstrate that the proposed model outperforms the benchmark models.
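The triple-channel layout can be approximated as three LSTMs run in parallel, with the fusion channel consuming both hidden streams. The paper's cross-channel weight-matrix coupling is only imitated here by concatenation, and all dimensions and the vocabulary size are placeholders.

```python
import torch
import torch.nn as nn

class TripleChannelLSTM(nn.Module):
    """Loose sketch of a VD-ivt-style three-channel LSTM."""
    def __init__(self, vis_dim=2048, emb_dim=300, hidden=512, vocab=10000):
        super().__init__()
        self.video_lstm = nn.LSTM(vis_dim, hidden, batch_first=True)    # visual channel
        self.sent_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)     # language channel
        self.fuse_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)  # integration channel
        self.out = nn.Linear(hidden, vocab)

    def forward(self, frames, words):
        # frames: (batch, T_v, vis_dim); words: (batch, T_w, emb_dim)
        hv, _ = self.video_lstm(frames)
        hs, _ = self.sent_lstm(words)
        T = min(hv.size(1), hs.size(1))      # align the two streams in time
        hf, _ = self.fuse_lstm(torch.cat([hv[:, :T], hs[:, :T]], dim=-1))
        return self.out(hf)                  # per-step word logits
```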
Funding: supported by the Key Project of the National Natural Science Foundation of China-Civil Aviation Joint Fund under Grant No. U2033212.
Abstract: Mortar pumpability is essential in the construction industry, yet estimating it manually requires much labor and often wastes material. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to classify mortar pumpability automatically. On a dataset organized from collected mortar image sequences, the proposed model reaches an accuracy of 100% with fast convergence. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
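Since ConvLSTM2D is a stock Keras layer, the combination described above can be sketched in a few lines. Layer widths, the input resolution, and the binary pumpable/not-pumpable output are assumptions, not the paper's configuration.

```python
from tensorflow.keras import layers, models

def build_pumpability_model(frames=16, h=64, w=64, n_classes=2):
    """3D CNN front end + ConvLSTM2D over mortar image sequences (sketch)."""
    inp = layers.Input(shape=(frames, h, w, 3))
    x = layers.Conv3D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling3D(pool_size=(1, 2, 2))(x)   # downsample space, keep time
    x = layers.ConvLSTM2D(32, 3, padding="same")(x)   # last hidden feature map
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = build_pumpability_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```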