In the process of quantum key distribution (QKD), the communicating parties need to randomly determine quantum states and measurement bases. To ensure the security of key distribution, we aim to use true random sequences generated by true random number generators as the source of randomness. In practical systems, due to the difficulty of obtaining true random numbers, pseudo-random number generators are used instead. Although the random numbers generated by pseudo-random number generators are statistically random, meeting the requirements of uniform distribution and independence, they rely on an initial seed to generate the corresponding pseudo-random sequences. Attackers may predict future elements from the initial elements of the random sequence, posing a security risk to quantum key distribution. This paper analyzes the problems existing in current pseudo-random number generators and proposes corresponding attack methods and applicable scenarios based on the vulnerabilities in the pseudo-random sequence generation process. Under certain conditions, it is possible to obtain the keys of the communicating parties with very low error rates, thus effectively attacking the quantum key system. This paper presents new requirements for the use of random numbers in quantum key systems, which can effectively guide the security evaluation of quantum key distribution protocols.
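The seed-dependence weakness described here can be illustrated with a toy linear congruential generator (LCG); the parameters below are the common Numerical Recipes constants, not those of any deployed QKD system, and the basis-choice mapping is purely illustrative.

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{k+1} = (a*x_k + c) mod m."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# An eavesdropper who recovers the seed reproduces the entire basis sequence.
alice_bases = [x & 1 for x in lcg(seed=42, n=10)]   # basis choices (0/1)
eve_bases   = [x & 1 for x in lcg(seed=42, n=10)]   # identical prediction
```

Because the whole sequence is a deterministic function of the seed, statistical randomness alone offers no protection once the seed (or enough early outputs) leaks.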
In order to solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing scheme based on the firefly and pigeon-inspired algorithm (FF-PIA) is proposed to optimise the data transmission path. After the optimal number of cluster head nodes (CHs) has been obtained, this result is used to produce the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iterations to balance the trade-off between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring the algorithm can jump out of local optima. In addition, for WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset. The results show that the DCRNN model-driven data reconstruction algorithm improves both the reconstruction accuracy and the reconstruction time performance. Co-simulation of FF-PIA clustering routing and DCRNN reveals that the proposed algorithm can effectively extend the network lifetime and reduce data transmission delay.
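The Lévy flight mechanism mentioned above is commonly implemented with Mantegna's algorithm; the following sketch assumes that form (the abstract does not give FF-PIA's exact update rule), with β = 1.5 as a typical stability index.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

# Candidate update: mostly small refinements, with occasional long jumps
# that help the search escape local optima.
position = 0.0
for _ in range(100):
    position += 0.01 * levy_step()
```

The heavy-tailed step distribution is what balances local exploitation (frequent small steps) against global exploration (rare large jumps).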
Low-earth-orbit (LEO) satellite networks have become a critical component of the satellite-terrestrial integrated network (STIN) due to their superior signal quality and minimal communication latency. However, the highly dynamic nature of LEO satellites leads to limited and rapidly varying contact time between them and Earth stations (ESs), making it difficult to download massive communication and remote sensing data in a timely manner within the limited time window. To address this challenge in heterogeneous satellite networks with coexisting geostationary-earth-orbit (GEO) and LEO satellites, this paper proposes a dynamic collaborative inter-satellite data download strategy to optimize the long-term weighted energy consumption and data downloads within the constraints of on-board power, backlog stability, and time-varying contact. Specifically, Lyapunov optimization theory is applied to transform the long-term stochastic optimization problem, subject to time-varying contact time and on-board power constraints, into multiple deterministic single-time-slot problems, based on which online distributed algorithms are developed to enable each satellite to independently obtain its transmit power allocation and data processing decisions in closed form. Finally, the simulation results demonstrate the superiority of the proposed scheme over benchmarks, e.g., achieving asymptotic optimality of the weighted energy consumption and data downloads while maintaining stability of the on-board backlog.
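The Lyapunov transformation described above reduces the long-term problem to a per-slot trade-off; a minimal drift-plus-penalty sketch, with a hypothetical rate model and units (not the paper's closed-form solution), might look like:

```python
import math

def single_slot_decision(Q, V, powers, rate, slot=1.0):
    """Drift-plus-penalty for one slot: pick transmit power p from a discrete
    set to minimize V*energy(p) - Q*data_served(p), where Q is the backlog."""
    return min(powers, key=lambda p: V * p * slot - Q * rate(p) * slot)

Q = 50.0                                 # current on-board backlog (toy units)
V = 10.0                                 # energy/backlog trade-off parameter
rate = lambda p: math.log2(1 + 5 * p)    # toy contact-rate model
p_star = single_slot_decision(Q, V, [0.0, 0.5, 1.0, 2.0], rate)
```

A large backlog pushes the decision toward high power (drain the queue); a small backlog lets the energy penalty dominate, which is exactly the stability-versus-energy balance the strategy optimizes.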
Site index (SI) is determined from the top height development and is a proxy for forest productivity, defined as the expected top height for a given species at a certain index age. In Norway, an index age of 40 years is used. By using bi-temporal airborne laser scanning (ALS) data, SI can be determined using models estimated from SI observed on field plots (the direct approach) or from predicted top heights at two points in time (the height differential approach). Time series of ALS data may enhance SI determination compared to conventional methods used in operational forest inventory by providing more detailed information about the top height development. We used longitudinal data comprising spatially consistent field and ALS data collected from training plots in 1999, 2010, and 2022 to determine SI using the direct and height differential approaches with all combinations of years, and performed an external validation. We also evaluated the use of data assimilation. Values of root mean square error obtained from external validation were in the ranges of 16.3%–21.4% and 12.8%–20.6% of the mean field-registered SI for the direct approach and the height differential approach, respectively. There were no statistically significant effects of time series length or the number of points in time on the obtained accuracies. Data assimilation did not result in any substantial improvement in the obtained accuracies. Although a time series of ALS data did not yield greater accuracies compared to using only two points in time, a larger proportion of the study area could be used in ALS-based determination of SI when a time series was available. This was because areas that were unsuitable for SI determination between two points in time could be subject to SI determination based on data from another part of the time series.
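The reported accuracies are RMSE values expressed as a percentage of the mean field-registered SI; a minimal sketch of that metric (with hypothetical SI values, not the study's data):

```python
import math

def relative_rmse(predicted, observed):
    """RMSE expressed as a percentage of the mean observed value."""
    n = len(observed)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# Hypothetical SI values (metres at index age 40), for illustration only
obs  = [14.0, 17.0, 20.0, 23.0]
pred = [15.0, 16.0, 22.0, 21.5]
rel = relative_rmse(pred, obs)
```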
Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing semantic accuracy only. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism, thus more effectively reaping the benefits of both SemCom and HARQ. In this paper, built on top of Swin-transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which capably digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
This study presents a machine learning-based method for predicting the fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the artificial neural network (ANN) model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase with reduced thickness and with increased length and diameter. The model's predictive capability is demonstrated through accurate predictions for both training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proven helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data is unbalanced, the support vector machine-based Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs bidirectional long short-term memory (BiLSTM) for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To ensure the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
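The paper applies SVM-SMOTE; the core SMOTE idea it builds on, generating synthetic minority samples by interpolating toward a nearest neighbour, can be sketched as follows (plain SMOTE, without the SVM borderline selection):

```python
import random

def smote_sample(minority, k=2):
    """Generate one synthetic minority sample by interpolating toward a
    random one of the k nearest neighbours (basic SMOTE idea)."""
    x = random.choice(minority)
    # k nearest neighbours by squared Euclidean distance, excluding x itself
    neighbours = sorted((p for p in minority if p is not x),
                        key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
    nb = random.choice(neighbours)
    gap = random.random()                       # interpolation fraction in [0, 1)
    return tuple(a + gap * (b - a) for a, b in zip(x, nb))

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3), (1.1, 1.1)]
synthetic = smote_sample(minority)
```

The synthetic point always lies on a segment between two real minority samples, so oversampling densifies the minority region rather than duplicating records.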
Accurate prediction of formation pore pressure is essential to predict fluid flow and manage hydrocarbon production in petroleum engineering. Recently, deep learning techniques have received growing interest due to their great potential for pore pressure prediction. However, most traditional deep learning models struggle to generalize. To fill this technical gap, in this work we developed a new adaptive physics-informed deep learning model with high generalization capability to predict pore pressure values directly from seismic data. Specifically, the new model, named CGP-NN, consists of a novel parametric feature extraction approach (1DCPP), a stacked multilayer gated recurrent model (multilayer GRU), and an adaptive physics-informed loss function. Through training, the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction. The CGP-NN model generalizes best when the physics-related metric λ = 0.5. A hybrid approach combining the Eaton and Bowers methods is also proposed to build machine-learnable labels, addressing the problem of scarce labels. To validate the developed model and methodology, a case study on a complex reservoir in the Tarim Basin was performed, demonstrating high accuracy in the pore pressure prediction of new wells along with strong generalization ability. The adaptive physics-informed deep learning approach presented here has potential application in the prediction of pore pressures coupled with multiple genesis mechanisms using seismic data.
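The abstract does not give the exact form of the adaptive physics-informed loss; a generic sketch, where `physics_pred` could come from an Eaton- or Bowers-style estimate and `lam` plays the role of the physics-related metric λ, might look like:

```python
def physics_informed_loss(pred, label, physics_pred, lam=0.5):
    """Combine a data-misfit term with a physics-model-misfit term,
    weighted by lam (a generic sketch, not the paper's exact loss)."""
    n = len(pred)
    data_term = sum((p - y) ** 2 for p, y in zip(pred, label)) / n
    phys_term = sum((p - q) ** 2 for p, q in zip(pred, physics_pred)) / n
    return (1 - lam) * data_term + lam * phys_term
```

At `lam = 0` the model fits labels only; at `lam = 1` it is fully constrained by the physical model; intermediate values trade data fidelity against physical consistency.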
Integrated data and energy transfer (IDET) enables electromagnetic waves to deliver wireless energy simultaneously with data for low-power devices. In this paper, an energy harvesting modulation (EHM)-assisted multi-user IDET system is studied, where all the signals received at the users are exploited for energy harvesting without degrading the wireless data transfer (WDT) performance. The joint IDET performance is then analysed theoretically by conceiving a practical time-dependent wireless channel. With the aid of an AO-based algorithm, the average effective data rate among users is maximized while ensuring the BER and wireless energy transfer (WET) performance. Simulation results validate and evaluate the IDET performance of the EHM-assisted system, and also demonstrate that the optimal number of user clusters and IDET time slots should be allocated in order to improve the WET and WDT performance.
A benchmark experiment on ^(238)U slab samples was conducted using a deuterium-tritium neutron source at the China Institute of Atomic Energy. The leakage neutron spectra within the energy range of 0.8–16 MeV at 60° and 120° were measured using the time-of-flight method. The samples were prepared as rectangular slabs with a 30 cm square base and thicknesses of 3, 6, and 9 cm. The leakage neutron spectra were also calculated using the MCNP-4C program based on the latest evaluated files of ^(238)U neutron data from CENDL-3.2, ENDF/B-VIII.0, JENDL-5.0, and JEFF-3.3. Based on the comparison, the deficiencies and improvements in the ^(238)U evaluated nuclear data were analyzed. The results showed the following: (1) the calculated results for CENDL-3.2 significantly overestimated the measurements in the energy interval of elastic scattering at 60° and 120°; (2) the calculated results for CENDL-3.2 overestimated the measurements in the energy interval of inelastic scattering at 120°; (3) the calculated results for CENDL-3.2 significantly overestimated the measurements in the 3–8.5 MeV energy interval at 60° and 120°; (4) the calculated results with JENDL-5.0 were generally consistent with the measurements.
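In the time-of-flight method, neutron energy is recovered from the flight time over a known path; a non-relativistic sketch (the flight path and timing below are hypothetical, and relativistic corrections matter toward the top of the 0.8–16 MeV range):

```python
def tof_energy_mev(flight_path_m, time_ns):
    """Non-relativistic neutron energy from time-of-flight.
    Sketch only: above ~10 MeV the relativistic correction is non-negligible."""
    M_N = 1.674927e-27   # neutron mass, kg
    MEV = 1.602177e-13   # joules per MeV
    v = flight_path_m / (time_ns * 1e-9)
    return 0.5 * M_N * v ** 2 / MEV

# e.g. a D-T source neutron (~14.1 MeV) over a hypothetical 5 m flight path
e = tof_energy_mev(5.0, 96.5)
```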
For the goals of security and privacy preservation, we propose a blind batch encryption- and public ledger-based data sharing protocol that allows the integrity of sensitive data to be audited by a public ledger and allows privacy information to be preserved. Data owners can tightly manage their data with efficient revocation and only grant one-time adaptive access to fulfill a requester's needs. We prove that our protocol is semantically secure, blind, and secure against oblivious requesters and malicious file keepers. We also provide security analysis in the context of four typical attacks.
The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive structure for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this research gap by introducing the innovative concept of "data components." It proposes a graph-theoretic representation model that presents a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
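Shannon entropy is the natural base quantity for such an information measurement model; the paper's component-level measure is not specified, but a minimal entropy calculation over a discrete data column looks like:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of a discrete data column."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h = shannon_entropy(["a", "a", "b", "b"])   # two equiprobable symbols: 1 bit
```

Higher entropy means a component carries more information per element, which is the kind of quantity a significance-based pricing mechanism could build on.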
In the quantum Monte Carlo (QMC) method, the pseudo-random number generator (PRNG) plays a crucial role in determining the computation time. However, the hidden structure of the PRNG may lead to serious issues such as the breakdown of the Markov process. Here, we systematically analyze the performance of different PRNGs on the widely used QMC method known as the stochastic series expansion (SSE) algorithm. To quantitatively compare them, we introduce a quantity called QMC efficiency that can effectively reflect the efficiency of the algorithms. After testing several representative observables of the Heisenberg model in one and two dimensions, we recommend the linear congruential generator as the best choice of PRNG. Our work not only helps improve the performance of the SSE method but also sheds light on other Markov-chain-based numerical algorithms.
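A linear congruential generator, the PRNG recommended above, is a one-line recurrence; the sketch below uses Knuth's MMIX constants and feeds a Metropolis-style accept/reject step of the kind that drives SSE updates (the coupling to SSE is illustrative, not the paper's implementation):

```python
def lcg_uniform(seed, a=6364136223846793005, c=1442695040888963407, m=2**64):
    """Endless stream of floats in [0, 1) from a 64-bit LCG
    (multiplier and increment are Knuth's MMIX constants)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

rng = lcg_uniform(seed=12345)

def accept(weight_ratio, rng=rng):
    """Metropolis acceptance: accept with probability min(1, weight_ratio)."""
    return next(rng) < min(1.0, weight_ratio)
```

The simplicity of the recurrence is why LCGs are fast; whether their lattice structure is benign for a given Markov-chain update is exactly the kind of question the QMC-efficiency comparison addresses.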
With the rapid development of information technology, IoT devices play a major role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and preserving dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant data detection method that meets privacy protection requirements. By scanning the ciphertext, it determines whether each sub-message of the data in an edge node meets the requirements of hot data. It has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that do not meet the hot data requirements, our paper proposes a redundant data deletion scheme that preserves dynamic data integrity. We use Content Extraction Signatures (CES) to generate a signature over the remaining hot data after the redundant data are deleted. The feasibility of the scheme is demonstrated through security analysis and efficiency analysis.
Guest Editors: Prof. Ling Tian (University of Electronic Science and Technology of China, lingtian@uestc.edu.cn), Prof. Jian-Hua Tao (Tsinghua University, jhtao@tsinghua.edu.cn), and Dr. Bin Zhou (National University of Defense Technology, binzhou@nudt.edu.cn). Since the concept of "Big Data" was first introduced in Nature in 2008, it has been widely applied in fields such as business, healthcare, national defense, education, transportation, and security. With the maturity of artificial intelligence technology, big data analysis techniques tailored to various fields have made significant progress, but they still face many challenges in terms of data quality, algorithms, and computing power.
Achieving a balance between accuracy and efficiency in target detection applications is an important research topic. To detect abnormal targets on power transmission lines at the power edge, this paper proposes an effective floating-point quantization method for reducing the data bit width of the network. By performing exponent pre-alignment and mantissa shifting operations, this method avoids the frequent alignment operations of standard floating-point data, thereby further reducing the exponent and mantissa bit widths input into the training process. This enables training low-bit-width models with low hardware resource consumption while maintaining accuracy. Experimental tests were conducted on a dataset of real-world images of abnormal targets on transmission lines. The results indicate that, while essentially maintaining accuracy, the proposed method can significantly reduce the data bit width compared with single-precision data. This suggests that the proposed method can markedly enhance the real-time detection of abnormal targets in transmission circuits. Furthermore, a qualitative analysis indicated that the proposed quantization method is particularly suitable for hardware architectures that integrate storage and computation, and that it exhibits good transferability.
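The paper's pre-alignment scheme differs in detail, but the basic effect of reducing mantissa bit width can be sketched by zeroing low-order mantissa bits of a float32 value:

```python
import struct

def truncate_mantissa(x, keep_bits):
    """Zero out low-order mantissa bits of a float32 value (a sketch of
    reduced-bit-width storage; not the paper's pre-alignment algorithm)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    drop = 23 - keep_bits              # float32 carries a 23-bit mantissa
    bits &= ~((1 << drop) - 1)         # clear the discarded low-order bits
    return struct.unpack(">f", struct.pack(">I", bits))[0]

y = truncate_mantissa(3.14159265, keep_bits=8)   # coarse but close to pi
```

With 8 mantissa bits the worst-case relative error is on the order of 2^-8, which is the accuracy-versus-bit-width trade-off the method exploits.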
Research data infrastructures form the cornerstone in both cyber and physical spaces, driving the progression of the data-intensive scientific research paradigm. This opinion paper presents an overview of global research data infrastructure, drawing insights from national roadmaps and strategic documents related to research data infrastructure. It emphasizes the pivotal role of research data infrastructures by delineating four new missions aimed at positioning them at the core of the current scientific research and communication ecosystem. The four new missions of research data infrastructures are: (1) as a pioneer, to transcend disciplinary borders and address complex, cutting-edge scientific and social challenges with problem- and data-oriented insights; (2) as an architect, to establish a digital, intelligent, flexible research and knowledge services environment; (3) as a platform, to foster high-end academic communication; (4) as a coordinator, to balance scientific openness with ethical needs.
Traditional IoT systems suffer from high equipment management costs and difficulty in trustworthy data sharing caused by centralization. Blockchain provides a feasible research direction to solve these problems. The main challenge at this stage is to integrate blockchain into resource-constrained IoT devices and ensure that the data of the IoT system are credible. We provide a general framework for intelligent IoT data acquisition and sharing in an untrusted environment based on the blockchain, where gateways become Oracles. A distributed Oracle network based on a Byzantine Fault Tolerant algorithm is used to provide trusted data to the blockchain, making intelligent IoT data trustworthy. An aggregation contract is deployed to collect data from the various Oracles and share the credible data with all on-chain users. We also propose a gateway data aggregation scheme based on the REST API event publishing/subscribing mechanism, which uses SQL to achieve flexible data aggregation. The experimental results show that the proposed scheme can alleviate the limited performance of IoT equipment, make data reliable, and meet the diverse data needs on the chain.
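The SQL-based aggregation idea can be sketched with an in-memory database; the table layout and metric names below are hypothetical, not the paper's gateway schema:

```python
import sqlite3

# A gateway collects raw device readings and aggregates them with SQL
# before the result is published for on-chain consumers.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (device TEXT, metric TEXT, value REAL)")
con.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("d1", "temp", 21.5), ("d1", "temp", 22.5), ("d2", "temp", 20.0)],
)
# Flexible aggregation is just a query away: per-device averages here,
# but any GROUP BY / filter combination works without changing the pipeline.
rows = con.execute(
    "SELECT device, AVG(value) FROM readings GROUP BY device ORDER BY device"
).fetchall()
# rows == [("d1", 22.0), ("d2", 20.0)]
```

Expressing aggregation in SQL is what gives the scheme its flexibility: new data needs on the chain become new queries, not new gateway firmware.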
A modified multiple-component scattering power decomposition for analyzing polarimetric synthetic aperture radar (PolSAR) data is proposed. The modified decomposition involves two distinct steps. Firstly, eigenvectors of the coherency matrix are used to modify the scattering models. Secondly, the entropy and anisotropy of targets are used to improve the volume scattering power. While guaranteeing high double-bounce scattering power in urban areas, the proposed algorithm effectively improves the volume scattering power of vegetation areas. The efficacy of the modified multiple-component scattering power decomposition is validated using actual AIRSAR PolSAR data. The scattering power obtained by decomposing the original coherency matrix and the coherency matrix after orientation angle compensation is compared with three algorithms. Results from the experiment demonstrate that the proposed decomposition yields more effective scattering power for different PolSAR data sets.
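The entropy and anisotropy used in the second step follow from the eigenvalues of the coherency matrix; a minimal sketch (with a toy diagonal matrix standing in for real PolSAR data):

```python
import numpy as np

def entropy_anisotropy(T):
    """Entropy H and anisotropy A from a 3x3 Hermitian coherency matrix."""
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # l1 >= l2 >= l3 >= 0
    p = lam / lam.sum()                          # pseudo-probabilities
    H = -np.sum(p * np.log(p) / np.log(3))       # entropy, log base 3
    A = (lam[1] - lam[2]) / (lam[1] + lam[2])    # anisotropy of minor eigenvalues
    return H, A

T = np.diag([0.6, 0.3, 0.1])   # toy coherency matrix, for illustration only
H, A = entropy_anisotropy(T)
```

H near 0 indicates a single dominant scattering mechanism, H near 1 a depolarized (e.g. vegetated) target, and A separates the secondary mechanisms, which is why the pair is useful for refining the volume scattering power.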
Funding: partially supported by the National Natural Science Foundation of China (62161016), the Key Research and Development Project of Lanzhou Jiaotong University (ZDYF2304), the Beijing Engineering Research Center of High-velocity Railway Broadband Mobile Communications (BHRC-2022-1), and Beijing Jiaotong University.
Funding: supported by the National Natural Science Foundation of China under Grant 62371098, the National Key Laboratory of Wireless Communications Foundation under Grant IFN20230203, and the National Key Research and Development Program of China under Grant 2021YFB2900404.
Funding: part of the Centre for Research-based Innovation SmartForest: Bringing Industry 4.0 to the Norwegian forest sector (NFR SFI project no. 309671, smartforest.no).
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2024YFE0200600; in part by the National Natural Science Foundation of China under Grant 62071425; in part by the Zhejiang Key Research and Development Plan under Grant 2022C01093; in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LR23F010005; in part by the National Key Laboratory of Wireless Communications Foundation under Grant 2023KP01601; and in part by the Big Data and Intelligent Computing Key Lab of CQUPT under Grant BDIC-2023-B-001.
Abstract: Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) thus becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism so as to more effectively reap the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited-bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
Funding: Supported by the Poongsan-KAIST Future Research Center Project and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. 2023R1A2C2005661).
Abstract: This study presents a machine learning-based method for predicting the fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase with reduced casing thickness and with increased length and diameter. The model's predictive capability is demonstrated through accurate predictions for both training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
Abstract: Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proven helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data are unbalanced, the support vector machine-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs bidirectional long short-term memory (BiLSTM) for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To ensure the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
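The core idea of SMOTE-style oversampling is to synthesize minority samples on segments between a minority point and one of its nearest minority neighbours. A minimal, self-contained sketch of that interpolation step (simplified: SVM-SMOTE additionally focuses generation near the SVM decision boundary, which is omitted here):

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling sketch: each synthetic point lies on the
    segment between a random minority sample and one of its k nearest
    minority neighbours (Euclidean distance, brute force for clarity)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_pts = smote_like(minority, n_new=5)
```

Because each synthetic point is a convex combination of two real minority samples, the balanced dataset stays inside the minority class's convex hull, which is what lets the downstream BiLSTM classifier see a less skewed class distribution without fabricated outliers.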
Funding: Funded by the National Natural Science Foundation of China (General Program: No. 52074314, No. U19B6003-05) and the National Key Research and Development Program of China (2019YFA0708303-05).
Abstract: Accurate prediction of formation pore pressure is essential to predict fluid flow and manage hydrocarbon production in petroleum engineering. Recently, deep learning techniques have been receiving more interest due to their great potential for pore pressure prediction. However, most traditional deep learning models are less efficient in addressing generalization problems. To fill this technical gap, in this work we developed a new adaptive physics-informed deep learning model with high generalization capability to predict pore pressure values directly from seismic data. Specifically, the new model, named CGP-NN, consists of a novel parametric feature extraction approach (1DCPP), a stacked multilayer gated recurrent model (multilayer GRU), and an adaptive physics-informed loss function. Through machine training, the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction. The CGP-NN model generalizes best when the physics-related metric λ = 0.5. A hybrid approach combining the Eaton and Bowers methods is also proposed to build machine-learnable labels, solving the problem of few labels. To validate the developed model and methodology, a case study on a complex reservoir in the Tarim Basin was performed to demonstrate the high accuracy of pore pressure prediction for new wells along with the strong generalization ability. The adaptive physics-informed deep learning approach presented here has potential application in the prediction of pore pressures coupled with multiple genesis mechanisms using seismic data.
Funding: Supported in part by the MOST Major Research and Development Project (Grant No. 2021YFB2900204); the National Natural Science Foundation of China (NSFC) (Grant No. 62201123, No. 62132004, No. 61971102); the China Postdoctoral Science Foundation (Grant No. 2022TQ0056); in part by the Sichuan Science and Technology Program (Grant No. 2022YFH0022); the Sichuan Major R&D Project (Grant No. 22QYCX0168); and the Municipal Government of Quzhou (Grant No. 2022D031).
Abstract: Integrated data and energy transfer (IDET) enables electromagnetic waves to transmit wireless energy to low-power devices while delivering data. In this paper, an energy harvesting modulation (EHM)-assisted multi-user IDET system is studied, where all signals received at the users are exploited for energy harvesting without degrading wireless data transfer (WDT) performance. The joint IDET performance is then analysed theoretically by conceiving a practical time-dependent wireless channel. With the aid of an AO-based algorithm, the average effective data rate among users is maximized while ensuring the BER and wireless energy transfer (WET) performance. Simulation results validate and evaluate the IDET performance of the EHM-assisted system and demonstrate that the optimal numbers of user clusters and IDET time slots should be allocated in order to improve the WET and WDT performance.
Funding: This work was supported by the general program (No. 1177531) and joint funding (No. U2067205) from the National Natural Science Foundation of China.
Abstract: A benchmark experiment on ²³⁸U slab samples was conducted using a deuterium-tritium neutron source at the China Institute of Atomic Energy. The leakage neutron spectra within the 0.8–16 MeV energy range at 60° and 120° were measured using the time-of-flight method. The samples were prepared as rectangular slabs with a 30 cm square base and thicknesses of 3, 6, and 9 cm. The leakage neutron spectra were also calculated using the MCNP-4C program based on the latest evaluated files of ²³⁸U neutron data from CENDL-3.2, ENDF/B-VIII.0, JENDL-5.0, and JEFF-3.3. Based on the comparison, the deficiencies and improvements in the ²³⁸U evaluated nuclear data were analyzed. The results showed the following. (1) The calculated results for CENDL-3.2 significantly overestimated the measurements in the elastic scattering energy interval at 60° and 120°. (2) The calculated results for CENDL-3.2 overestimated the measurements in the inelastic scattering energy interval at 120°. (3) The calculated results for CENDL-3.2 significantly overestimated the measurements in the 3–8.5 MeV energy interval at 60° and 120°. (4) The calculated results with JENDL-5.0 were generally consistent with the measurements.
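The time-of-flight method recovers neutron energy from the measured flight time over a known path. A minimal non-relativistic sketch of that conversion (the 6 m flight path and 200 ns flight time are assumed illustrative values, not the experiment's geometry; at these energies the relativistic correction is below about 1%):

```python
# Classical kinetic-energy estimate for time-of-flight neutron spectroscopy.
M_N = 939.565    # neutron rest mass in MeV/c^2
C = 299792458.0  # speed of light in m/s

def tof_energy_mev(flight_path_m, time_s):
    """Non-relativistic E = (1/2) m v^2 with v = L/t, expressed in MeV."""
    beta = flight_path_m / time_s / C  # v/c
    return 0.5 * M_N * beta ** 2

# A neutron covering an assumed 6 m path in 200 ns is roughly a 4.7 MeV neutron.
e = tof_energy_mev(6.0, 200e-9)
```

The inverse-square dependence on time is why timing resolution dominates the energy resolution of the measured leakage spectra: halving the flight time quadruples the inferred energy.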
Funding: Partially supported by the National Natural Science Foundation of China under Grant No. 62372245; the Foundation of the Yunnan Key Laboratory of Blockchain Application Technology under Grant 202105AG070005; in part by the Foundation of the State Key Laboratory of Public Big Data; and in part by the Foundation of the Key Laboratory of Computational Science and Application of Hainan Province under Grant JSKX202202.
Abstract: For the goals of security and privacy preservation, we propose a blind-batch-encryption- and public-ledger-based data sharing protocol that allows the integrity of sensitive data to be audited by a public ledger and privacy information to be preserved. Data owners can tightly manage their data with efficient revocation, granting only one-time adaptive access for the fulfillment of the requester. We prove that our protocol is semantically secure, blind, and secure against oblivious requesters and malicious file keepers. We also provide security analysis in the context of four typical attacks.
Funding: Supported by the EU H2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement (Project DEEP, Grant number: 101109045); the National Key R&D Program of China under Grant number 2018YFB1800804; the National Natural Science Foundation of China (Nos. NSFC 61925105 and 62171257); the Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute; and the Fundamental Research Funds for the Central Universities, China (No. FRF-NP-20-03).
Abstract: The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive structure for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this research gap by introducing the innovative concept of "data components." It proposes a graph-theoretic representation model that presents a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
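Information-entropy measurement of a data component can be illustrated with the standard Shannon formula over the component's empirical value distribution (the uniform-count example is an illustrative assumption; the paper's own measurement model over graph-structured components is more elaborate):

```python
import math

def shannon_entropy(counts):
    """Shannon entropy in bits of an empirical distribution given raw counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# A component whose four values occur equally often carries log2(4) = 2 bits,
# while a heavily skewed component carries less information and would be
# priced lower under a significance-based mechanism.
uniform = shannon_entropy([25, 25, 25, 25])
skewed = shannon_entropy([97, 1, 1, 1])
```

The ordering (uniform above skewed) is the property a significance-based pricing mechanism relies on: components that resolve more uncertainty carry more informational value.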
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 12274046, 11874094, and 12147102); the Chongqing Natural Science Foundation (Grant No. CSTB2022NSCQ-JQX0018); and the Fundamental Research Funds for the Central Universities (Grant No. 2021CDJZYJH-003).
Abstract: In the quantum Monte Carlo (QMC) method, the pseudo-random number generator (PRNG) plays a crucial role in determining the computation time. However, the hidden structure of the PRNG may lead to serious issues such as the breakdown of the Markov process. Here, we systematically analyze the performance of different PRNGs on the widely used QMC method known as the stochastic series expansion (SSE) algorithm. To compare them quantitatively, we introduce a quantity called QMC efficiency that can effectively reflect the efficiency of the algorithms. After testing several representative observables of the Heisenberg model in one and two dimensions, we recommend the linear congruential generator as the best choice of PRNG. Our work not only helps improve the performance of the SSE method but also sheds light on other Markov-chain-based numerical algorithms.
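The recommended linear congruential generator is the simplest PRNG family: x_{n+1} = (a·x_n + c) mod m. A minimal sketch usable as a drop-in uniform source for Monte Carlo acceptance steps (the a, c, m constants below are the classic Numerical Recipes choice, an assumption; the paper does not specify its parameters):

```python
class LCG:
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""

    def __init__(self, seed=12345, a=1664525, c=1013904223, m=2**32):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next_uint(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def uniform(self):
        """Uniform float in [0, 1), e.g. for Metropolis acceptance tests."""
        return self.next_uint() / self.m

rng = LCG(seed=42)
samples = [rng.uniform() for _ in range(10000)]
mean = sum(samples) / len(samples)  # close to 0.5 for a well-behaved stream
```

Its appeal for SSE-type simulations is speed (one multiply-add per draw) and a fully transparent state, which makes hidden-structure artifacts easier to audit than in more complex generators.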
Funding: Sponsored by the National Natural Science Foundation of China under Grant Nos. 62172353, 62302114, U20B2046, and 62172115; the Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of the Ministry of Education (Nos. 1331007 and 1311022); the Natural Science Foundation of the Jiangsu Higher Education Institutions under Grant No. 17KJB520044; and the Six Talent Peaks Project in Jiangsu Province (No. XYDXX-108).
Abstract: With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, the current scheme cannot guarantee that every sub-message in a complete data item stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant data detection method that meets privacy protection requirements. By scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. It has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that do not meet the requirements of hot data, our paper proposes a redundant data deletion scheme that preserves dynamic data integrity. We use Content Extraction Signature (CES) to generate the signature of the remaining hot data after the redundant data are deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
Abstract: Guest Editors: Prof. Ling Tian (University of Electronic Science and Technology of China, lingtian@uestc.edu.cn), Prof. Jian-Hua Tao (Tsinghua University, jhtao@tsinghua.edu.cn), and Dr. Bin Zhou (National University of Defense Technology, binzhou@nudt.edu.cn). Since the concept of "Big Data" was first introduced in Nature in 2008, it has been widely applied in fields such as business, healthcare, national defense, education, transportation, and security. With the maturity of artificial intelligence technology, big data analysis techniques tailored to various fields have made significant progress, but they still face many challenges in terms of data quality, algorithms, and computing power.
Funding: Supported by the State Grid Corporation Basic Foresight Project (5700-202255308A-2-0-QZ).
Abstract: Achieving a balance between accuracy and efficiency in target detection applications is an important research topic. To detect abnormal targets on power transmission lines at the power edge, this paper proposes an effective floating-point quantization method for reducing the data bit width of the network. By performing exponent prealignment and mantissa shifting operations, this method avoids the frequent alignment operations of standard floating-point data, thereby further reducing the exponent and mantissa bit widths input into the training process. This enables training low-bit-width models with low hardware-resource consumption while maintaining accuracy. Experimental tests were conducted on a dataset of real-world images of abnormal targets on transmission lines. The results indicate that, while maintaining accuracy at a basic level, the proposed method can significantly reduce the data bit width compared with single-precision data. This suggests that the proposed method has a marked ability to enhance the real-time detection of abnormal targets in transmission circuits. Furthermore, a qualitative analysis indicated that the proposed quantization method is particularly suitable for hardware architectures that integrate storage and computation, and that it exhibits good transferability.
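The exponent-prealignment idea can be sketched with `math.frexp`: give a whole block of values one shared exponent and shift each mantissa onto that common grid, so later accumulation needs no per-element alignment. The shared-exponent block format below is an assumption about one plausible realization, not the paper's exact scheme:

```python
import math

def block_quantize(values, mantissa_bits=8):
    """Quantize a block of floats to one shared exponent plus short integer
    mantissas. Pre-aligning to the block's maximum exponent means integer
    adds suffice downstream, with no per-element exponent alignment."""
    shared_exp = max(math.frexp(v)[1] for v in values if v != 0.0)
    scale = 2.0 ** (shared_exp - mantissa_bits)
    mantissas = [round(v / scale) for v in values]  # the mantissa shift
    return shared_exp, mantissas

def block_dequantize(shared_exp, mantissas, mantissa_bits=8):
    scale = 2.0 ** (shared_exp - mantissa_bits)
    return [m * scale for m in mantissas]

exp_, ms = block_quantize([0.75, -0.125, 0.03125])
restored = block_dequantize(exp_, ms)   # exact here: all inputs fit the grid
```

Values whose exponents sit far below the block maximum lose low-order mantissa bits in the shift, which is the accuracy/bit-width trade-off the abstract describes.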
Funding: Supported by the National Social Science Fund of China (Grant No. 22CTQ031) and the Special Project on Library Capacity Building of the Chinese Academy of Sciences (Grant No. E2290431).
Abstract: Research data infrastructures form the cornerstone in both cyber and physical spaces, driving the progression of the data-intensive scientific research paradigm. This opinion paper presents an overview of global research data infrastructure, drawing insights from national roadmaps and strategic documents related to research data infrastructure. It emphasizes the pivotal role of research data infrastructures by delineating four new missions aimed at positioning them at the core of the current scientific research and communication ecosystem. The four new missions of research data infrastructures are: (1) as a pioneer, to transcend disciplinary borders and address complex, cutting-edge scientific and social challenges with problem- and data-oriented insights; (2) as an architect, to establish a digital, intelligent, flexible research and knowledge-services environment; (3) as a platform, to foster high-end academic communication; (4) as a coordinator, to balance scientific openness with ethical needs.
Funding: Supported by the open research fund of the Key Lab of Broadband Wireless Communication and Sensor Network Technology (Nanjing University of Posts and Telecommunications), Ministry of Education (No. JZNY202114), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX210734).
Abstract: Traditional IoT systems suffer from high equipment management costs and difficulty in trustworthy data sharing caused by centralization. Blockchain provides a feasible research direction for solving these problems. The main challenge at this stage is to integrate blockchain with resource-constrained IoT devices and ensure that the data of the IoT system are credible. We provide a general framework for intelligent IoT data acquisition and sharing in an untrusted environment based on the blockchain, where gateways become Oracles. A distributed Oracle network based on a Byzantine fault-tolerant algorithm is used to provide trusted data to the blockchain, making intelligent IoT data trustworthy. An aggregation contract is deployed to collect data from the various Oracles and share the credible data with all on-chain users. We also propose a gateway data aggregation scheme based on the REST API event publishing/subscribing mechanism, which uses SQL to achieve flexible data aggregation. The experimental results show that the proposed scheme can alleviate the limited performance of IoT equipment, make data reliable, and meet the diverse data needs on the chain.
Funding: Supported by the National Natural Science Foundation of China (62376214); the Natural Science Basic Research Program of Shaanxi (2023-JC-YB-533); and the Foundation of the Ministry of Education Key Laboratory of Cognitive Radio and Information Processing (Guilin University of Electronic Technology) (CRKL200203).
Abstract: A modified multiple-component scattering power decomposition for analyzing polarimetric synthetic aperture radar (PolSAR) data is proposed. The modified decomposition involves two distinct steps. First, eigenvectors of the coherency matrix are used to modify the scattering models. Second, the entropy and anisotropy of targets are used to improve the volume scattering power. While guaranteeing high double-bounce scattering power in urban areas, the proposed algorithm effectively improves the volume scattering power of vegetation areas. The efficacy of the modified multiple-component scattering power decomposition is validated using actual AIRSAR PolSAR data. The scattering power obtained by decomposing the original coherency matrix and the coherency matrix after orientation angle compensation is compared with three algorithms. Results from the experiment demonstrate that the proposed decomposition yields more effective scattering power for different PolSAR data sets.
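The entropy and anisotropy used in the second step come from the eigenvalue spectrum of the 3×3 coherency matrix; the standard H/A definitions can be sketched directly from three eigenvalues (the sample eigenvalue triples below are illustrative assumptions, not values from the AIRSAR data):

```python
import math

def entropy_anisotropy(eigvals):
    """Polarimetric entropy H and anisotropy A from the coherency-matrix
    eigenvalues (non-negative; sorted descending internally)."""
    l1, l2, l3 = sorted(eigvals, reverse=True)
    total = l1 + l2 + l3
    ps = [l / total for l in (l1, l2, l3)]
    # log base 3 normalizes H into [0, 1] for a 3x3 coherency matrix.
    H = -sum(p * math.log(p, 3) for p in ps if p > 0)
    # Anisotropy compares the two minor scattering mechanisms.
    A = (l2 - l3) / (l2 + l3) if (l2 + l3) > 0 else 0.0
    return H, A

H_veg, A_veg = entropy_anisotropy([1.0, 0.9, 0.8])    # near-random scattering
H_urb, A_urb = entropy_anisotropy([1.0, 0.05, 0.01])  # one dominant mechanism
```

High-H, low-A pixels behave like volume scatterers (vegetation), while low-H pixels are dominated by a single mechanism (e.g., urban double bounce), which is what lets these quantities steer the volume scattering power in the decomposition.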