Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposed a grid-AIS method for the global variance reduction problem based on the AIS method, which was implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-III international benchmark problem and a self-shielding calculation example. The results from the VENUS-III benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids, decreasing from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction. This demonstrates that the grid-AIS method is effective for the global variance reduction problem. The results of the self-shielding calculation demonstrate that the grid-AIS method produced accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
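As a small illustration of the uniformity metric quoted above (the variance of the per-cell statistical errors over a mesh tally), the sketch below computes relative errors per mesh cell from batch tallies and then the variance of those errors. The tally layout and batch structure are assumptions for illustration, not MCShield's actual data model.

    import numpy as np

    def global_uniformity_metric(batch_tallies):
        """batch_tallies: array of shape (n_batches, n_cells) with per-batch
        tallies for every mesh cell.  Returns the per-cell relative statistical
        errors and their variance, a simple global figure of how uniform the
        errors are across the mesh."""
        n_batches = batch_tallies.shape[0]
        mean = batch_tallies.mean(axis=0)
        std_err = batch_tallies.std(axis=0, ddof=1) / np.sqrt(n_batches)
        rel_err = np.where(mean > 0, std_err / mean, np.inf)
        return rel_err, np.var(rel_err[np.isfinite(rel_err)])

    # toy example: 200 batches, 1000 mesh cells, deeper cells tallied less often
    rng = np.random.default_rng(0)
    tallies = rng.poisson(lam=np.linspace(50, 2, 1000), size=(200, 1000))
    rel_err, spread = global_uniformity_metric(tallies)
    print(f"mean relative error {rel_err.mean():.3e}, variance of errors {spread:.3e}")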
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques in the exploration of the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
Freeform surface measurement is a key basic technology for product quality control and reverse engineering in the aerospace field. Surface measurement based on multi-sensor fusion, such as a laser scanner combined with a contact probe, can exploit the complementary characteristics of different sensors and has attracted wide attention in industry and academia. The number and distribution of measurement points significantly affect the efficiency of multi-sensor fusion and the accuracy of surface reconstruction. An aggregation-value-based active sampling method for multi-sensor freeform surface measurement and reconstruction is proposed. Probe measurement points are generated actively through game-theoretic iteration, and the importance of each measurement point on the freeform surface to multi-sensor fusion is defined as the Shapley value of that point. The problem of obtaining the optimal measurement point set is thus transformed into the problem of maximizing the aggregation value of the sample set. Simulation and real measurement results verify that the proposed method can significantly reduce the required probe sample size while ensuring the measurement accuracy of multi-sensor fusion.
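To make the Shapley-value idea above concrete, here is a generic Monte Carlo Shapley estimator for candidate measurement points. The set-value function fusion_gain (how much a subset of probe points improves reconstruction) is a stand-in assumption, not the paper's actual aggregation value.

    import numpy as np

    def shapley_values(candidates, value_fn, n_permutations=200, seed=0):
        """Monte Carlo estimate of each candidate point's Shapley value under
        the set-value function value_fn(subset) -> float."""
        rng = np.random.default_rng(seed)
        n = len(candidates)
        phi = np.zeros(n)
        for _ in range(n_permutations):
            order = rng.permutation(n)
            chosen, prev_value = [], value_fn([])
            for idx in order:
                chosen.append(candidates[idx])
                new_value = value_fn(chosen)
                phi[idx] += new_value - prev_value   # marginal contribution
                prev_value = new_value
        return phi / n_permutations

    # toy stand-in: value of a subset = coverage of the surface parameter domain
    def fusion_gain(points):
        if not points:
            return 0.0
        xs = np.sort(np.array(points))
        gaps = np.diff(np.concatenate(([0.0], xs, [1.0])))
        return 1.0 - gaps.max()          # better spread -> larger value

    candidate_points = list(np.linspace(0.05, 0.95, 10))
    print(shapley_values(candidate_points, fusion_gain))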
Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling. Specifically, training methods focus on how to overcome the training difficulties caused by the special PDE residual loss of PINNs, while sampling methods are concerned with the location and distribution of the sampling points at which the PDE residual loss is evaluated. However, a common shortcoming of these original PINNs is that they neglect temporal information during training or sampling when dealing with an important PDE category, namely time-dependent PDEs, where temporal information plays a key role. One method, Causal PINN, considers temporal causality at the training level but not at the sampling level. Incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both the residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical way of incorporating temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn–Hilliard, Korteweg–de Vries, Allen–Cahn and wave equations, show that the proposed sampling method improves performance. We demonstrate that this relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with other methods, especially when the number of points is limited.
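A minimal sketch of how a causality-weighted sampling ratio could be formed from per-sub-domain residual losses. The exponential causal weight w_i = exp(-eps * sum of earlier losses) is borrowed from the Causal PINN literature, and the proportional allocation rule is an assumption for illustration, not the paper's exact formula.

    import numpy as np

    def causal_sampling_allocation(residual_losses, total_points, eps=1.0):
        """residual_losses: mean PDE residual loss in each temporal sub-domain,
        ordered in time.  Returns the number of new collocation points to draw
        in each sub-domain, weighting residuals by a causal factor that decays
        while earlier sub-domains are still poorly trained."""
        L = np.asarray(residual_losses, dtype=float)
        cum_prev = np.concatenate(([0.0], np.cumsum(L)[:-1]))
        causal_weight = np.exp(-eps * cum_prev)      # earlier errors gate later domains
        score = causal_weight * L
        ratio = score / score.sum()
        return np.round(ratio * total_points).astype(int), ratio

    losses = [0.5, 0.8, 1.5, 3.0, 6.0]               # toy residuals per time slab
    points, ratio = causal_sampling_allocation(losses, total_points=1000, eps=0.5)
    print("sampling ratio:", np.round(ratio, 3), "points:", points)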
Dispersion fuels, known for their excellent safety performance, are widely used in advanced reactors such as high-temperature gas-cooled reactors. Compared with deterministic methods, the Monte Carlo method has more advantages in the geometric modeling of stochastic media. Explicit modeling offers high computational accuracy but at a high computational cost. The chord length sampling (CLS) method can improve computational efficiency by sampling the chord length during neutron transport using the matrix chord length's probability density function. This study shows that the excluded-volume effect in realistic stochastic media can introduce certain deviations into CLS. A chord length correction approach is proposed to obtain the chord length correction factor by developing the Particle code based on equivalent transmission probability. Numerical analysis against reference solutions from explicit modeling in the RMC code demonstrates that CLS with the proposed correction provides good accuracy for addressing the excluded-volume effect in realistic infinite stochastic media.
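For readers unfamiliar with chord length sampling, the sketch below shows the basic idea of drawing matrix chord lengths from an exponential distribution whose mean is rescaled by a correction factor. The exponential form and the correction value are illustrative assumptions, not the correction derived in the paper.

    import numpy as np

    def sample_matrix_chords(mean_chord, n, correction=1.0, seed=0):
        """Draw matrix chord lengths for a neutron travelling through a
        stochastic medium.  CLS classically assumes an exponential chord length
        distribution; `correction` rescales the mean chord to mimic an
        excluded-volume adjustment (illustrative only)."""
        rng = np.random.default_rng(seed)
        return rng.exponential(scale=correction * mean_chord, size=n)

    # toy comparison: uncorrected vs. corrected mean chord in the matrix
    chords_raw = sample_matrix_chords(mean_chord=0.2, n=100_000)
    chords_cor = sample_matrix_chords(mean_chord=0.2, n=100_000, correction=1.1)
    print(f"uncorrected mean chord {chords_raw.mean():.4f} cm, "
          f"corrected mean chord {chords_cor.mean():.4f} cm")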
Wideband spectrum sensing with a high-speed analog-to-digital converter (ADC) presents a challenge for practical systems. The Nyquist folding receiver (NYFR) is a promising scheme for achieving cost-effective real-time spectrum sensing, but it is limited by the complexity of processing the modulated outputs. To address this, a multipath NYFR architecture with stepped sampling rates for the different paths is proposed. The numbers of digital channels for the different paths are designed based on the Chinese remainder theorem (CRT). The detectable frequency range is then divided into multiple frequency grids, and the Nyquist zone (NZ) of the input can be obtained by sensing these grids. Thus, high-precision parameter estimation is performed by exploiting the NYFR characteristics. Compared with existing methods, the proposed scheme overcomes the challenges of NZ estimation, information loss, heavy computation, low accuracy, and high false-alarm probability. Comparative simulation experiments verify the effectiveness of the proposed architecture.
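The Chinese-remainder idea behind resolving the Nyquist zone can be illustrated with a toy search: the true frequency leaves different aliases under near co-prime sampling rates, and only the correct frequency (hence NZ) is consistent with all of them. The rates and grid below are made-up values, not the paper's design.

    import numpy as np

    def alias(f, fs):
        """Frequency observed after folding into [0, fs/2) (real sampling)."""
        f_mod = f % fs
        return fs - f_mod if f_mod > fs / 2 else f_mod

    def resolve_frequency(aliases, rates, f_max, step=0.01, tol=1e-6):
        """Brute-force CRT-style search: return candidate frequencies in
        [0, f_max] whose aliases match the observations under every rate."""
        candidates = np.arange(0.0, f_max, step)
        return [f for f in candidates
                if all(abs(alias(f, fs) - a) < tol for fs, a in zip(rates, aliases))]

    rates = [7.0, 11.0, 13.0]          # co-prime sample rates (arbitrary units)
    f_true = 47.36
    obs = [alias(f_true, fs) for fs in rates]
    print(resolve_frequency(obs, rates, f_max=100.0))   # should recover ~47.36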
Uniform linear array (ULA) radars are widely used in the collision-avoidance radar systems of small unmanned aerial vehicles (UAVs). In practice, the multi-target direction-of-arrival (DOA) estimation performance of a ULA degrades significantly owing to the limited number of physical elements. To improve the underdetermined DOA estimation performance of a ULA radar mounted on a small UAV platform, we propose a nonuniform linear motion sampling underdetermined DOA estimation method. Using the motion of the UAV platform, the echo signal is sampled at different positions. Then, following the concept of the difference co-array, a virtual ULA with more array elements and a larger aperture is synthesized to increase the degrees of freedom (DOFs). Through position analysis of the original and motion arrays, we propose a nonuniform linear motion sampling scheme based on the ULA for determining the optimal DOFs. Without increasing the aperture of the physical array, the proposed method obtains a high DOF with fewer sampling runs and greatly improves the underdetermined DOA estimation performance of the ULA. Numerical simulations verify the superior performance of the proposed method.
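The degrees-of-freedom gain from motion sampling can be illustrated by counting the unique lags in the difference co-array of the physical positions plus their motion-shifted copies. The element spacing and shift values below are arbitrary choices for illustration, not the optimal scheme derived in the paper.

    import numpy as np

    def difference_coarray(positions):
        """Return the sorted unique lags {p_i - p_j} of a set of sensor positions
        (in units of half-wavelength).  The number of unique lags bounds the DOFs
        available to co-array DOA methods."""
        p = np.asarray(positions)
        lags = (p[:, None] - p[None, :]).ravel()
        return np.unique(np.round(lags, 6))

    physical = np.arange(4)                       # 4-element ULA at 0,1,2,3
    shifts = [0.0, 4.5, 9.0]                      # assumed platform displacements
    synthetic = np.concatenate([physical + s for s in shifts])

    print("physical co-array lags:", len(difference_coarray(physical)))
    print("synthetic co-array lags:", len(difference_coarray(synthetic)))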
Frequency sampling is one of the popular methods for FIR digital filter design. In the frequency sampling method, the values of the transition-band samples, which are usually obtained by consulting a table, must be chosen so that the attenuation within the stopband is maximal. However, values obtained from such tables are not guaranteed to be optimal. Evolutionary programming (EP), a multi-agent stochastic optimization technique, can lead to globally optimal solutions for complex problems. In this paper a new application of EP to the frequency sampling method is introduced. Two examples of lowpass and bandpass FIR filters are presented, and the steps of the EP realization and experimental results are given. Experimental results show that the transition-band sample values obtained by EP are optimal and that the performance of the filter is improved.
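A toy version of the idea: optimize a single transition-band sample of a frequency-sampling lowpass design with a simple evolutionary-programming loop (Gaussian mutation plus selection), scoring candidates by stopband attenuation. The filter length, band edges, and EP settings are arbitrary illustrative choices, not the paper's examples.

    import numpy as np

    def freq_sampling_lowpass(N, pb, t):
        """Type-I linear-phase FIR (N odd) by frequency sampling: unit samples
        up to bin pb, a single transition sample t, zeros in the stopband."""
        H = np.zeros(N)
        H[0] = 1.0
        for k in range(1, pb + 1):
            H[k] = H[N - k] = 1.0
        H[pb + 1] = H[N - pb - 1] = t
        k = np.arange(N)
        Hk = H * np.exp(-1j * np.pi * k * (N - 1) / N)   # linear-phase samples
        return np.real(np.fft.ifft(Hk))

    def stopband_attenuation(h, w_stop):
        w = np.linspace(w_stop, np.pi, 512)
        resp = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
        return -20.0 * np.log10(np.max(np.abs(resp)))

    # simple evolutionary programming over the transition sample value
    rng = np.random.default_rng(1)
    N, pb = 33, 6
    w_stop = 2 * np.pi * (pb + 2) / N
    pop = rng.uniform(0.0, 1.0, size=20)
    for generation in range(60):
        children = np.clip(pop + rng.normal(0.0, 0.05, size=pop.size), 0.0, 1.0)
        union = np.concatenate([pop, children])
        fitness = np.array([stopband_attenuation(freq_sampling_lowpass(N, pb, t), w_stop)
                            for t in union])
        pop = union[np.argsort(fitness)[-20:]]          # keep the fittest
    best_t = pop[-1]
    print(f"best transition sample {best_t:.4f}, attenuation "
          f"{stopband_attenuation(freq_sampling_lowpass(N, pb, best_t), w_stop):.1f} dB")

The same loop extends to two or more transition samples by mutating a vector instead of a scalar.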
It is essential to investigate the light field camera parameters for accurate flame temperature measurement because the sampling characteristics of the flame radiation vary with them. In this study, novel indices of the light field camera were proposed to investigate the directional and spatial sampling characteristics of the flame radiation. The effects of light field camera parameters, such as the focal length and magnification of the main lens and of the microlens, were investigated. It was observed that the sampling characteristics of the flame vary with the parameters of the light field camera. Optimized parameters of the light field camera were then proposed for flame radiation sampling. A sampling angle 23 times larger is achieved with the optimized parameters than with the commercial light field camera parameters. A non-negative least squares (NNLS) algorithm was used to reconstruct the flame temperature, and the reconstruction accuracy was evaluated with the optimized parameters. The results suggest that the optimized parameters provide higher reconstruction accuracy for axisymmetric and non-symmetric flame conditions than the commercial light field camera.
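The reconstruction step mentioned above is a standard non-negative least squares problem. A minimal sketch with scipy follows; the forward matrix here is random, standing in for the real light-field radiation sampling model.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    # A maps the non-negative emission/temperature field to pixel measurements;
    # here it is a random stand-in for the light-field sampling matrix.
    A = rng.random((200, 50))
    x_true = np.abs(rng.normal(size=50))
    b = A @ x_true + 0.01 * rng.normal(size=200)        # noisy measurements

    x_hat, residual_norm = nnls(A, b)                   # min ||Ax - b|| s.t. x >= 0
    print(f"relative error {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")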
Ultra-wideband (UWB) signals are suitable for localization, since their high time resolution can provide precise time-of-arrival (TOA) estimation. However, one major challenge in UWB signal processing is the requirement of a high sampling rate, which leads to complicated signal processing and expensive hardware. In this paper, we present a novel UWB signal sampling method called UWB signal sampling via temporal sparsity (USSTS). Its sampling rate is much lower than the Nyquist rate. Moreover, it is implemented in one step and no extra processing unit is needed. Simulation results show that USSTS cannot recover the signal precisely, but for localization, the accuracy of TOA estimation is the same as that of traditional methods. Therefore, USSTS gives a novel and effective solution for the use of UWB signals in localization.
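A generic compressive-sampling illustration of the underlying idea (sparse pulse train, sub-Nyquist random measurements, support recovery, TOA from the first recovered tap). This is a textbook orthogonal matching pursuit sketch, not the USSTS scheme itself, and the dimensions are arbitrary.

    import numpy as np

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residual = y - sub @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(3)
    n, m, k = 512, 64, 3                     # Nyquist-rate length, measurements, pulses
    x = np.zeros(n)
    toa_index = 137
    x[[toa_index, 260, 401]] = [1.0, 0.6, 0.4]   # sparse UWB multipath profile
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # sub-Nyquist random measurement operator
    y = Phi @ x
    x_hat = omp(Phi, y, k)
    print("estimated TOA index:", int(np.flatnonzero(x_hat)[0]), "true:", toa_index)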
This article proposes a statistical method for constructing reliability sampling plans under Type I censoring for items whose failure times follow normal or lognormal distributions. The quality statistic is a method-of-moments estimator of a monotonic function of the unreliability. An approach for choosing the truncation time is recommended. The sample size and acceptability constant are approximately determined by using the Cornish-Fisher expansion for quantiles of the distribution. Simulation results show that the proposed method is feasible.
Terpenes, aldehydes, ketones, benzene, and toluene are important volatile organic compounds (VOCs) emitted from wood composites. A VOC sampling apparatus for wood composites was designed and manufactured by Northeast Forestry University in China. The concentration of VOCs emitted from wood-based materials, such as flooring, panel walls, finishing, and furniture, can be sampled in small stainless steel chambers. A protocol was also developed in this study to sample and measure new and representative specimens. Preliminary research showed that the equipment has good stability and that the types and amounts of different components can be detected with it. The apparatus is practicable.
To solve the cross-channel signal problem caused by the uniform channelized wideband digital receiver when processing wideband signals, and the problem that the sensitivity of the system decreases greatly as the bandwidth of the wideband digital receiver increases, both of which degrade wideband radar signal detection performance, a new wideband digital receiver based on the modulated wideband converter (MWC) discrete compressed sampling structure and an energy detection method based on the new receiver are proposed. First, the proposed receiver uses periodic pseudo-random sequences to mix wideband signals with the baseband and the other sub-bands. The mixed signals are then low-pass filtered and downsampled to obtain baseband compressed sampling data, which increases the sensitivity of the system. Meanwhile, the cross-channel signals all appear in every sub-band, so the cross-channel signal problem can be solved easily by processing the baseband compressed sampling data. Second, we establish the signal detection model and formulate the criterion of the energy detection method. We use the baseband compressed sampling data directly to carry out signal detection without signal reconstruction, which decreases the complexity of the algorithm and reduces the computational burden. Finally, simulation experiments demonstrate the effectiveness of the proposed receiver and show that the proposed signal detection method is effective at low signal-to-noise ratio (SNR) compared with conventional energy detection, and that the probability of detection increases significantly as the SNR increases.
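A minimal sketch of energy detection on a block of (compressed) samples against a constant-false-alarm threshold. The chi-square threshold assumes i.i.d. Gaussian noise of known variance, a textbook simplification rather than the paper's exact detection criterion.

    import numpy as np
    from scipy.stats import chi2

    def energy_detector(samples, noise_var, p_fa):
        """Return (decision, statistic, threshold).  Under H0 the normalized
        energy of N i.i.d. Gaussian samples is chi-square with N dof."""
        N = samples.size
        statistic = np.sum(samples ** 2) / noise_var
        threshold = chi2.ppf(1.0 - p_fa, df=N)
        return statistic > threshold, statistic, threshold

    rng = np.random.default_rng(7)
    noise_var, N = 1.0, 256
    noise_only = rng.normal(0.0, np.sqrt(noise_var), N)
    signal_plus_noise = noise_only + 1.0 * np.sin(2 * np.pi * 0.07 * np.arange(N))

    for name, x in [("H0 (noise)", noise_only), ("H1 (signal)", signal_plus_noise)]:
        decision, stat, thr = energy_detector(x, noise_var, p_fa=1e-3)
        print(f"{name}: statistic={stat:.1f}, threshold={thr:.1f}, detect={decision}")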
Sampling is a bridge between continuous-time and discrete-time signals, which is important to digital signal processing. The fractional Fourier transform (FrFT), which serves as a generalization of the FT, can characterize signals in multiple fractional Fourier domains, and therefore can provide new perspectives for signal sampling and reconstruction. In this paper, we review recent developments of the sampling theorem associated with the FrFT, including signal reconstruction and fractional spectral analysis for uniform sampling, nonuniform sampling due to various factors, and sub-Nyquist sampling, where bandlimited signals in the fractional Fourier domain are mainly taken into consideration. Moreover, we provide several future research topics on the sampling theorem associated with the FrFT.
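For reference, one commonly cited form of the uniform sampling theorem in the fractional Fourier domain, stated here from standard FrFT sampling literature rather than taken from this review, so treat the exact normalization as an assumption: if the FrFT of order alpha of x(t) vanishes outside |u| <= u_c, then x(t) can be recovered from samples at spacing T via a chirp-modulated Shannon interpolation,

    x(t) = e^{-j\frac{t^{2}}{2}\cot\alpha}
           \sum_{n=-\infty}^{\infty} x(nT)\, e^{\,j\frac{(nT)^{2}}{2}\cot\alpha}\,
           \mathrm{sinc}\!\left(\frac{t-nT}{T}\right),
    \qquad T = \frac{\pi\sin\alpha}{u_{c}} .

For alpha = pi/2 this reduces to the classical Shannon sampling theorem.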
The main aim of this study was to evaluate fixed-area and distance sampling methods in the Zagros open forest area in western Iran. Basic forest management and planning require appropriate quantitative and qualitative information. The two sampling methods were compared on the basis of the actual means of the characteristics derived from a 100% survey. In total, 37 sampling plots were systematically installed on a 100 m × 100 m grid in the study area. Density, crown canopy, and basal area of the stands were measured. The 100% survey showed that the density of trees above 12.5 cm diameter at breast height was 68.04 stems ha^(-1), the basal area was 15.16 m^2 ha^(-1), and the crown canopy percentage was 35.71%. The values of the traits determined by the two sampling methods differed significantly (P = 0.05). When the time required for the methods was compared, transect sampling required less time than systematic-random sampling. Therefore, transect sampling was the more economical method for the Zagros open forests. The transect sampling method was statistically defensible and practical for quantifying the characteristics of the Zagros open forests.
One of the basic parameters in forest management planning is detailed knowledge of the growing stock, information collected by forest inventory. Sampling methods must be accurate, inexpensive, and easy to implement in the field. This study presents a new sampling method, called the branching transect, for use in the Iranian Zagros forests and similar forests. Features of the new method include greater accuracy, easy implementation in the field, simple statistical calculations, and low cost. In this method a main transect is used together with several sub-transects (side branches). The length of the main transect, the side branches, the number of trees measured in each side branch, and the number of sub-branches are adjustable based on the homogeneity, heterogeneity, and density of a forest. In this study, based on the density and heterogeneity of the forest area studied, 20-m transects with four and eight side branches were used. Sampling plots (transects) in four inventory networks (100 m × 100 m, 100 m × 150 m, 150 m × 150 m and 100 m × 200 m) were laid out in a GIS environment. The results of this sampling method were compared with the results of a total inventory (100% count) in terms of accuracy, precision (t-test), and inventory error percentage. Branching transect results were statistically similar to total inventory counts in all cases. The results show that this method of estimating density and canopy per hectare can be used in Zagros forests and similar forests.
Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and the effect of sample size on those estimates. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared with all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but it further decreased the RMSE when information on branch diameter was used in the design and estimation phases. Conclusions: Use of auxiliary information in the design or estimation phase reduces the RMSE produced by a sampling strategy. However, this is attained at the cost of sampling a larger amount of biomass. Based on our findings, we recommend sampling nine branches per tree as reasonably efficient while limiting the amount of fieldwork.
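The evaluation statistics listed in the Methods can be computed from replicated estimates against a known census value. A minimal sketch with made-up numbers follows; the percent-of-true-value definitions of relative bias and relative RMSE are the usual ones, assumed here rather than quoted from the paper.

    import numpy as np

    def sampling_strategy_stats(estimates, true_value):
        """Bias, relative bias (%), RMSE, and relative RMSE (%) of replicated
        crown-biomass estimates against the known (census) value."""
        est = np.asarray(estimates, dtype=float)
        bias = est.mean() - true_value
        rmse = np.sqrt(np.mean((est - true_value) ** 2))
        return {"bias": bias,
                "rel_bias_%": 100.0 * bias / true_value,
                "rmse": rmse,
                "rel_rmse_%": 100.0 * rmse / true_value}

    true_crown_biomass = 184.0          # kg, hypothetical census value for one tree
    replicate_estimates = [178.2, 191.5, 169.9, 188.7, 181.3]
    print(sampling_strategy_stats(replicate_estimates, true_crown_biomass))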
The unscented Kalman filter is a well-known method for nonlinear motion estimation and tracking. However, the standard unscented Kalman filter has inherent drawbacks, such as numerical instability and high computational cost in practical applications. In this paper, we present a novel sampling strong tracking nonlinear unscented Kalman filter, aiming to overcome the difficulty of nonlinear eye tracking. In the proposed filter, a simplified unscented transform sampling strategy with n+2 sigma points provides computational efficiency, and the suboptimal fading factor of strong tracking filtering is introduced to improve the robustness and accuracy of eye tracking. Compared with the related unscented Kalman filter for eye tracking, the proposed filter has potential advantages in robustness, convergence speed, and tracking accuracy. Experimental results show the validity of our method for eye tracking under realistic conditions.
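To illustrate the unscented-transform machinery referred to above, here is a sketch of the standard 2n+1 sigma-point transform propagating a mean and covariance through a nonlinear function. Note that the paper's simplified strategy uses only n+2 sigma points and adds a strong-tracking fading factor; neither is reproduced here.

    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
        """Standard 2n+1 sigma-point unscented transform of y = f(x)."""
        n = mean.size
        lam = alpha ** 2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)
        sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])   # (2n+1, n)
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
        y = np.array([f(s) for s in sigma])
        y_mean = wm @ y
        y_cov = (wc[:, None] * (y - y_mean)).T @ (y - y_mean)
        return y_mean, y_cov

    # toy nonlinear measurement: polar coordinates of a 2-D eye-position state
    f = lambda x: np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])
    m = np.array([1.0, 0.5])
    P = np.diag([0.01, 0.02])
    print(unscented_transform(m, P, f))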
In this work the authors develop the n-dimensional sinc function theory in the setting of several complex variables. In terms of the corresponding Paley-Wiener theorem, exact sinc interpolation and quadrature are established. Exponential convergence rates of the error estimates for band-limited functions in n-dimensional strips are obtained.
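A one-dimensional numerical illustration of the sinc (trapezoidal) quadrature rule and its rapid error decay as the step shrinks. The Gaussian test function and step sizes are arbitrary choices; the n-dimensional several-complex-variables theory of the paper is of course not reproduced by this toy.

    import numpy as np

    def sinc_quadrature(f, h, n_terms=200):
        """Approximate the integral of f over the real line by h * sum f(kh);
        for functions analytic in a strip the error decays exponentially in 1/h."""
        k = np.arange(-n_terms, n_terms + 1)
        return h * np.sum(f(k * h))

    f = lambda x: np.exp(-x ** 2)            # integral over the real line is sqrt(pi)
    exact = np.sqrt(np.pi)
    for h in (1.0, 0.75, 0.5):
        err = abs(sinc_quadrature(f, h) - exact)
        print(f"h = {h:4.2f}  quadrature error = {err:.2e}")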
Background: The local pivotal method (LPM), which utilizes auxiliary data in sample selection, has recently been proposed as a sampling method for national forest inventories (NFIs). Its performance compared to simple random sampling (SRS) and to LPM with geographical coordinates has produced promising results in simulation studies. In this simulation study we compared all of these sampling methods to systematic sampling. The LPM samples were selected solely using the coordinates (LPMxy) or, in addition to that, auxiliary remote-sensing-based forest variables (RS variables). We utilized field measurement data (NFI-field) and Multi-Source NFI (MS-NFI) maps as target data, and independent MS-NFI maps as auxiliary data. The designs were compared using relative efficiency (RE), the ratio of the mean squared error of the reference sampling design to that of the studied design. Applying a method in an NFI also requires a proven estimator for the variance. Therefore, three different variance estimators were evaluated against the empirical variance of replications: 1) an estimator corresponding to SRS; 2) a Grafström-Schelin estimator repurposed for LPM; and 3) a Matérn estimator applied in the Finnish NFI for the systematic sampling design. Results: LPMxy was nearly comparable with the systematic design for most target variables. The REs of the LPM designs utilizing auxiliary data, compared to the systematic design, varied between 0.74 and 1.18, depending on the target variable. The SRS estimator for variance was, as expected, the most biased and conservative estimator. Similarly, the Grafström-Schelin estimator gave overestimates in the case of LPMxy. When the RS variables were utilized as auxiliary data, the Grafström-Schelin estimates tended to underestimate the empirical variance. In systematic sampling, the Matérn and Grafström-Schelin estimators performed equally well for practical purposes. Conclusions: LPM optimized for a specific variable tended to be more efficient than systematic sampling, but all of the considered LPM designs were less efficient than the systematic sampling design for some target variables. The Grafström-Schelin estimator could be used as such with LPMxy, or instead of the Matérn estimator in systematic sampling. Further studies of the variance estimators are needed if other auxiliary variables are to be used in LPM.
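The relative-efficiency comparison described above reduces to a ratio of empirical mean squared errors over simulation replications. A minimal sketch with synthetic replications follows; the replicate estimates are made up, standing in for the NFI simulation output.

    import numpy as np

    def relative_efficiency(ref_estimates, design_estimates, true_value):
        """RE = MSE(reference design) / MSE(studied design); RE > 1 means the
        studied design is more efficient than the reference."""
        ref = np.asarray(ref_estimates, dtype=float)
        des = np.asarray(design_estimates, dtype=float)
        mse_ref = np.mean((ref - true_value) ** 2)
        mse_des = np.mean((des - true_value) ** 2)
        return mse_ref / mse_des

    rng = np.random.default_rng(42)
    true_mean_volume = 120.0                                        # m^3/ha, hypothetical
    systematic_reps = true_mean_volume + rng.normal(0, 4.0, 1000)   # reference design
    lpm_reps = true_mean_volume + rng.normal(0, 3.5, 1000)          # studied design
    print(f"RE = {relative_efficiency(systematic_reps, lpm_reps, true_mean_volume):.2f}")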