Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error be uniform over the entire space. This study proposes a grid-AIS method for the global variance reduction problem, based on the AIS method and implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-III international benchmark problem and a self-shielding calculation example. The results from the VENUS-III benchmark problem show that the grid-AIS method significantly reduced the variance of the statistical errors of the mesh grids, from 1.08×10^(-2) to 3.84×10^(-3), a 64.00% reduction. This demonstrates that the grid-AIS method is effective for global variance reduction problems. The results of the self-shielding calculation demonstrate that the grid-AIS method produces accurate results. Moreover, the grid-AIS method was approximately one order of magnitude more computationally efficient than the AIS method and approximately two orders of magnitude more efficient than the conventional Monte Carlo method.
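The weight-window game that underlies global variance reduction methods like the one above can be sketched minimally. This is a hedged illustration, not MCShield's grid-AIS implementation; the survival weight of twice the lower bound is an assumption:

```python
import random

def apply_weight_window(weight, w_low, w_up, rng=random):
    """Return the list of particle weights after the window game.

    Particles above the window are split; particles below it play
    Russian roulette. Survival weight 2*w_low is an assumed choice.
    """
    if weight > w_up:
        # Split into n copies so each copy falls inside the window.
        n = int(weight / w_up) + 1
        return [weight / n] * n
    if weight < w_low:
        w_survive = 2.0 * w_low          # assumed survival weight
        if rng.random() < weight / w_survive:
            return [w_survive]           # survives with boosted weight
        return []                        # killed by roulette
    return [weight]                      # inside the window: unchanged
```

Splitting conserves total weight exactly, while roulette conserves it only in expectation, which is why statistical errors flatten rather than vanish.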
This article studies the optimal proportional reinsurance and investment problem under a constant elasticity of variance (CEV) model. We assume that the insurer's surplus process follows a jump-diffusion process; the insurer can purchase proportional reinsurance from the reinsurer via the variance principle and invest in a risk-free asset and a risky asset whose price is modeled by a CEV model. The diffusion term can represent either the uncertainty associated with the insurer's surplus or additional small claims. The objective of the insurer is to maximize the expected exponential utility of terminal wealth. This optimization problem is studied in two cases depending on the interpretation of the diffusion term. In both cases, using techniques of stochastic control theory, closed-form expressions for the value functions and optimal strategies are obtained.
The Bayes decision rule of variance components for the one-way random effects model is derived, and empirical Bayes (EB) decision rules are constructed by the kernel estimation method. Under suitable conditions, it is shown that the proposed EB decision rules are asymptotically optimal with convergence rates near O(n^(-1/2)). Finally, an example concerning the main result is given.
In the present paper, a comparison of the performance of moving cutting data-rescaled range analysis (MC-R/S) and moving cutting data-rescaled variance analysis (MC-V/S) is made. The results clearly indicate that the operating efficiency of the MC-R/S algorithm is higher than that of the MC-V/S algorithm. In our numerical test, the computer time consumed by MC-V/S is approximately 25 times that consumed by MC-R/S for an identical window size in artificial data. Apart from the difference in operating efficiency, there are no significant differences in performance between MC-R/S and MC-V/S for abrupt dynamic change detection. MC-R/S and MC-V/S both display some degree of anti-noise ability. However, it is important to consider the influence of strong noise on the detection results of MC-R/S and MC-V/S in practical applications.
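The statistic at the core of MC-R/S is the classic rescaled range, evaluated repeatedly over moving windows. A minimal sketch (the paper's moving-cutting refinement is omitted):

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of a 1-D series: range of the cumulative
    mean-adjusted sum, divided by the standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    r = z.max() - z.min()
    s = x.std()
    return r / s if s > 0 else 0.0

def moving_rs(x, window):
    """R/S computed over every sliding window of the series."""
    return np.array([rescaled_range(x[i:i + window])
                     for i in range(len(x) - window + 1)])
```

Abrupt dynamic changes show up as jumps in the moving R/S sequence; the rescaled-variance (V/S) variant replaces the range with the variance of the adjusted sums, at higher cost.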
Data collected from truck payload management systems at various surface mines show that payload variance is significant and must be considered when analysing mine productivity, energy consumption, greenhouse gas emissions, and the associated costs. Payload variance causes significant differences in gross vehicle weights. Heavily loaded trucks travel up ramps more slowly than lightly loaded trucks, and faster trucks are slowed by the presence of slower ones, resulting in 'bunching', production losses, and increased fuel consumption. This paper simulates the truck bunching phenomenon in large surface mines to improve the efficiency of truck and shovel systems and minimise fuel consumption. The study concentrated on building a practical simulation model based on the discrete event method, which is the approach most commonly used in this field of research in other industries. The simulation model was validated with a dataset collected from a large surface mine in Arizona, USA. The results show good agreement between the actual and estimated values of the investigated parameters.
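The bunching effect itself needs no full discrete-event engine to illustrate: on a single-lane ramp with no overtaking, each truck's arrival time is capped by the truck ahead. A minimal sketch, not the paper's simulation model:

```python
def ramp_arrivals(start_times, travel_times):
    """Arrival times on a single-lane ramp where overtaking is
    impossible: each truck arrives no earlier than the truck ahead.
    Trucks are given in departure order."""
    arrivals = []
    for t0, tt in zip(start_times, travel_times):
        free = t0 + tt                   # arrival if unimpeded
        arrivals.append(max(free, arrivals[-1]) if arrivals else free)
    return arrivals
```

A fast truck starting behind a slow, heavily loaded one inherits the slow truck's arrival time, which is exactly the production loss that payload variance induces.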
Background: Large area forest inventories often use regular grids (with a single random start) of sample locations to ensure a uniform sampling intensity across the space of the surveyed populations. A design-unbiased estimator of variance does not exist for this design. Oftentimes, a quasi-default estimator applicable to simple random sampling (SRS) is used, even though it carries the likely risk of overestimating the variance by a practically important margin. To better exploit the precision of systematic sampling, we assess the performance of five estimators of variance, including the quasi-default. In this study, simulated systematic sampling was applied to artificial populations with contrasting covariance structures, with or without linear trends. We compared the results obtained with the SRS, Matérn's, successive difference replication (SDR), Ripley's, and D'Orazio's (DOR) variance estimators. Results: The variances obtained with the four alternatives to the SRS estimator were strongly correlated and, in all study settings, consistently closer to the target design variance than the SRS estimator, which always produced the greatest overestimation. In populations with near-zero spatial autocorrelation, all estimators performed equally well and delivered estimates close to the actual design variance. Conclusion: Without a linear trend, the SDR and DOR estimators were best, with variance estimates more narrowly distributed around the benchmark; yet in terms of the least average absolute deviation, Matérn's estimator held a narrow lead. With a strong or moderate linear trend, Matérn's estimator is the estimator of choice. In large populations with a low sampling intensity, the performance of the investigated estimators becomes more similar.
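Two of the compared ideas are easy to sketch: the quasi-default SRS estimator of the variance of the mean, and a plain successive-difference estimator of the kind that discounts a linear trend. This is a generic sketch, not the study's exact successive difference replication estimator:

```python
import numpy as np

def var_srs(y, N=None):
    """Quasi-default SRS estimator of the variance of the sample mean,
    optionally with the finite population correction."""
    y = np.asarray(y, dtype=float)
    n = y.size
    v = y.var(ddof=1) / n
    if N is not None:
        v *= 1.0 - n / N                 # finite population correction
    return v

def var_successive_difference(y):
    """First-difference estimator for an ordered systematic sample:
    half the mean squared successive difference, divided by n, so a
    linear trend contributes almost nothing."""
    y = np.asarray(y, dtype=float)
    n = y.size
    d = np.diff(y)
    return np.sum(d ** 2) / (2.0 * n * (n - 1))
```

On a pure linear trend the SRS estimator is dominated by the trend itself, while the difference estimator collapses to 1/(2n), which is the overestimation the Background paragraph warns about.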
A process parameter optimization method for mold wear during the die forging process is proposed, and a mold life prediction method based on polynomial fitting is presented, combining variance analysis of an orthogonal test with finite element simulation of the forging process. The process parameters with the greatest influence on mold wear during die forging, and the optimal combination of process parameters that minimizes the wear depth of the mold, are derived. Taking the hot die forging process as an example, a mold wear correction model for hot forging is derived based on the Archard wear model. A finite element simulation of the mold wear process in hot die forging, performed with the DEFORM software, is used to study the relationship between the wear depth of the mold working surface and the die forging process parameters. The optimized process parameters suitable for hot forging are derived by orthogonal experimental design and analysis of variance. The average wear of the mold during die forging is obtained by calculating the wear depth at multiple key nodes on the mold surface. Mold life for the entire production process is then predicted from the average mold wear depth by polynomial fitting.
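Ranking process parameters by influence in an orthogonal test is commonly done by range analysis: average the response at each level of each factor and take the spread. A generic sketch, not tied to the paper's wear model:

```python
import numpy as np

def range_analysis(levels, response):
    """Range analysis of an orthogonal test.

    levels:   (runs, factors) array of factor-level codes per run.
    response: measured response per run (e.g. wear depth).
    Returns the range (max level mean - min level mean) per factor;
    the largest range marks the most influential factor.
    """
    levels = np.asarray(levels)
    response = np.asarray(response, dtype=float)
    ranges = []
    for j in range(levels.shape[1]):
        means = [response[levels[:, j] == lv].mean()
                 for lv in np.unique(levels[:, j])]
        ranges.append(max(means) - min(means))
    return ranges
```

The level means that produce the smallest response also point to the wear-minimizing parameter combination, which ANOVA then tests for significance.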
A global variance reduction (GVR) method based on the SPN method is proposed. First, global multi-group cross-sections are obtained by Monte Carlo (MC) global homogenization. Then, the SP3 equation is solved to obtain the global flux distribution. Finally, the global weight windows are approximated from the global flux distribution, and the GVR simulation is performed. This GVR method is implemented as an automatic process in the RMC code. The SP3-coupled GVR method was tested on a modified version of the C5G7 benchmark with a thickened water shield. The results show that the SP3-coupled GVR method can improve the efficiency of MC criticality calculations.
The zero-energy variance principle can be exploited in variational quantum eigensolvers for solving general eigenstates, but its capacity for obtaining a specified eigenstate, such as the ground state, is limited because all eigenstates have zero energy variance. We propose a variance-based variational quantum eigensolver for solving the ground state by searching in an enlarged space of wavefunctions and Hamiltonians. With a mutual variance-Hamiltonian optimization procedure, the Hamiltonian is iteratively updated to guide the state toward the ground state of the target Hamiltonian by minimizing the energy variance in each iteration. We demonstrate the performance and properties of the algorithm with numerical simulations. Our work suggests an avenue for utilizing guided Hamiltonians in hybrid quantum-classical algorithms.
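The quantity minimized in such variance-based solvers is the energy variance ⟨H²⟩ − ⟨H⟩², which vanishes exactly on any eigenstate, which is precisely why variance alone cannot single out the ground state. A small numerical check:

```python
import numpy as np

def energy_variance(H, psi):
    """Energy variance <H^2> - <H>^2 of a (normalised) state."""
    psi = psi / np.linalg.norm(psi)
    h = psi.conj() @ H @ psi
    h2 = psi.conj() @ H @ H @ psi
    return (h2 - h ** 2).real
```

For H = diag(0, 1, 2), any basis state has zero variance, while an equal superposition of the two lowest eigenstates has variance 1/4; minimizing variance alone would accept any of the three eigenstates.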
Doubled haploid (DH) plants have been widely used for breeding and biological research in crops. Populus spp. have been used as model woody plant species for biological research. However, the induction of DH poplar plants is onerous, and limited biological or breeding work has been carried out on DH individuals or populations. In this study, we provide an effective protocol for poplar haploid induction based on an anther culture method. A total of 96 whole DH plant lines were obtained using an F1 hybrid of Populus simonii × P. nigra as a donor tree. The phenotypes of the DH population showed exceptionally high variance compared with those of half-sib progeny of the donor tree. Each DH line displayed distinct features compared with the other DH lines or the donor tree. Additionally, some excellent homozygous lines have the potential to become model plants in genetic and breeding studies.
To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, such distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on a variance reduction method: a lightweight, low-overhead, parallelized scheme for the model training process. To validate the claims, we have conducted several experiments on multiple classical datasets. The results show that our proposed computing framework steadily accelerates the training process of the solver in both local and distributed modes.
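A standard variance-reduction device for stochastic solvers is the SVRG-style corrected gradient: each inner step uses a stochastic gradient corrected by the snapshot's full gradient, so the gradient variance vanishes as the iterate approaches the snapshot. The sketch below shows the generic idea, not the paper's distributed L-BFGS framework:

```python
import numpy as np

def svrg(grad_i, w0, n, lr=0.1, epochs=20, seed=0):
    """SVRG for a scalar parameter: grad_i(w, i) is the gradient of
    the i-th loss term; the corrected gradient
    grad_i(w, i) - grad_i(w_snap, i) + full_grad is unbiased and its
    variance shrinks to zero near the snapshot."""
    rng = np.random.default_rng(seed)
    w = float(w0)
    for _ in range(epochs):
        w_snap = w
        full = sum(grad_i(w_snap, i) for i in range(n)) / n
        for _ in range(n):               # one pass per snapshot
            i = int(rng.integers(n))
            w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + full)
    return w
```

For a least-squares toy problem, minimizing the mean of (w - a_i)^2, the iterate converges to the mean of the a_i.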
This paper considers the estimation problem of a variance change point in a linear process. Consistency of a SCUSUM-type change-point estimator is proved and its rate of convergence is established. The mean-unknown case is also considered.
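A minimal CUSUM-type estimator of a variance change point locates the maximizer of the centred cumulative sum of squared observations. This sketch assumes a known zero mean; the mean-unknown case treated in the paper would centre the series first:

```python
import numpy as np

def variance_changepoint(x):
    """CUSUM-type estimator: the change point is where the absolute
    centred cumulative sum of squared observations is largest."""
    x2 = np.asarray(x, dtype=float) ** 2
    n = x2.size
    k = np.arange(1, n)                  # candidate change points
    stat = np.abs(np.cumsum(x2)[:-1] - k * x2.mean())
    return int(k[np.argmax(stat)])
```

For a series whose standard deviation jumps from 1 to 3 at observation 50, the statistic peaks exactly at the break.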
Background: The double sampling method known as "big BAF sampling" has been advocated as a way to reduce sampling effort while still maintaining a reasonably precise estimate of volume. A well-known method for variance determination, Bruce's method, is customarily used because the volume estimator takes the form of a product of random variables. However, the genesis of Bruce's method is not known to most foresters who use the method in practice. Methods: We establish that the Taylor series approximation known as the Delta method provides a plausible explanation for the origins of Bruce's method. Simulations were conducted on two different tree populations to ascertain the similarity of the Delta method to the exact variance of a product. Additionally, two alternative estimators for the variance of individual tree volume-basal area ratios, which are part of the estimation process, were compared within the overall variance estimation procedure. Results: The simulation results demonstrate that Bruce's method provides a robust method for estimating the variance of inventories conducted with the big BAF method. The simulations also demonstrate that the variance of the mean volume-basal area ratios can be computed using either the usual sample variance of the mean or the ratio variance estimators with equal accuracy, which had not been shown previously for big BAF sampling. Conclusions: A plausible explanation for the origins of Bruce's method has been set forth both historically and mathematically in the Delta method. In most settings, there is evidently no practical difference between applying the exact variance of a product or the Delta method; either can be used. A caution is articulated concerning the aggregation of tree-wise attributes into point-wise summaries in order to test the correlation between the two as a possible indicator of the need for further covariance augmentation.
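The Delta-method approximation for the variance of a product, and the exact variance for independent factors, differ only by a single second-order term, which explains why the two are practically interchangeable in most settings:

```python
def delta_var_product(mx, my, vx, vy, cov=0.0):
    """First-order Delta-method approximation:
    Var(XY) ~ my^2*vx + mx^2*vy + 2*mx*my*Cov(X, Y)."""
    return my ** 2 * vx + mx ** 2 * vy + 2.0 * mx * my * cov

def exact_var_product(mx, my, vx, vy):
    """Exact variance of a product of INDEPENDENT X and Y:
    the Delta value plus the cross term vx*vy."""
    return my ** 2 * vx + mx ** 2 * vy + vx * vy
```

With means 10 and 5 and variances 1 and 0.25, the Delta method gives 50 against an exact 50.25; the gap grows only when both coefficients of variation are large.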
It can be difficult to calculate some under-sampled regions in global Monte Carlo radiation transport calculations. The global variance reduction (GVR) method is a useful solution to the problem of reducing variance everywhere in phase space. In this research, a GVR procedure was developed and applied to the Chinese Fusion Engineering Testing Reactor (CFETR). A cylindrical CFETR model was used to compare various implementations of the GVR method and find the optimum. It was found that the flux-based GVR method ensured the most reliable statistical results, achieving an efficiency 7.43 times that of the analog case. A mesh tally of the scalar neutron flux was chosen for the GVR method to simulate global neutron transport in the CFETR model. Particles distributed uniformly throughout the system were sampled adequately through ten iterations of the GVR weight window. All voxels were scored, and the average relative error was 2.4% in the final step of the GVR iteration.
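Flux-based weight windows of the kind compared above set the window lower bound in each mesh voxel proportional to the local flux, so low-flux (under-sampled) voxels get low windows and split particles. A hedged sketch; the zero-flux fallback for unsampled voxels is an assumption, not the paper's prescription:

```python
import numpy as np

def flux_weight_windows(flux, w_ref=1.0):
    """Weight-window lower bounds from a mesh tally of scalar flux:
    w_low proportional to flux, normalised so the highest-flux voxel
    keeps weight w_ref. Zero-flux voxels inherit the smallest
    nonzero window (assumed fallback for unsampled voxels)."""
    flux = np.asarray(flux, dtype=float)
    w = w_ref * flux / flux.max()
    nz = w[w > 0]
    if nz.size:
        w[w == 0] = nz.min()
    return w
```

Iterating, as in the ten GVR iterations described above, refines the flux tally and hence the windows until the particle population is roughly uniform.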
Underwater acoustic signal processing is one of the research hotspots in underwater acoustics, and noise reduction of underwater acoustic signals is key to it. Owing to the complexity of the marine environment and the particularity of the underwater acoustic channel, noise reduction of underwater acoustic signals has always been a difficult challenge in the field. To address this, we propose a novel noise reduction technique for underwater acoustic signals based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), the minimum mean square variance criterion (MMSVC), and a least mean square adaptive filter (LMSAF). This technique, named CEEMDAN-MMSVC-LMSAF, has three main advantages: (i) as an improved algorithm of empirical mode decomposition (EMD) and ensemble EMD (EEMD), CEEMDAN better suppresses mode mixing and avoids the need to select the number of decompositions required in variational mode decomposition (VMD); (ii) MMSVC can identify noisy intrinsic mode functions (IMFs) and avoids selecting thresholds of different permutation entropies; (iii) for noise reduction of noisy IMFs, LMSAF avoids the selection of decomposition number and basis function required for wavelet noise reduction. First, CEEMDAN decomposes the original signal into IMFs, which can be divided into noisy IMFs and real IMFs. Then, MMSVC and LMSAF are used to identify noisy IMFs and remove noise components from them. Finally, the denoised noisy IMFs and the real IMFs are reconstructed to obtain the final denoised signal. Compared with other noise reduction techniques, the validity of CEEMDAN-MMSVC-LMSAF is demonstrated by the analysis of simulated signals and real underwater acoustic signals, showing a better noise reduction effect and practical application value. CEEMDAN-MMSVC-LMSAF also provides a reliable basis for the detection, feature extraction, classification, and recognition of underwater acoustic signals.
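The LMSAF component is a standard least-mean-square adaptive filter. A generic sketch; the filter length and step size here are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def lms_filter(d, x, taps=8, mu=0.01):
    """Least-mean-square adaptive filter: predict the desired signal
    d[n] from the last `taps` samples of the reference x[n]; the
    weights follow the negative gradient of the instantaneous
    squared error e[n]^2."""
    d = np.asarray(d, dtype=float)
    x = np.asarray(x, dtype=float)
    w = np.zeros(taps)
    y = np.zeros(d.size)
    for n in range(taps, d.size):
        u = x[n - taps:n][::-1]          # most recent sample first
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * u                  # LMS weight update
    return y
```

Applied to a noisy IMF, the filter output tracks the predictable (signal) component while the unpredictable noise remains in the error, which is the sense in which it denoises.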
This paper considers local median estimation in fixed design regression problems. The proposed method is employed to estimate the median function and the variance function of a heteroscedastic regression model. Strong convergence rates of the proposed estimators are obtained. Simulation results are given to show the performance of the proposed methods.
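A minimal sketch of local median estimation on a fixed design, with a robust MAD-based local scale standing in for the variance-function estimate (an illustrative assumption, not the paper's exact estimator):

```python
import numpy as np

def local_median(x_design, y, x0, h):
    """Local median estimate of the regression function at x0:
    the median of responses whose design points lie within h of x0."""
    y = np.asarray(y, dtype=float)
    mask = np.abs(np.asarray(x_design, dtype=float) - x0) <= h
    return np.median(y[mask])

def local_mad_scale(x_design, y, x0, h):
    """Robust local scale via the median absolute deviation,
    rescaled by the normal consistency constant 1.4826; usable as a
    local scale estimate in a heteroscedastic model."""
    y = np.asarray(y, dtype=float)
    mask = np.abs(np.asarray(x_design, dtype=float) - x0) <= h
    r = y[mask] - np.median(y[mask])
    return 1.4826 * np.median(np.abs(r))
```

Both statistics are medians of windowed data, which is what makes the approach robust to heavy-tailed errors in a heteroscedastic model.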
The solid fuel thorium molten salt reactor (TMSR-SF1) is a 10-MWth fluoride-cooled pebble bed reactor. As it is a new reactor concept, one of the major limiting factors on reactor lifetime is radiation-induced material damage, and the fast neutron flux (E > 0.1 MeV) can be used to assess possible radiation damage. Hence, a method for calculating the high-resolution fast neutron flux distribution of the full-scale TMSR-SF1 reactor is required. In this study, a two-step subsection approach based on MCNP5, involving a global variance reduction method referred to as forward-weighted consistent adjoint-driven importance sampling, was implemented to provide the fast neutron flux distribution throughout the TMSR-SF1 facility. In addition, instead of using the general source specification cards, the user-provided SOURCE subroutine in the MCNP5 source code was employed to implement a source biasing technique specialized for TMSR-SF1. In contrast to the one-step analog approach, the two-step subsection approach eliminates zero-scored mesh tally cells and obtains tally results with extremely uniform and low relative uncertainties. Furthermore, the maximum fast neutron fluxes of the main components in TMSR-SF1 are provided, which can be used for radiation damage assessment of the structural materials.
Passive neutron multiplicity counting is widely used as a nondestructive assay technique to quantify the mass of plutonium material. One goal of this technique is to achieve good precision in a short measurement time. In this paper, we describe a procedure to derive the mass assay variance for multiplicity counting based on the three-parameter model, and analytical equations are established using the measured neutron multiplicity distribution. Monte Carlo simulations are performed with these equations to evaluate precision versus plutonium mass under a fixed measurement time. Experimental data from seven weapons-grade plutonium samples are presented to test the expected performance. This variance analysis has been used for counter design and optimal gate-width setting at the Institute of Nuclear Physics and Chemistry.
In this paper, the different temporal characteristics of target and background pixels are used to detect dim moving targets in a slowly evolving complex background. A local and global variance filter on temporal profiles is presented that exploits these characteristics to eliminate the large variation of background temporal profiles. First, the temporal behaviors of different types of image pixels in practical infrared scenes are analyzed. Then, the new local and global variance filter is proposed. The baseline of the fluctuation level of background temporal profiles is obtained using the local and global variance filter, and the height of the target pulse signal is extracted by subtracting the baseline from the original temporal profile. Finally, a new target detection criterion is designed. The proposed method is applied to detect dim and small targets in practical infrared sequence images. The experimental results show that the proposed algorithm has good detection performance for dim moving small targets in complex backgrounds.
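One hedged reading of a local-and-global variance filter: compare each window's local variance against the profile's global variance to decide whether the window contains a transient target pulse, and build the baseline from window medians. This is an illustrative sketch of the idea, not the paper's exact filter:

```python
import numpy as np

def variance_filter_baseline(profile, window=15):
    """Baseline of a pixel's temporal profile: a moving local median,
    where windows whose local variance exceeds the global variance
    (suggesting a target pulse) fall back to the global median."""
    p = np.asarray(profile, dtype=float)
    g_var, g_med = p.var(), np.median(p)
    half = window // 2
    base = np.empty_like(p)
    for i in range(p.size):
        seg = p[max(0, i - half):i + half + 1]
        base[i] = g_med if seg.var() > g_var else np.median(seg)
    return base

def target_pulse(profile, window=15):
    """Pulse height: the original profile minus the baseline."""
    p = np.asarray(profile, dtype=float)
    return p - variance_filter_baseline(p, window)
```

On a flat profile with a short bright pulse, the baseline stays flat and the subtraction recovers the full pulse height, which is the quantity fed to the detection criterion.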
Funding (grid-AIS study): supported by the Platform Development Foundation of the China Institute for Radiation Protection (No. YP21030101), the National Natural Science Foundation of China (General Program, Nos. 12175114 and U2167209), the National Key R&D Program of China (No. 2021YFF0603600), and the Tsinghua University Initiative Scientific Research Program (No. 20211080081).
Funding (empirical Bayes variance-components study): partly supported by the NSFC (No. 19971085), the Doctoral Program Foundation of the Institute of Higher Education, and the Special Foundation of the Chinese Academy of Sciences.
Funding (MC-R/S vs. MC-V/S study): supported by the National Basic Research Program of China (Grant No. 2012CB955902) and the National Natural Science Foundation of China (Grant Nos. 41275074, 41475073, and 41175084).
Funding (truck bunching study): supported by CRC Mining and The University of Queensland.
Funding (mold wear study): supported in part by the National Natural Science Foundation of China (No. 51575008).
Funding (SP3-coupled GVR study): supported by the Shanghai Sailing Program, China (No. 21YF1421100), and the Startup Fund for Youngman Research at SJTU.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12005065) and the Guangdong Basic and Applied Basic Research Fund (Grant No. 2021A1515010317).
Abstract: The zero-energy variance principle can be exploited in variational quantum eigensolvers for solving general eigenstates, but its capacity for obtaining a specified eigenstate, such as the ground state, is limited because all eigenstates have zero energy variance. We propose a variance-based variational quantum eigensolver for solving the ground state by searching in an enlarged space of wavefunctions and Hamiltonians. With a mutual variance-Hamiltonian optimization procedure, the Hamiltonian is iteratively updated to guide the state toward the ground state of the target Hamiltonian by minimizing the energy variance in each iteration. We demonstrate the performance and properties of the algorithm with numerical simulations. Our work suggests an avenue for utilizing a guided Hamiltonian in hybrid quantum-classical algorithms.
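The quantity being minimized is the energy variance ⟨H²⟩ − ⟨H⟩², which vanishes exactly on any eigenstate, motivating the paper's point that variance alone cannot single out the ground state. A minimal numerical check on a single-qubit Hamiltonian (chosen arbitrarily here, not taken from the paper):

```python
import numpy as np

# Illustrative single-qubit Hamiltonian H = Z + 0.5 X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def energy_variance(H, psi):
    """Return <H^2> - <H>^2 for a normalized state psi."""
    e = np.vdot(psi, H @ psi).real
    e2 = np.vdot(psi, H @ H @ psi).real
    return e2 - e * e

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]                       # exact ground state: zero variance
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # a superposition: nonzero variance

var_ground = energy_variance(H, ground)
var_plus = energy_variance(H, plus)
```

Any excited eigenstate would give zero variance as well, which is why the method enlarges the search space and guides the Hamiltonian rather than minimizing variance alone.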
Funding: Supported by the National Key R&D Program of China (2021YFD2200203), the Heilongjiang Province Key R&D Program of China (GA21B010), the Heilongjiang Touyan Innovation Team Program (Tree Genetics and Breeding Innovation Team), and Heilongjiang Postdoctoral Financial Assistance (LBH-Z21097).
Abstract: Doubled haploid (DH) plants have been widely used for breeding and biological research in crops. Populus spp. have been used as model woody plant species for biological research. However, the induction of DH poplar plants is onerous, and limited biological or breeding work has been carried out on DH individuals or populations. In this study, we provide an effective protocol for poplar haploid induction based on an anther culture method. A total of 96 whole DH plant lines were obtained using an F1 hybrid of Populus simonii × P. nigra as the donor tree. The phenotypes of the DH population showed exceptionally high variance compared with those of half-sib progeny of the donor tree. Each DH line displayed distinct features compared with the other DH lines or the donor tree. Additionally, some excellent homozygous lines have the potential to serve as model plants in genetic and breeding studies.
Funding: Partly supported by the National Key Basic Research Program of China (2016YFB1000100) and partly supported by the National Natural Science Foundation of China (No. 61402490).
Abstract: To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on a variance reduction method: a lightweight, low-overhead, parallelized scheme for the model training process. To validate these claims, we conducted several experiments on multiple classical datasets. The results show that the proposed computing framework steadily accelerates the training process of the solver in both local and distributed modes.
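The variance reduction idea can be sketched in isolation with an SVRG-style update on a toy least-squares problem: each stochastic gradient is corrected by the same sample's gradient at a periodic snapshot plus the snapshot's full gradient, so the update noise shrinks as the iterate approaches the optimum. This is a generic sketch; the paper's coupling of variance reduction with L-BFGS and blockchain-based distribution is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
b = A @ np.ones(5) + 0.01 * rng.normal(size=200)  # true weights are all ones

def full_grad(w):
    """Full gradient of the mean-squared-error objective."""
    return A.T @ (A @ w - b) / len(b)

w = np.zeros(5)
step = 0.01
for epoch in range(30):
    w_snap = w.copy()
    g_snap = full_grad(w_snap)                   # full gradient at the snapshot
    for _ in range(len(b)):
        i = rng.integers(len(b))
        g_i = A[i] * (A[i] @ w - b[i])           # stochastic gradient at w
        g_i_snap = A[i] * (A[i] @ w_snap - b[i]) # same sample at the snapshot
        w = w - step * (g_i - g_i_snap + g_snap) # variance-reduced update
```

Because the correction term has zero mean, the update is an unbiased gradient estimate, yet its variance vanishes at the optimum, allowing a constant step size.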
Funding: Supported by the Foundation of the Education Department of Shaanxi Provincial Government (2010JK561), the Basic Research Foundation of Xi'an Polytechnic University (2010JC07), the Special Funds of the National Natural Science Foundation of China (11026135), and the Chinese Ministry of Education Funds for Young Scientists (10YJC910007).
Abstract: This paper considers the estimation of a variance change-point in a linear process. Consistency of an SCUSUM-type change-point estimator is proved and its rate of convergence is established. The case of an unknown mean is also considered.
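A cumulative-sum-of-squares contrast of the kind studied here can be sketched as follows. For simplicity, i.i.d. normal data with a known zero mean stand in for the linear process, and the weighting below is one common choice, not necessarily the paper's exact statistic.

```python
import numpy as np

# Synthetic series: variance jumps from 1 to 4 at observation 300
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])

n = len(x)
s2 = np.cumsum(x**2)          # S_k = sum of squares up to k
k = np.arange(1, n)

# Weighted contrast between pre- and post-k average squared values:
# (k(n-k)/n) * |S_k/k - (S_n - S_k)/(n-k)|
contrast = (k * (n - k) / n) * np.abs(s2[:-1] / k - (s2[-1] - s2[:-1]) / (n - k))

# Change-point estimate: the k maximizing the contrast
tau_hat = int(np.argmax(contrast)) + 1
```

Consistency means this maximizer concentrates around the true change-point as the sample grows; the paper establishes the corresponding rate for dependent (linear-process) data.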
Funding: Research Joint Venture Agreement 17-JV-11242306045, "Old Growth Forest Dynamics and Structure," between the USDA Forest Service and the University of New Hampshire. Additional support to MJD was provided by the USDA National Institute of Food and Agriculture McIntire-Stennis Project, Accession Number 1020142, "Forest Structure, Volume, and Biomass in the Northeastern United States." TBL: This work was supported by the USDA National Institute of Food and Agriculture, McIntire-Stennis project OKL02834, and the Division of Agricultural Sciences and Natural Resources at Oklahoma State University.
Abstract: Background: The double sampling method known as "big BAF sampling" has been advocated as a way to reduce sampling effort while still maintaining a reasonably precise estimate of volume. A well-known method for variance determination, Bruce's method, is customarily used because the volume estimator takes the form of a product of random variables. However, the genesis of Bruce's method is not known to most foresters who use it in practice. Methods: We establish that the Taylor series approximation known as the Delta method provides a plausible explanation for the origins of Bruce's method. Simulations were conducted on two different tree populations to ascertain the similarity of the Delta method to the exact variance of a product. Additionally, two alternative estimators for the variance of individual tree volume-to-basal-area ratios, which are part of the estimation process, were compared within the overall variance estimation procedure. Results: The simulation results demonstrate that Bruce's method provides a robust means of estimating the variance of inventories conducted with the big BAF method. The simulations also demonstrate that the variance of the mean volume-to-basal-area ratios can be computed using either the usual sample variance of the mean or the ratio variance estimator with equal accuracy, which had not previously been shown for big BAF sampling. Conclusions: A plausible explanation for the origins of Bruce's method has been set forth, both historically and mathematically, in the Delta method. In most settings there is evidently no practical difference between applying the exact variance of a product and the Delta method; either can be used. A caution is articulated concerning the aggregation of tree-wise attributes into point-wise summaries in order to test the correlation between the two as a possible indicator of the need for further covariance augmentation.
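The Delta-method step can be made concrete. For independent random variables X and Y, the first-order Taylor (Delta) approximation gives Var(XY) ≈ μ_Y²Var(X) + μ_X²Var(Y), while the exact variance of a product adds the cross term Var(X)Var(Y). The numbers below are illustrative stand-ins for the two estimated factors, not data from the study.

```python
# Means and standard deviations of two independent factors (hypothetical;
# think of a mean volume-to-basal-area ratio and a basal-area estimate)
mx, sx = 10.0, 1.0
my, sy = 25.0, 3.0

# Delta-method (first-order Taylor) variance of the product X*Y
var_delta = my**2 * sx**2 + mx**2 * sy**2

# Exact variance of a product of independent variables adds Var(X)*Var(Y)
var_exact = var_delta + sx**2 * sy**2

# The omitted cross term is tiny relative to the whole, which is why the
# two approaches are practically indistinguishable in most settings
rel_gap = (var_exact - var_delta) / var_exact
```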
Funding: Supported by the National Special Project for Magnetic Confined Nuclear Fusion Energy (Nos. 2013GB108004 and 2015GB108002) and the National Natural Science Foundation of China (No. 11175207).
Abstract: Some under-sampled regions can be difficult to calculate in global Monte Carlo radiation transport calculations. The global variance reduction (GVR) method is a useful solution to the problem of reducing variance everywhere in a phase space. In this research, a GVR procedure was developed and applied to the Chinese Fusion Engineering Testing Reactor (CFETR). A cylindrical CFETR model was used to compare various implementations of the GVR method and find the optimum. The flux-based GVR method was found to ensure more reliable statistical results, achieving an efficiency 7.43 times that of the analog case. A mesh tally of the scalar neutron flux was chosen for the GVR method to simulate global neutron transport in the CFETR model. Particles distributed uniformly in the system were sampled adequately through ten iterations of the GVR weight window. All voxels were scored, and the average relative error was 2.4% in the final step of the GVR iteration.
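Efficiency comparisons of this kind are conventionally made with the Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error and T the computing time; a factor-k reduction in R at equal run time yields a factor-k² gain in FOM. The error values below are illustrative, not the paper's:

```python
def fom(rel_err, cpu_time):
    """Monte Carlo figure of merit: FOM = 1 / (R^2 * T)."""
    return 1.0 / (rel_err**2 * cpu_time)

# Same run time; the GVR run reaches a smaller average relative error
fom_analog = fom(0.065, 3600.0)   # hypothetical analog run
fom_gvr = fom(0.024, 3600.0)      # hypothetical GVR run

gain = fom_gvr / fom_analog       # efficiency gain of GVR over analog
```

The FOM is roughly constant for a given method as a run proceeds (R² falls like 1/T), which is what makes it a fair basis for comparing variance-reduction schemes.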
Funding: The authors gratefully acknowledge the support of the National Natural Science Foundation of China (No. 11574250).
Abstract: Underwater acoustic signal processing is one of the research hotspots in underwater acoustics, and noise reduction of underwater acoustic signals is its key step. Owing to the complexity of the marine environment and the particularity of the underwater acoustic channel, noise reduction of underwater acoustic signals has always been a difficult challenge in the field. To address this, we propose a novel noise reduction technique for underwater acoustic signals based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), the minimum mean square variance criterion (MMSVC), and a least mean square adaptive filter (LMSAF). This technique, named CEEMDAN-MMSVC-LMSAF, has three main advantages: (i) as an improvement over empirical mode decomposition (EMD) and ensemble EMD (EEMD), CEEMDAN better suppresses mode mixing and avoids having to select the number of decompositions as in variational mode decomposition (VMD); (ii) MMSVC can identify noisy intrinsic mode functions (IMFs) and avoids selecting thresholds of different permutation entropies; (iii) for noise reduction of noisy IMFs, LMSAF avoids the selection of decomposition number and basis function required by wavelet noise reduction. First, CEEMDAN decomposes the original signal into IMFs, which can be divided into noisy IMFs and real IMFs. Then, MMSVC and LMSAF are used to identify the noisy IMFs and remove noise components from them. Finally, the denoised noisy IMFs and the real IMFs are reconstructed to obtain the final denoised signal. Compared with other noise reduction techniques, the validity of CEEMDAN-MMSVC-LMSAF is demonstrated by the analysis of simulated signals and real underwater acoustic signals: it achieves a better noise reduction effect and has practical application value. CEEMDAN-MMSVC-LMSAF also provides a reliable basis for the detection, feature extraction, classification, and recognition of underwater acoustic signals.
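Of the three stages, the LMS adaptive filter is the simplest to sketch. The toy below runs a one-step-ahead LMS predictor on a noisy sinusoid: the predictable narrowband component passes through, while white noise, being unpredictable, is suppressed. CEEMDAN and the MMSVC criterion are not reproduced here; the filter order, step size, and signals are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
clean = np.sin(2 * np.pi * 0.01 * np.arange(n))   # narrowband component
noisy = clean + 0.5 * rng.normal(size=n)          # plus white noise

order, mu = 8, 0.01          # filter length and LMS step size (assumed)
w = np.zeros(order)
out = np.zeros(n)
for i in range(order, n):
    x = noisy[i - order:i][::-1]   # most recent samples first
    out[i] = w @ x                 # one-step-ahead prediction
    e = noisy[i] - out[i]          # prediction error
    w += mu * e * x                # LMS weight update

# After convergence the prediction tracks the sinusoid, not the noise
mse_noisy = np.mean((noisy[500:] - clean[500:]) ** 2)
mse_out = np.mean((out[500:] - clean[500:]) ** 2)
```

In the full method this filtering is applied only to the IMFs that MMSVC flags as noisy, and the result is recombined with the untouched IMFs.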
Funding: The first author's research was supported by the National Natural Science Foundation of China (Grant Nos. 198310110 and 19871003) and partly by the Doctoral Foundation of China; the last three authors' research was supported by a gra
Abstract: This paper considers local median estimation in fixed-design regression problems. The proposed method is employed to estimate the median function and the variance function of a heteroscedastic regression model. Strong convergence rates of the proposed estimators are obtained. Simulation results are given to show the performance of the proposed methods.
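A sliding-window version of local median estimation conveys the idea on a fixed design with heteroscedastic noise. The bandwidth, model, and scale-recovery constant below are illustrative choices, not the paper's kernel-weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
n, h = 400, 15                      # sample size and window half-width
x = np.arange(n) / n                # fixed, equally spaced design points
sigma = 0.1 + 0.2 * x               # heteroscedastic noise level
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)

# Local median estimate of the median function m(x) = sin(2*pi*x)
m_hat = np.array([np.median(y[max(0, i - h):i + h + 1]) for i in range(n)])

# Residual-based local median estimate of the scale function sigma(x):
# for normal errors, median(|residual|) ~ 0.6745 * sigma
resid = np.abs(y - m_hat)
s_hat = np.array(
    [np.median(resid[max(0, i - h):i + h + 1]) for i in range(n)]
) / 0.6745
```

The appeal of the median over the mean here is robustness: a few outlying observations in a window move the local median far less than the local average.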
Funding: Supported by the Chinese TMSR Strategic Pioneer Science and Technology Project (No. XDA02010000) and the Frontier Science Key Program of the Chinese Academy of Sciences (No. QYZDY-SSW-JSC016).
Abstract: The solid-fuel thorium molten salt reactor (TMSR-SF1) is a 10-MWth fluoride-cooled pebble-bed reactor. For this new reactor concept, one of the major factors limiting reactor lifetime is radiation-induced material damage, and the fast neutron flux (E > 0.1 MeV) can be used to assess possible radiation damage. Hence, a method for calculating the high-resolution fast neutron flux distribution of the full-scale TMSR-SF1 reactor is required. In this study, a two-step subsection approach based on MCNP5, involving a global variance reduction method referred to as forward-weighted consistent adjoint-driven importance sampling, was implemented to provide the fast neutron flux distribution throughout the TMSR-SF1 facility. In addition, instead of using the general source specification cards, the user-provided SOURCE subroutine in the MCNP5 source code was employed to implement a source biasing technique specialized for TMSR-SF1. In contrast to the one-step analog approach, the two-step subsection approach eliminates zero-scored mesh tally cells and obtains tally results with extremely uniform and low relative uncertainties. Furthermore, the maximum fast neutron fluxes of the main components of TMSR-SF1 are provided, which can be used for radiation damage assessment of the structural materials.
Funding: Supported by the National Natural Science Foundation of China (No. 11375158) and the Science and Technology Development Foundation of CAEP (No. 2013B0103009).
Abstract: Passive neutron multiplicity counting is widely used as a nondestructive assay technique to quantify the mass of plutonium material. One goal of this technique is to achieve good precision in a short measurement time. In this paper, we describe a procedure for deriving the mass assay variance for multiplicity counting based on the three-parameter model, and analytical equations are established using the measured neutron multiplicity distribution. Monte Carlo simulations with these equations are performed to evaluate precision versus plutonium mass for a fixed measurement time. Experimental data from seven weapons-grade plutonium samples are presented to test the expected performance. This variance analysis has been used for counter design and optimal gate-width setting at the Institute of Nuclear Physics and Chemistry.
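Analyses of this kind build on the reduced factorial moments of the measured multiplicity distribution, from which singles, doubles, and triples rates are formed. The distribution below is a made-up placeholder, not measurement data:

```python
import numpy as np

# Hypothetical measured multiplicity distribution P(n) for n = 0..4
p = np.array([0.60, 0.25, 0.10, 0.04, 0.01])
n = np.arange(len(p))

# Reduced factorial moments: sum_n C(n, k) * P(n) for k = 1, 2, 3
m1 = np.sum(n * p)                             # first moment (mean count)
m2 = np.sum(n * (n - 1) * p) / 2.0             # second reduced factorial moment
m3 = np.sum(n * (n - 1) * (n - 2) * p) / 6.0   # third reduced factorial moment
```

In the three-parameter model these moments, together with detector parameters, determine the plutonium mass, multiplication, and (alpha,n) ratio; the variance of the mass estimate then follows from the counting statistics of the moments.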
Funding: National Natural Science Foundation of China (61774120).
Abstract: In this paper, the differing temporal characteristics of target and background pixels are used to detect dim moving targets in a slowly evolving complex background. A local and global variance filter on temporal profiles is presented that exploits the temporal characteristics of the target and background pixels to eliminate the large variation of background temporal profiles. First, the temporal behaviors of different types of image pixels in practical infrared scenes are analyzed. Then, the new local and global variance filter is proposed. The baseline fluctuation level of the background temporal profiles is obtained using the local and global variance filter, and the height of the target pulse signal is extracted by subtracting the baseline from the original temporal profiles. Finally, a new target detection criterion is designed. The proposed method is applied to detect dim, small targets in practical infrared sequence images. The experimental results show that the proposed algorithm has good detection performance for dim moving small targets in a complex background.
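The baseline-subtraction idea can be sketched on a single synthetic temporal profile. A simple moving-average baseline stands in for the paper's local and global variance filter, which is not reproduced here; the drift, noise level, and pulse are all assumed values.

```python
import numpy as np

# Synthetic temporal profile of one pixel: slow background drift plus noise,
# with a short target pulse passing through around frame 150
rng = np.random.default_rng(4)
t = np.arange(300)
profile = 0.005 * t + 0.2 * rng.normal(size=300)
profile[150:154] += 3.0

# Moving-average baseline approximates the slowly evolving background
w = 31
baseline = np.convolve(profile, np.ones(w) / w, mode="same")

# Subtracting the baseline leaves the target pulse standing above the noise
residual = profile - baseline
detected = int(np.argmax(residual))
```

A detection criterion would then threshold `residual` against the estimated fluctuation level, which is precisely the role the paper assigns to the local and global variance filter.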