Funding: Supported by the National Natural Science Foundation of China (62371049).
Abstract: In engineering applications, most traditional early warning radars estimate only one adaptive weight vector for adaptive interference suppression within a pulse repetition interval (PRI). Therefore, if the training samples used to calculate the weight vector do not contain the jamming, the jamming cannot be removed by adaptive spatial filtering. If the weight vector is instead updated continuously along the range dimension, the training data may contain target echo signals, resulting in a signal cancellation effect. To cope with training samples contaminated by the target signal, this paper proposes an iterative training sample selection method based on a non-homogeneous detector (NHD) for updating the weight vector over the entire range dimension. The principle is presented, and its validity is proven by simulation results.
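The abstract does not spell out the estimator, so the following sketch only illustrates the general idea under assumed specifics: a sample-matrix-inversion (SMI) beamformer whose training cells are first screened with the generalized inner product (GIP), a statistic commonly used as a non-homogeneous detector. Array size, data values, and function names are illustrative, not the paper's implementation.

```python
import numpy as np

def smi_weights(training, steering):
    """Sample-matrix-inversion (SMI) beamformer weights from training snapshots.
    training: (N, K) complex array, N array elements x K range-cell snapshots.
    steering: (N,) complex steering vector of the look direction."""
    N, K = training.shape
    R = training @ training.conj().T / K                    # sample covariance
    R += 1e-3 * np.trace(R).real / N * np.eye(N)            # diagonal loading
    Ri_s = np.linalg.solve(R, steering)
    return Ri_s / (steering.conj() @ Ri_s)                  # MVDR-type weights

def gip_screen(training, keep_ratio=0.8, iters=3):
    """Iteratively drop snapshots with the largest generalized inner product (GIP),
    a common non-homogeneity statistic, so contaminated cells stay out of R."""
    sel = np.arange(training.shape[1])
    for _ in range(iters):
        X = training[:, sel]
        R = X @ X.conj().T / X.shape[1]
        gip = np.einsum('ij,ji->i', X.conj().T, np.linalg.solve(R, X)).real
        order = np.argsort(gip)                             # small GIP = homogeneous
        sel = sel[order[: int(keep_ratio * len(sel))]]
    return sel

# toy usage: 8-element array, 200 range cells, strong jammer plus noise
rng = np.random.default_rng(0)
N, K = 8, 200
steer = np.exp(1j * np.pi * np.arange(N) * np.sin(0.0))     # broadside look direction
jam_vec = np.exp(1j * np.pi * np.arange(N) * np.sin(0.6))[:, None]
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
jam = 5 * jam_vec * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
data = noise + jam
w = smi_weights(data[:, gip_screen(data)], steer)
print(abs(w.conj() @ jam_vec.ravel()))                      # jammer-direction response
```

In a range-updating scheme such as the one described, a screen of this kind would be re-run for each new training window.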
Funding: The authors thank the National Natural Science Foundation of China (Grant No. 61973033) and the Preliminary Research of Equipment program (Grant No. 9090102010305) for funding the experiments.
Abstract: The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the correction fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is too high to conduct many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the positions of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on the Kullback-Leibler divergence, which to some extent prevents the real data from being "submerged" by the simulation data and achieves a fused Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which is more conducive to reflecting the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
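As a rough illustration of the ingredients named in the abstract (Bootstrap expansion of scarce test data, a divergence-based weight on the simulation prior, and Bayesian fusion of the dispersion center), here is a minimal sketch. The data values, the exponential KL weighting, and the conjugate-normal fusion are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_normal(mu1, var1, mu2, var2):
    """KL divergence KL(N(mu1, var1) || N(mu2, var2)) for univariate normals."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# hypothetical longitudinal impact deviations (metres): few real shots, many sim runs
real = np.array([12.0, -5.0, 3.0, 20.0, -8.0])
sim = rng.normal(4.0, 15.0, size=2000)

# 1) Bootstrap: resample the real shots to stabilise the mean and its variance
boot = np.array([rng.choice(real, size=real.size, replace=True).mean()
                 for _ in range(5000)])
mu_real, var_real = boot.mean(), boot.var(ddof=1)

# 2) Weight the simulation prior down as its distribution diverges from the real data
kl = kl_normal(real.mean(), real.var(ddof=1), sim.mean(), sim.var(ddof=1))
w_prior = float(np.exp(-kl))                       # in (0, 1]; 1 means "trust the sim"

# 3) Precision-weighted (conjugate-normal style) fusion of prior and bootstrap estimate
mu_sim, var_sim = sim.mean(), sim.var(ddof=1) / sim.size
precision = w_prior / var_sim + 1.0 / var_real
mu_post = (w_prior * mu_sim / var_sim + mu_real / var_real) / precision
print(f"fused dispersion-centre estimate: {mu_post:.2f} m (prior weight {w_prior:.2f})")
```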
Funding: Supported by the National Natural Science Foundation of China (42474239, 41204128), the China National Space Administration (Pre-research Project on Civil Aerospace Technologies No. D010301), and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA17010303).
Abstract: One of the detection objectives of the Chinese Asteroid Exploration mission is to investigate the space environment near the Main-belt Comet (MBC, Active Asteroid) 311P/PANSTARRS. This paper outlines the scientific objectives, measurement targets, and measurement requirements for the proposed Gas and Ion Analyzer (GIA). The GIA is designed for in-situ mass spectrometry of neutral gases and low-energy ions, such as hydrogen, carbon, and oxygen, in the vicinity of 311P. Ion sampling techniques are essential for the GIA's Time-of-Flight (TOF) mass analysis capabilities. In this paper, we present an enhanced ion sampling technique through the development of an ion attraction model and an ion source model. The ion attraction model demonstrates that adjusting the attraction grid voltage can enhance the detection efficiency of low-energy ions and mitigate the repulsion of ions during sampling caused by positive charging of the satellite's surface. The ion source model simulates the processes of gas ionization and ion multiplication. Simulation results indicate that the GIA can achieve a lower pressure limit below 10^-13 Pa and a dynamic range exceeding 10^9. These performances ensure the generation of ions with a stable and consistent current, which is crucial for high-resolution, broad-dynamic-range mass spectrometric analysis. Preliminary testing experiments have verified the GIA's capability to detect gas compositions such as H2O and N2. In-situ measurements near 311P using the GIA are expected to contribute significantly to our understanding of asteroid activity mechanisms, the evolution of the atmospheric and ionized environments of main-belt comets, their interactions with the solar wind, and the origin of Earth's water.
Abstract: Abundant test data are required in the assessment of weapon performance. When weapon test data are insufficient, Bayesian analysis under small-sample circumstances should be considered, and the test data should be supplemented by simulations. Several Bayesian approaches are discussed and some of their limitations are identified. An improvement is put forward after the limitations of the available Bayesian approaches are analyzed, and the improved approach is applied to the assessment of the performance of a new weapon.
Abstract: Gold has been present throughout the history of mankind, used to make jewelry and coins, and has recently acquired several industrial uses. The price of gold on the international market has risen significantly, by more than 100% in the last five years. As a result, deposits with low gold content, gold with complex mineral associations, or gold in very fine particle sizes have become exploitable again, allowing new projects and the expansion of existing ones. However, maximum process efficiency requires a deep knowledge of the characteristics of these minerals and their behavior in beneficiation processes. Consequently, an accurate routine for mineralogical and technological characterization is essential.
Funding: Supported by the National Natural Science Foundation of China (70961005), the 211 Project for the Postgraduate Student Program of Inner Mongolia University, and the National Natural Science Foundation of Inner Mongolia (2010Zd34, 2011MS1002).
Abstract: The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluating approach. Nonetheless, numerous deficiencies remain to be addressed, including the α-cut approaches, the types of fuzzy numbers, and the ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated with the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on a sample decision making unit (FSDEA). Five evaluation approaches and the related algorithm and ranking methods are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches.
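For readers unfamiliar with DEA, the crisp input-oriented CCR model that the fuzzy variants generalize can be written as a small linear program. The sketch below, with hypothetical input/output data, is the classical crisp model rather than the paper's FSDEA formulation.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).
    X: (n_dmu, n_in) inputs, Y: (n_dmu, n_out) outputs.
    maximise u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0."""
    n, m = X.shape
    _, s = Y.shape
    c = np.concatenate([-Y[o], np.zeros(m)])            # variables z = [u, v]
    A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]    # v.x_o = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# toy data: 5 DMUs, 2 inputs, 1 output (hypothetical numbers)
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```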
Abstract: Although real-world experience shows that preparing one image per person is more convenient, most appearance-based face recognition methods degrade or fail to work if there is only a single sample per person (SSPP). In this work, we introduce a novel supervised learning method called supervised locality preserving multimanifold (SLPMM) for face recognition with SSPP. In SLPMM, two graphs, a within-manifold graph and a between-manifold graph, are constructed to represent the information inside every manifold and the information among different manifolds, respectively. By adopting the locality preserving projection (LPP) concept, SLPMM simultaneously maximizes the between-manifold scatter and minimizes the within-manifold scatter, which leads to a discriminant space. Experimental results on two widely used face databases, FERET and the AR face database, are presented to prove the efficacy of the proposed approach.
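The following is a simplified, assumption-laden sketch of the locality-preserving discriminant idea behind such methods: heat-kernel affinity graphs are built within and between classes, and a projection that maximizes the between-graph scatter against the within-graph scatter is found by a generalized eigenproblem. It is not the published SLPMM algorithm, and all data and names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def lp_discriminant_projection(X, y, dim=2):
    """Projection that maximises between-class (between-manifold) locality-preserving
    scatter while minimising within-class scatter.
    X: (n_samples, n_features), y: (n_samples,) integer class labels."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    heat = np.exp(-d2 / np.median(d2[d2 > 0]))          # heat-kernel affinities
    off_diag = ~np.eye(n, dtype=bool)
    same = (y[:, None] == y[None, :]) & off_diag
    Ww = np.where(same, heat, 0.0)                      # within-manifold graph
    Wb = np.where(~same & off_diag, heat, 0.0)          # between-manifold graph
    Lw = np.diag(Ww.sum(1)) - Ww                        # graph Laplacians
    Lb = np.diag(Wb.sum(1)) - Wb
    Sw = X.T @ Lw @ X + 1e-6 * np.eye(X.shape[1])       # regularised within scatter
    Sb = X.T @ Lb @ X
    _, vecs = eigh(Sb, Sw)                              # generalised eigenproblem
    return vecs[:, ::-1][:, :dim]                       # largest-ratio directions

# toy usage: two "persons" with a few 10-dimensional feature vectors each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 10)), rng.normal(3, 1, (5, 10))])
y = np.array([0] * 5 + [1] * 5)
print((X @ lp_discriminant_projection(X, y)).shape)     # (10, 2) embedded samples
```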
Abstract: Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, traditional probability theory can hardly be used to process the samples and assess the degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of the grey distance information quantity of a sample and the average grey distance information quantity of the samples, with their characteristics and algorithms, are introduced. Related problems, including the algorithm for the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weighting algorithm and to compare different samples. Several examples are given to demonstrate the application of the proposed approach. The examples show that the proposed approach, which makes no demand on the probability distribution of small samples, is feasible and effective.
Funding: Projects (51774196, 51804181, 51874190) supported by the National Natural Science Foundation of China; Project (2019GSF111020) supported by the Key R&D Program of Shandong Province, China; Project (201908370205) supported by the China Scholarship Council.
Abstract: According to thermodynamics, the deformation and failure of coal and rock are energy-driven processes. It is therefore important to study the strain energy characteristics of coal-rock composite samples to better understand the deformation and failure mechanism of coal-rock composite structures. In this research, laboratory tests and numerical simulations of uniaxial compression of coal-rock composite samples were carried out at five different loading rates. The test results show that the strength, deformation, acoustic emission (AE) and energy evolution of the coal-rock composite samples all exhibit obvious loading rate effects. The uniaxial compressive strength and elastic modulus increase with increasing loading rate. With increasing loading rate, the AE energy at the peak strength of the coal-rock composites first increases, then decreases, and then increases again, while the cumulative AE count first decreases and then increases. The total absorbed energy and dissipated energy of the coal-rock composite samples show non-linear increasing trends, while the released elastic strain energy first increases and then decreases. The laboratory experiments on coal-rock composite samples were also simulated numerically using the particle flow code (PFC). With careful selection of suitable material constitutive models for coal and rock, and accurate estimation and calibration of the mechanical parameters of the coal-rock composite sample, good agreement between the laboratory and numerical results was obtained. This research can provide references for understanding the failure of underground coal-rock composite structures by using energy-related measuring methods.
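The energy quantities mentioned (total absorbed, elastic, and dissipated strain energy) are conventionally obtained from the stress-strain record. The short sketch below shows that bookkeeping under the usual simplifying assumption that the elastic part can be estimated from the unloading (or initial) modulus, with entirely hypothetical data.

```python
import numpy as np

def strain_energy_split(strain, stress, unload_modulus):
    """Split the absorbed strain-energy density into elastic and dissipated parts.
    strain: dimensionless loading-path strains; stress: MPa; unload_modulus: MPa
    (often approximated by the initial elastic modulus when no unloading data exist).
    Returns (total, elastic, dissipated) in MJ/m^3 (numerically equal to MPa)."""
    # trapezoidal area under the loading curve = total absorbed energy density
    u_total = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
    u_elastic = stress[-1] ** 2 / (2.0 * unload_modulus)   # recoverable part
    return u_total, u_elastic, u_total - u_elastic

# hypothetical stress-strain record up to the current load point
strain = np.linspace(0.0, 0.01, 100)
stress = 3000.0 * strain * (1.0 - 30.0 * strain)           # toy hardening curve, MPa
u_t, u_e, u_d = strain_energy_split(strain, stress, unload_modulus=3000.0)
print(f"total {u_t:.4f}, elastic {u_e:.4f}, dissipated {u_d:.4f} MJ/m^3")
```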
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61671470), the National Key Research and Development Program of China (Grant No. 2016YFC0802904), and the Postdoctoral Science Foundation Funded Project of China (Grant No. 2017M623423).
Abstract: Object detection models based on convolutional neural networks (CNN) have achieved state-of-the-art performance by relying heavily on large-scale training samples. Such samples are insufficient in specific applications, such as the detection of military objects, where a large number of samples is hard to obtain. To solve this problem, this paper proposes the use of Gabor-CNN for object detection based on a small number of samples. First, a feature extraction convolution kernel library composed of multi-shape Gabor and color Gabor kernels is constructed, and the optimal Gabor convolution kernel group is obtained by training and screening; convolving it with the input image yields strongly auxiliary object feature information. Then, the k-means clustering algorithm is adopted to construct several anchor boxes of different sizes, which improves the quality of the region proposals; we call this region proposal process the Gabor-assisted Region Proposal Network (Gabor-assisted RPN). Finally, the Deeply-Utilized Feature Pyramid Network (DU-FPN) method is proposed to strengthen the feature expression of objects in the image: a bottom-up and a top-down feature pyramid are constructed in ResNet-50, and object feature information is deeply utilized through the transverse connection and integration of features at various scales. Experimental results show that the proposed method achieves better accuracy and recall than state-of-the-art comparison models on data sets with small sample sizes, and thus has strong application prospects.
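The anchor-box step is the most self-contained part of the pipeline. A common way to implement it (used by YOLO-family detectors and presumably close in spirit to what the paper describes) is k-means on ground-truth box sizes with 1 - IoU as the distance. The sketch below uses hypothetical box data.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between boxes and centroids given only (width, height), both anchored at
    the origin -- the distance used for anchor clustering in YOLO-style detectors."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    """Cluster ground-truth box sizes with distance = 1 - IoU to get anchor boxes."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)   # nearest = max IoU
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]       # sort anchors by area

# hypothetical ground-truth (width, height) pairs in pixels
rng = np.random.default_rng(1)
boxes = np.vstack([rng.normal([30, 60], 5, (50, 2)), rng.normal([120, 80], 10, (50, 2))])
print(kmeans_anchors(boxes, k=3))
```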
Funding: Supported by the National Natural Science Foundation of China (61863034).
Abstract: The identification of nonlinear systems with multiple sampling rates is a difficult task. The motivation of our paper is to study the parameter estimation problem of Hammerstein systems with dead-zone characteristics using dual-rate sampled data. First, the auxiliary model identification principle is used to estimate the unmeasurable variables, and a recursive estimation algorithm is proposed to identify the parameters of the static nonlinear model with the dead-zone function and the parameters of the dynamic linear system model. Then, the convergence of the proposed identification algorithm is analyzed using the martingale convergence theorem. It is proved theoretically that the estimated parameters converge to the true values under the condition of persistent excitation. Finally, the validity of the proposed algorithm is demonstrated through the identification of a dual-rate sampled nonlinear system.
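The paper's auxiliary-model, dual-rate algorithm is not reproduced here. As a much-simplified sketch of the underlying estimation machinery, the code below identifies the linear block of a single-rate Hammerstein system by recursive least squares, assuming the dead-zone nonlinearity and its intermediate output are known (the very assumptions the auxiliary model is designed to remove). All parameter values are hypothetical.

```python
import numpy as np

def dead_zone(u, d_neg=-0.2, d_pos=0.3):
    """Dead-zone nonlinearity: zero output inside [d_neg, d_pos], shifted linear outside."""
    return np.where(u > d_pos, u - d_pos, np.where(u < d_neg, u - d_neg, 0.0))

# true linear block: y(t) = a*y(t-1) + b*v(t-1) + noise, with v = dead_zone(u)
a_true, b_true = 0.7, 1.5
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)
v = dead_zone(u)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = a_true * y[t - 1] + b_true * v[t - 1] + 0.01 * rng.standard_normal()

# recursive least squares on theta = [a, b] with regressor phi = [y(t-1), v(t-1)]
theta = np.zeros(2)
P = 1e4 * np.eye(2)
for t in range(1, 500):
    phi = np.array([y[t - 1], v[t - 1]])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y[t] - phi @ theta)
    P = P - np.outer(K, phi @ P)
print(theta)   # should approach [0.7, 1.5]
```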
Funding: Supported by the National Natural Science Foundation of China (61863034).
Abstract: Based on the multi-model principle, fuzzy identification for nonlinear systems with multirate sampled data is studied. First, the nonlinear system with multirate sampled data is represented as a nonlinear weighted combination of several linear models at multiple local operating points. On this basis, the fuzzy model of the multirate sampled nonlinear system is built. The premise structure of the fuzzy model is determined by fuzzy competitive learning, and the conclusion parameters of the fuzzy model are estimated by a stochastic gradient descent algorithm. The convergence of the proposed identification algorithm is established using the martingale theorem and related lemmas. A fuzzy model of the pH neutralization process of acid-base titration for hair quality detection is constructed to demonstrate the effectiveness of the proposed method.
Funding: Projects (51108197, 51205215) supported by the National Natural Science Foundation of China; Projects (2011J05135, 2011J01318) supported by the Natural Science Foundation of Fujian Province, China; Project (11QZR08) supported by the Scientific Research Foundation of the Overseas Chinese Affairs Office of the State Council, China; Project (10BS213) supported by the Scientific Research Foundation for Advanced Talents, Huaqiao University, China.
Abstract: Matrix effects can significantly hamper the accuracy and precision of the analytical results for perfluorinated acids (PFAs) in environmental solid samples. Several methods, such as standard addition, isotopically labeled internal standards, clean-up of SPE (solid phase extraction) eluents with a dispersive graphitized carbon sorbent, and substitution of the electrospray ionization (ESI) source with an atmospheric pressure photoionization (APPI) source, were evaluated for eliminating matrix effects in the quantitative analysis of PFAs in solid samples. The results indicate that matrix effects can be effectively eliminated by standard addition, but the instrumental analysis time is multiplied. Isotopically labeled internal standards can effectively negate the matrix effects of PFAs with the same perfluorocarbon chain length, but are not valid for the other analytes. Although APPI can eliminate matrix effects for all analytes, it is only suitable for the analysis of samples with high pollution levels. Clean-up of SPE eluents with a dispersive graphitized carbon sorbent not only effectively negates matrix effects but also avoids frequent cleaning of the ESI source, which is otherwise needed to maintain instrumental sensitivity. Therefore, the best method for eliminating matrix effects is the use of a dispersive graphitized carbon sorbent for clean-up of the SPE eluate.
Funding: Project (20956001) supported by the National Natural Science Foundation of China; Project (CX2011B083) supported by the Hunan Provincial Innovation Foundation for Postgraduates, China; Project (K1104026-11) supported by the Changsha Science and Technology Bureau, China.
Abstract: A novel cloud-point extraction (CPE) was successfully used for the preconcentration of bisphenol A (BPA) from aqueous solutions. The majority of the BPA is extracted into the surfactant-rich phase. The parameters affecting the CPE, such as the concentrations of surfactant and electrolyte, the equilibration temperature and time, and the pH of the sample solution, were investigated. The samples were analyzed by high-performance liquid chromatography with ultraviolet detection. Under the optimized conditions, preconcentration of a 10 mL sample gives a preconcentration factor of 11. The limit of detection (LOD) and limit of quantification (LOQ) are 0.1 μg/L and 0.33 μg/L, respectively. The linear range of the proposed method is 0.2-20 μg/L with correlation coefficients greater than 0.9987, and the spiking recoveries are 97.96%-100.42%. Interference factors were tested, and the extraction mechanism was also investigated. Thus, the developed CPE has proven to be an efficient, green, rapid, and inexpensive approach for the extraction and preconcentration of BPA from water samples.
Abstract: An improved methylene blue method for the determination of sulfide is developed. It has been adapted to the direct determination of sulfide by both a common spectrophotometric method and a total differential spectrophotometric method. In the common spectrophotometric method, the calibration curve is A = 1.69ρ + 0.006 with a correlation coefficient of 0.9994; the apparent molar absorptivity is 5.42×10^4 L·mol^-1·cm^-1, and the calibration curve is linear when ρ is in the range of 0-0.9 mg·L^-1. In the total differential spectrophotometric method, the calibration curve is A = 9.25ρ + 0.004 with a correlation coefficient of 0.9996; the apparent molar absorptivity is 2.96×10^5 L·mol^-1·cm^-1, and the calibration curve is linear when ρ is in the range of 0-0.10 mg·L^-1. The sensitivity of this method is increased significantly compared with the former methylene blue method, and the reaction is also faster. The limit of detection is 1.0 ng·mL^-1 for both the common and the total differential spectrophotometric methods. Ten replicate analyses of a sample solution containing 100 ng·mL^-1 sulfide give a relative standard deviation of 1.8%. The effects of various cations and anions on the determination of sulfide are studied, and procedures for the removal of interference are described. The method has been applied to the determination of sulfide in environmental samples with satisfactory results.
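As a worked example of using the reported calibration, the snippet below inverts the common-spectrophotometry curve A = 1.69ρ + 0.006 to recover a sulfide concentration from a measured absorbance; the absorbance value is hypothetical.

```python
# Invert the reported linear calibration curve A = 1.69*rho + 0.006 (rho in mg/L).
def sulfide_conc(absorbance, slope=1.69, intercept=0.006):
    """Sulfide concentration (mg/L) from absorbance via the calibration curve."""
    return (absorbance - intercept) / slope

A_measured = 0.500                                  # hypothetical sample absorbance
rho = sulfide_conc(A_measured)
print(f"sulfide concentration = {rho:.3f} mg/L")    # ~0.292 mg/L, within the 0-0.9 mg/L range
```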
Funding: The China National Key Research and Development Program (Grant No. 2016YFC0802904), the National Natural Science Foundation of China (Grant No. 61671470), and the 62nd batch of funded projects of the China Postdoctoral Science Foundation (Grant No. 2017M623423) provided funds for conducting the experiments.
Abstract: Traditional object detectors based on deep learning rely on plenty of labeled samples, which are expensive to obtain. Few-shot object detection (FSOD) attempts to solve this problem by learning to detect objects from a few labeled samples, but performance is often unsatisfactory due to the scarcity of samples. We believe that the main reasons restricting the performance of few-shot detectors are that (1) positive samples are scarce, and (2) the quality of positive samples is low. Therefore, we put forward a novel few-shot object detector based on YOLOv4 that improves both the quantity and the quality of positive samples. First, we design a hybrid multivariate positive sample augmentation (HMPSA) module to amplify the quantity of positive samples and increase positive sample diversity while suppressing negative samples. Then, we design a selective non-local fusion attention (SNFA) module to help the detector better learn the target features and improve the feature quality of positive samples. Finally, we optimize the loss function to make it more suitable for the task of FSOD. Experimental results on PASCAL VOC and MS COCO demonstrate that our few-shot object detector achieves performance competitive with other state-of-the-art detectors.
Abstract: To guarantee mesh quality in a virtual surgery system, a fast surface mesh reconstruction algorithm based on the Loose r-sample theory is proposed. A point set satisfying the Loose r-sample sampling theorem is recorded to describe the contour of the object. The point set is triangulated with a constrained Delaunay method, the vertices and Delaunay cells are labeled, and a new mesh is reconstructed. Experimental results show that the algorithm guarantees the quality of the generated mesh and reduces the simulation complexity.
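A loose illustration of the reconstruction idea follows, with hypothetical 2-D data and scipy's unconstrained Delaunay triangulation standing in for the constrained Delaunay step described above: triangulate the sampled contour points, then keep only the cells consistent with the sampling density.

```python
import numpy as np
from scipy.spatial import Delaunay

# Sample points along a circle as a stand-in for a contour point set that satisfies
# a loose r-sample density condition (the paper works on surface meshes and uses a
# *constrained* Delaunay triangulation; this is only a 2-D illustration).
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((200, 2))

tri = Delaunay(pts)                        # unconstrained Delaunay triangulation
# Keep only boundary-hugging triangles by discarding those whose circumradius is
# large relative to the sampling density (a crude crust-style filter).
a, b, c = (pts[tri.simplices[:, i]] for i in range(3))
ab = np.linalg.norm(a - b, axis=1)
bc = np.linalg.norm(b - c, axis=1)
ca = np.linalg.norm(c - a, axis=1)
s = (ab + bc + ca) / 2.0
area = np.sqrt(np.maximum(s * (s - ab) * (s - bc) * (s - ca), 1e-12))  # Heron's formula
circumradius = ab * bc * ca / (4.0 * area)
kept = tri.simplices[circumradius < 0.1]
print(f"kept {len(kept)} of {len(tri.simplices)} triangles")
```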